Showing 1 - 10 of 10 matches in All Departments

Quasidifferentiability and Related Topics (Hardcover, 2000 ed.)
Vladimir F. Dem'yanov, Alexander M. Rubinov
R4,351. Ships in 12 - 17 working days.

From the table of contents:

2 Radiant sets 236
3 Co-radiant sets 239
4 Radiative and co-radiative sets 241
5 Radiant sets with Lipschitz continuous Minkowski gauges 245
6 Star-shaped sets and their kernels 249
7 Separation 251
8 Abstract convex star-shaped sets 255
References 260

11 DIFFERENCES OF CONVEX COMPACTA AND METRIC SPACES OF CONVEX COMPACTA WITH APPLICATIONS: A SURVEY (A. M. Rubinov, A. A. Vladimirov) 263
1 Introduction 264
2 Preliminaries 264
3 Differences of convex compact sets: general approach 266
4 Metric projections and corresponding differences (one-dimensional case) 267
5 The *-difference 269
6 The Demyanov difference 271
7 Geometric and inductive definitions of the D-difference 273
8 Applications to DC and quasidifferentiable functions 276
9 Differences of pairs of set-valued mappings with applications to quasidifferentiability 278
10 Applications to approximate subdifferentials 280
11 Applications to the approximation of linear set-valued mappings 281
12 The Demyanov metric 282
13 The Bartels-Pallaschke metric 284
14 Hierarchy of the three norms on Qn 285
15 Derivatives 287
16 Distances from convex polyhedra and convergence of convex polyhedra 289
17 Normality of convex sets 290
18 D-regular sets 291
19 Variable D-regular sets 292
20 Optimization 293
References 294

12 CONVEX APPROXIMATORS.

Optimization and Related Topics (Hardcover, 2001 ed.)
Alexander M. Rubinov, Barney M. Glover
R4,365. Ships in 12 - 17 working days.

This volume contains, in part, a selection of papers presented at the sixth Australian Optimization Day Miniconference (Ballarat, 16 July 1999), and the Special Sessions on Nonlinear Dynamics and Optimization and Operations Research - Methods and Applications, which were held in Melbourne, July 11-15, 1999 as part of the Joint Meeting of the American Mathematical Society and the Australian Mathematical Society. The editors have strived to present both contributed papers and survey-style papers as a more interesting mix for readers. Some participants from the meetings mentioned above have responded to this approach by preparing survey and 'semi-survey' papers, based on presented lectures. Contributed papers, which contain new and interesting results, are also included. The fields of the presented papers are very large, as demonstrated by the following selection of key words from selected papers in this volume:
* optimal control, stochastic optimal control, MATLAB, economic models, implicit constraints, Bellman principle, Markov process, decision-making under uncertainty, risk aversion, dynamic programming, optimal value function.
* emergent computation, complexity, traveling salesman problem, signal estimation, neural networks, time congestion, teletraffic.
* gap functions, nonsmooth variational inequalities, derivative-free algorithm, Newton's method.
* auxiliary function, generalized penalty function, modified Lagrange function.
* convexity, quasiconvexity, abstract convexity.

Lagrange-type Functions in Constrained Non-Convex Optimization (Hardcover, 2003 ed.)
Alexander M. Rubinov, Xiao-qi Yang
R2,961. Ships in 10 - 15 working days.

Lagrange and penalty function methods provide a powerful approach, both as a theoretical tool and a computational vehicle, for the study of constrained optimization problems. However, for a nonconvex constrained optimization problem, the classical Lagrange primal-dual method may fail to find a minimum, as a zero duality gap is not always guaranteed. A large penalty parameter is, in general, required for classical quadratic penalty functions in order that minima of penalty problems are a good approximation to those of the original constrained optimization problems. It is well known that penalty functions with too large parameters cause an obstacle for numerical implementation. Thus the question arises how to generalize classical Lagrange and penalty functions in order to obtain an appropriate scheme for reducing constrained optimization problems to unconstrained ones that will be suitable for sufficiently broad classes of optimization problems from both the theoretical and computational viewpoints. Some approaches for such a scheme are studied in this book. One of them is as follows: an unconstrained problem is constructed, where the objective function is a convolution of the objective and constraint functions of the original problem. While a linear convolution leads to a classical Lagrange function, different kinds of nonlinear convolutions lead to interesting generalizations. We shall call functions that appear as a convolution of the objective function and the constraint functions Lagrange-type functions.
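The convolution idea in the blurb above can be sketched on a toy one-dimensional problem. This is only an illustration of the general notion, not the book's own construction; the example problem, the multiplier and penalty values, and the crude grid-search minimizer below are all invented for the sketch.

```python
# Toy problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
# The true constrained minimizer is x* = 1.

def f(x):
    return x * x          # objective

def g(x):
    return 1.0 - x        # constraint; feasible points satisfy g(x) <= 0

def lagrange(x, lam):
    # Linear convolution of objective and constraint: the classical Lagrange function.
    return f(x) + lam * g(x)

def quadratic_penalty(x, rho):
    # One nonlinear convolution: the classical quadratic penalty function.
    return f(x) + rho * max(0.0, g(x)) ** 2

def minimize_1d(phi, lo=-5.0, hi=5.0, steps=200001):
    # Crude grid search; enough for a one-dimensional illustration.
    xs = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(xs, key=phi)

# With the right multiplier (lam = 2) the Lagrange function is minimized exactly
# at x* = 1; the penalty minimizer only approaches x* as rho grows large,
# which is the numerical difficulty the blurb mentions.
x_lag = minimize_1d(lambda x: lagrange(x, 2.0))
x_pen = minimize_1d(lambda x: quadratic_penalty(x, 1000.0))
```

For this instance, the penalty minimizer sits at 1000/1001, slightly infeasible, illustrating why ever larger penalty parameters are needed for a good approximation.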

Abstract Convexity and Global Optimization (Hardcover, 2000 ed.)
Alexander M. Rubinov
R4,373 (was R5,222; save R849, 16%). Ships in 12 - 17 working days.

Special tools are required for examining and solving optimization problems. The main tools in the study of local optimization are classical calculus and its modern generalizations, which form nonsmooth analysis. The gradient and various kinds of generalized derivatives allow us to accomplish a local approximation of a given function in a neighbourhood of a given point. This kind of approximation is very useful in the study of local extrema. However, local approximation alone cannot help to solve many problems of global optimization, so there is a clear need to develop special global tools for solving these problems. The simplest and most well-known area of global and simultaneously local optimization is convex programming. The fundamental tool in the study of convex optimization problems is the subgradient, which actually plays both a local and global role. First, a subgradient of a convex function f at a point x carries out a local approximation of f in a neighbourhood of x. Second, the subgradient permits the construction of an affine function, which does not exceed f over the entire space and coincides with f at x. This affine function h is called a support function. Since f(y) ≥ h(y) for all y, the second role is global. In contrast to a local approximation, the function h will be called a global affine support.
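The global role of the subgradient described above can be checked numerically on a simple convex function. The example below, with f(x) = |x| and the helper `affine_support`, is illustrative and not taken from the book; |x| is chosen because it is convex but not differentiable at 0, where every g in [-1, 1] is a subgradient.

```python
# f is convex but not differentiable at 0; any g in [-1, 1] is a subgradient there.
def f(x):
    return abs(x)

def affine_support(x0, g):
    # h(y) = f(x0) + g*(y - x0): coincides with f at x0 and, when g is a
    # subgradient, never exceeds f anywhere (the global affine support).
    return lambda y: f(x0) + g * (y - x0)

h = affine_support(0.0, 0.5)   # g = 0.5 is one valid subgradient at x = 0

# Check both roles on a sample of points: h stays below f globally,
# and h touches f at the base point.
ys = [i / 10.0 for i in range(-30, 31)]
global_support = all(f(y) >= h(y) for y in ys)
coincides = (h(0.0) == f(0.0))
```

Choosing g = 2 instead would violate the support property (h(3) = 6 > f(3) = 3), since 2 lies outside the subdifferential [-1, 1].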

Continuous Optimization - Current Trends and Modern Applications (Hardcover, 2005 ed.)
V. Jeyakumar, Alexander M. Rubinov
R3,061. Ships in 10 - 15 working days.

Continuous optimization is the study of problems in which we wish to optimize (either maximize or minimize) a continuous function (usually of several variables), often subject to a collection of restrictions on these variables. It has its foundation in the development of calculus by Newton and Leibniz in the 17th century. Nowadays, continuous optimization problems are widespread in the mathematical modelling of real-world systems for a very broad range of applications. Solution methods for large multivariable constrained continuous optimization problems using computers began with the work of Dantzig in the late 1940s on the simplex method for linear programming problems. Recent research in continuous optimization has produced a variety of theoretical developments, solution methods and new areas of applications. It is impossible to give a full account of the current trends and modern applications of continuous optimization. It is our intention to present a number of topics in order to show the spectrum of current research activities and the development of numerical methods and applications.
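The kind of linear programming problem mentioned above, the kind Dantzig's simplex method solves, can be illustrated with a tiny two-variable instance. The sketch below does not implement the simplex method itself; it enumerates feasible vertices by brute force, relying on the same fact the simplex method exploits, namely that a linear objective attains its optimum at a vertex of the feasible polytope. The problem data are invented for the example.

```python
from itertools import combinations

# Toy LP: maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# Constraints are stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    # Solve the 2x2 system where both constraints hold with equality.
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                      # parallel constraint lines
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# Candidate vertices: feasible intersections of pairs of constraint lines.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
```

Vertex enumeration scales combinatorially and is only viable for toy instances; the simplex method instead walks from vertex to neighbouring vertex, improving the objective at each step.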

Lagrange-type Functions in Constrained Non-Convex Optimization (Paperback, Softcover reprint of the original 1st ed. 2003)
Alexander M. Rubinov, Xiao-qi Yang
R2,791. Ships in 10 - 15 working days.

Lagrange and penalty function methods provide a powerful approach, both as a theoretical tool and a computational vehicle, for the study of constrained optimization problems. However, for a nonconvex constrained optimization problem, the classical Lagrange primal-dual method may fail to find a minimum, as a zero duality gap is not always guaranteed. A large penalty parameter is, in general, required for classical quadratic penalty functions in order that minima of penalty problems are a good approximation to those of the original constrained optimization problems. It is well known that penalty functions with too large parameters cause an obstacle for numerical implementation. Thus the question arises how to generalize classical Lagrange and penalty functions in order to obtain an appropriate scheme for reducing constrained optimization problems to unconstrained ones that will be suitable for sufficiently broad classes of optimization problems from both the theoretical and computational viewpoints. Some approaches for such a scheme are studied in this book. One of them is as follows: an unconstrained problem is constructed, where the objective function is a convolution of the objective and constraint functions of the original problem. While a linear convolution leads to a classical Lagrange function, different kinds of nonlinear convolutions lead to interesting generalizations. We shall call functions that appear as a convolution of the objective function and the constraint functions Lagrange-type functions.

Optimization and Related Topics (Paperback, Softcover reprint of the original 1st ed. 2001)
Alexander M. Rubinov, Barney M. Glover
R4,282. Ships in 10 - 15 working days.

This volume contains, in part, a selection of papers presented at the sixth Australian Optimization Day Miniconference (Ballarat, 16 July 1999), and the Special Sessions on Nonlinear Dynamics and Optimization and Operations Research - Methods and Applications, which were held in Melbourne, July 11-15, 1999 as part of the Joint Meeting of the American Mathematical Society and the Australian Mathematical Society. The editors have strived to present both contributed papers and survey-style papers as a more interesting mix for readers. Some participants from the meetings mentioned above have responded to this approach by preparing survey and 'semi-survey' papers, based on presented lectures. Contributed papers, which contain new and interesting results, are also included. The fields of the presented papers are very large, as demonstrated by the following selection of key words from selected papers in this volume:
* optimal control, stochastic optimal control, MATLAB, economic models, implicit constraints, Bellman principle, Markov process, decision-making under uncertainty, risk aversion, dynamic programming, optimal value function.
* emergent computation, complexity, traveling salesman problem, signal estimation, neural networks, time congestion, teletraffic.
* gap functions, nonsmooth variational inequalities, derivative-free algorithm, Newton's method.
* auxiliary function, generalized penalty function, modified Lagrange function.
* convexity, quasiconvexity, abstract convexity.

Continuous Optimization - Current Trends and Modern Applications (Paperback, Softcover reprint of hardcover 1st ed. 2005)
V. Jeyakumar, Alexander M. Rubinov
R2,839. Ships in 10 - 15 working days.

Continuous optimization is the study of problems in which we wish to optimize (either maximize or minimize) a continuous function (usually of several variables), often subject to a collection of restrictions on these variables. It has its foundation in the development of calculus by Newton and Leibniz in the 17th century. Nowadays, continuous optimization problems are widespread in the mathematical modelling of real-world systems for a very broad range of applications. Solution methods for large multivariable constrained continuous optimization problems using computers began with the work of Dantzig in the late 1940s on the simplex method for linear programming problems. Recent research in continuous optimization has produced a variety of theoretical developments, solution methods and new areas of applications. It is impossible to give a full account of the current trends and modern applications of continuous optimization. It is our intention to present a number of topics in order to show the spectrum of current research activities and the development of numerical methods and applications.

Abstract Convexity and Global Optimization (Paperback, Softcover reprint of hardcover 1st ed. 2000)
Alexander M. Rubinov
R4,308. Ships in 10 - 15 working days.

Special tools are required for examining and solving optimization problems. The main tools in the study of local optimization are classical calculus and its modern generalizations, which form nonsmooth analysis. The gradient and various kinds of generalized derivatives allow us to accomplish a local approximation of a given function in a neighbourhood of a given point. This kind of approximation is very useful in the study of local extrema. However, local approximation alone cannot help to solve many problems of global optimization, so there is a clear need to develop special global tools for solving these problems. The simplest and most well-known area of global and simultaneously local optimization is convex programming. The fundamental tool in the study of convex optimization problems is the subgradient, which actually plays both a local and global role. First, a subgradient of a convex function f at a point x carries out a local approximation of f in a neighbourhood of x. Second, the subgradient permits the construction of an affine function, which does not exceed f over the entire space and coincides with f at x. This affine function h is called a support function. Since f(y) ≥ h(y) for all y, the second role is global. In contrast to a local approximation, the function h will be called a global affine support.

Generalized Convexity and Related Topics (Paperback, 2006 ed.)
Igor V. Konnov, Dinh The Luc, Alexander M. Rubinov
R2,844. Ships in 10 - 15 working days.

In mathematics generalization is one of the main activities of researchers. It opens up new theoretical horizons and broadens the fields of applications. Intensive study of generalized convex objects began about three decades ago, when the theory of convex analysis nearly reached its perfect stage of development with the pioneering contributions of Fenchel, Moreau, Rockafellar and others. The involvement of a number of scholars in the study of generalized convex functions and generalized monotone operators in recent years is due to the quest for more general techniques that are able to describe and treat models of the real world in which convexity and monotonicity are relaxed. Ideas and methods of generalized convexity are now within reach not only in mathematics, but also in economics, engineering, mechanics, finance and other applied sciences. This volume of refereed papers, carefully selected from the contributions delivered at the 8th International Symposium on Generalized Convexity and Monotonicity (Varese, 4-8 July, 2005), offers a global picture of current trends of research in generalized convexity and generalized monotonicity. It begins with three invited lectures by Konnov, Levin and Pardalos on numerical variational analysis, mathematical economics and invexity, respectively. Then come twenty-four full-length papers on new achievements in both the theory of the field and its applications. The diapason of the topics tackled in these contributions is very large. It encompasses, in particular, variational inequalities, equilibrium problems, game theory, optimization, control, numerical methods in solving multiobjective optimization problems, consumer preferences, discrete convexity and many others.
