Showing 1 - 10 of 10 matches in All Departments

Lagrange-type Functions in Constrained Non-Convex Optimization (Paperback, Softcover reprint of the original 1st ed. 2003)
Alexander M. Rubinov, Xiao-qi Yang
R2,960 Discovery Miles 29 600 Ships in 10 - 15 working days

Lagrange and penalty function methods provide a powerful approach, both as a theoretical tool and a computational vehicle, for the study of constrained optimization problems. However, for a nonconvex constrained optimization problem, the classical Lagrange primal-dual method may fail to find a minimum, as a zero duality gap is not always guaranteed. A large penalty parameter is, in general, required for classical quadratic penalty functions in order that minima of penalty problems are a good approximation to those of the original constrained optimization problems. It is well known that penalty functions with too large parameters cause an obstacle for numerical implementation. Thus the question arises how to generalize classical Lagrange and penalty functions in order to obtain an appropriate scheme for reducing constrained optimization problems to unconstrained ones that will be suitable for sufficiently broad classes of optimization problems from both the theoretical and computational viewpoints. Some approaches for such a scheme are studied in this book. One of them is as follows: an unconstrained problem is constructed, where the objective function is a convolution of the objective and constraint functions of the original problem. While a linear convolution leads to a classical Lagrange function, different kinds of nonlinear convolutions lead to interesting generalizations. We shall call functions that appear as a convolution of the objective function and the constraint functions Lagrange-type functions.
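
To make the convolution idea concrete, here is a minimal Python sketch (not taken from the book; the toy problem, function names and parameter values are illustrative assumptions). It contrasts the linear convolution that gives the classical Lagrange function with a nonlinear, quadratic-penalty-style convolution, and shows why the penalty parameter must grow large.

    # Toy problem (illustrative): minimize f(x) = x^2 subject to
    # g(x) = 1 - x <= 0, i.e. x >= 1; the constrained minimum is at x = 1.

    def f(x):  # objective
        return x * x

    def g(x):  # constraint, feasible iff g(x) <= 0
        return 1.0 - x

    def classical_lagrangian(x, lam):
        # Linear convolution of f and g: the classical Lagrange function.
        return f(x) + lam * g(x)

    def quadratic_penalty(x, c):
        # A nonlinear convolution: quadratic penalty on constraint violation.
        return f(x) + c * max(0.0, g(x)) ** 2

    def argmin_on_grid(h, lo=-3.0, hi=3.0, steps=60001):
        # Crude 1-D minimization by grid search, enough for this toy.
        return min((lo + i * (hi - lo) / (steps - 1) for i in range(steps)), key=h)

    # With the correct multiplier (lam = 2 here), minimizing the classical
    # Lagrangian recovers x = 1, since this toy problem is convex.
    print(argmin_on_grid(lambda x: classical_lagrangian(x, 2.0)))  # ~1.0

    # The penalty minimizer is x = c / (1 + c): it approaches x = 1 only as c
    # grows, illustrating why very large parameters become numerically awkward.
    for c in (1.0, 10.0, 1000.0):
        print(c, argmin_on_grid(lambda x: quadratic_penalty(x, c)))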

Optimization and Related Topics (Paperback, Softcover reprint of the original 1st ed. 2001)
Alexander M. Rubinov, Barney M. Glover
R4,543 Discovery Miles 45 430 Ships in 10 - 15 working days

This volume contains, in part, a selection of papers presented at the sixth Australian Optimization Day Miniconference (Ballarat, 16 July 1999), and the Special Sessions on Nonlinear Dynamics and Optimization and Operations Research - Methods and Applications, which were held in Melbourne, July 11-15, 1999 as part of the Joint Meeting of the American Mathematical Society and Australian Mathematical Society. The editors have strived to present both contributed papers and survey-style papers as a more interesting mix for readers. Some participants from the meetings mentioned above have responded to this approach by preparing survey and 'semi-survey' papers based on presented lectures. Contributed papers, which contain new and interesting results, are also included. The range of fields represented is very broad, as demonstrated by the following selection of key words from selected papers in this volume:
* optimal control, stochastic optimal control, MATLAB, economic models, implicit constraints, Bellman principle, Markov process, decision-making under uncertainty, risk aversion, dynamic programming, optimal value function.
* emergent computation, complexity, traveling salesman problem, signal estimation, neural networks, time congestion, teletraffic.
* gap functions, nonsmooth variational inequalities, derivative-free algorithm, Newton's method.
* auxiliary function, generalized penalty function, modified Lagrange function.
* convexity, quasiconvexity, abstract convexity.

Continuous Optimization - Current Trends and Modern Applications (Paperback, Softcover reprint of hardcover 1st ed. 2005)
V. Jeyakumar, Alexander M. Rubinov
R3,012 Discovery Miles 30 120 Ships in 10 - 15 working days

Continuous optimization is the study of problems in which we wish to optimize (either maximize or minimize) a continuous function (usually of several variables), often subject to a collection of restrictions on these variables. It has its foundation in the development of calculus by Newton and Leibniz in the 17th century. Nowadays, continuous optimization problems are widespread in the mathematical modelling of real-world systems for a very broad range of applications. Solution methods for large multivariable constrained continuous optimization problems using computers began with the work of Dantzig in the late 1940s on the simplex method for linear programming problems. Recent research in continuous optimization has produced a variety of theoretical developments, solution methods and new areas of applications. It is impossible to give a full account of the current trends and modern applications of continuous optimization. It is our intention to present a number of topics in order to show the spectrum of current research activities and the development of numerical methods and applications.
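
As a minimal illustration of such a problem (an assumed example, not one from the book), SciPy's scipy.optimize.minimize handles a continuous objective of several variables under an inequality restriction:

    from scipy.optimize import minimize

    # Illustrative problem: minimize a smooth function of two variables
    # subject to the restriction x + y <= 1.
    objective = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 0.5) ** 2
    # SciPy's "ineq" convention: the constraint function must be >= 0.
    constraints = [{"type": "ineq", "fun": lambda v: 1.0 - v[0] - v[1]}]

    result = minimize(objective, x0=[0.0, 0.0], constraints=constraints)
    print(result.x)  # ~ [1.0, -0.5]; here the optimum satisfies the restriction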

Abstract Convexity and Global Optimization (Paperback, Softcover reprint of hardcover 1st ed. 2000)
Alexander M. Rubinov
R4,571 Discovery Miles 45 710 Ships in 10 - 15 working days

Special tools are required for examining and solving optimization problems. The main tools in the study of local optimization are classical calculus and its modern generalizations, which form nonsmooth analysis. The gradient and various kinds of generalized derivatives allow us to accomplish a local approximation of a given function in a neighbourhood of a given point. This kind of approximation is very useful in the study of local extrema. However, local approximation alone cannot help to solve many problems of global optimization, so there is a clear need to develop special global tools for solving these problems. The simplest and most well-known area of global and simultaneously local optimization is convex programming. The fundamental tool in the study of convex optimization problems is the subgradient, which actually plays both a local and global role. First, a subgradient of a convex function f at a point x carries out a local approximation of f in a neighbourhood of x. Second, the subgradient permits the construction of an affine function, which does not exceed f over the entire space and coincides with f at x. This affine function h is called a support function. Since f(y) ≥ h(y) for all y, the second role is global. In contrast to a local approximation, the function h will be called a global affine support.
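
In standard notation (a textbook formulation, not a quotation from this book), both roles of the subgradient are captured by the subgradient inequality, with h the global affine support described above:

    % g is a subgradient of the convex function f at the point x
    g \in \partial f(x) \iff
      f(y) \ge \underbrace{f(x) + \langle g,\, y - x \rangle}_{h(y)}
      \quad \text{for all } y, \qquad h(x) = f(x).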

Generalized Convexity and Related Topics (Paperback, 2006 ed.)
Igor V. Konnov, Dinh The Luc, Alexander M. Rubinov
R3,017 Discovery Miles 30 170 Ships in 10 - 15 working days

In mathematics, generalization is one of the main activities of researchers. It opens up new theoretical horizons and broadens the fields of applications. Intensive study of generalized convex objects began about three decades ago, when the theory of convex analysis had nearly reached its perfect stage of development with the pioneering contributions of Fenchel, Moreau, Rockafellar and others. The involvement of a number of scholars in the study of generalized convex functions and generalized monotone operators in recent years is due to the quest for more general techniques that are able to describe and treat models of the real world in which convexity and monotonicity are relaxed. Ideas and methods of generalized convexity are now within reach not only in mathematics, but also in economics, engineering, mechanics, finance and other applied sciences. This volume of refereed papers, carefully selected from the contributions delivered at the 8th International Symposium on Generalized Convexity and Monotonicity (Varese, 4-8 July, 2005), offers a global picture of current trends of research in generalized convexity and generalized monotonicity. It begins with three invited lectures by Konnov, Levin and Pardalos on numerical variational analysis, mathematical economics and invexity, respectively. Then come twenty-four full-length papers on new achievements in both the theory of the field and its applications. The range of topics tackled in these contributions is very large. It encompasses, in particular, variational inequalities, equilibrium problems, game theory, optimization, control, numerical methods in solving multiobjective optimization problems, consumer preferences, discrete convexity and many others.
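
A small numerical check (an illustrative example, not drawn from the volume) shows what relaxing convexity means: f(x) = sqrt(|x|) violates the convexity inequality yet satisfies the weaker quasiconvexity condition.

    import math

    f = lambda x: math.sqrt(abs(x))  # quasiconvex but not convex

    x, y, t = 0.0, 4.0, 0.5
    mid = t * x + (1 - t) * y  # midpoint, 2.0

    # Convexity would require f(mid) <= t*f(x) + (1-t)*f(y); here it fails:
    print(f(mid), t * f(x) + (1 - t) * f(y))  # 1.414... vs 1.0

    # Quasiconvexity only requires f(mid) <= max(f(x), f(y)); it holds:
    print(f(mid) <= max(f(x), f(y)))  # True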

Lagrange-type Functions in Constrained Non-Convex Optimization (Hardcover, 2003 ed.)
Alexander M. Rubinov, Xiao-qi Yang
R3,143 Discovery Miles 31 430 Ships in 10 - 15 working days

Optimization and Related Topics (Hardcover, 2001 ed.)
Alexander M. Rubinov, Barney M. Glover
R4,785 Discovery Miles 47 850 Ships in 10 - 15 working days

Quasidifferentiability and Related Topics (Hardcover, 2000 ed.)
Vladimir F. Dem'yanov, Alexander M. Rubinov
R4,747 Discovery Miles 47 470 Ships in 10 - 15 working days

From the table of contents:
2 Radiant sets 236
3 Co-radiant sets 239
4 Radiative and co-radiative sets 241
5 Radiant sets with Lipschitz continuous Minkowski gauges 245
6 Star-shaped sets and their kernels 249
7 Separation 251
8 Abstract convex star-shaped sets 255
References 260
11 DIFFERENCES OF CONVEX COMPACTA AND METRIC SPACES OF CONVEX COMPACTA WITH APPLICATIONS: A SURVEY (A. M. Rubinov, A. A. Vladimirov) 263
1 Introduction 264
2 Preliminaries 264
3 Differences of convex compact sets: general approach 266
4 Metric projections and corresponding differences (one-dimensional case) 267
5 The *-difference 269
6 The Demyanov difference 271
7 Geometric and inductive definitions of the D-difference 273
8 Applications to DC and quasidifferentiable functions 276
9 Differences of pairs of set-valued mappings with applications to quasidifferentiability 278
10 Applications to approximate subdifferentials 280
11 Applications to the approximation of linear set-valued mappings 281
12 The Demyanov metric 282
13 The Bartels-Pallaschke metric 284
14 Hierarchy of the three norms on Qn 285
15 Derivatives 287
16 Distances from convex polyhedra and convergence of convex polyhedra 289
17 Normality of convex sets 290
18 D-regular sets 291
19 Variable D-regular sets 292
20 Optimization 293
References 294
12 CONVEX APPROXIMATORS.

Abstract Convexity and Global Optimization (Hardcover, 2000 ed.)
Alexander M. Rubinov
R4,809 Discovery Miles 48 090 Ships in 10 - 15 working days

Continuous Optimization - Current Trends and Modern Applications (Hardcover, 2005 ed.)
V. Jeyakumar, Alexander M. Rubinov
R3,251 Discovery Miles 32 510 Ships in 10 - 15 working days
