Lagrange and penalty function methods provide a powerful approach, both as a theoretical tool and a computational vehicle, for the study of constrained optimization problems. However, for a nonconvex constrained optimization problem, the classical Lagrange primal-dual method may fail to find a minimum, as a zero duality gap is not always guaranteed. A large penalty parameter is, in general, required for classical quadratic penalty functions in order that minima of penalty problems are a good approximation to those of the original constrained optimization problems. It is well known that penalty functions with too large parameters present an obstacle to numerical implementation. Thus the question arises how to generalize classical Lagrange and penalty functions in order to obtain an appropriate scheme for reducing constrained optimization problems to unconstrained ones that is suitable for sufficiently broad classes of optimization problems from both the theoretical and computational viewpoints. Some approaches to such a scheme are studied in this book. One of them is as follows: an unconstrained problem is constructed whose objective function is a convolution of the objective and constraint functions of the original problem. While a linear convolution leads to a classical Lagrange function, different kinds of nonlinear convolutions lead to interesting generalizations. We shall call functions that appear as a convolution of the objective function and the constraint functions Lagrange-type functions.
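The two classical convolutions mentioned above can be sketched on a toy problem. This is a minimal illustration, not taken from the book: the problem, helper names, and the grid search are our own choices. We minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0, whose constrained minimizer is x* = 1.

```python
# Illustrative sketch (problem and helper names are assumptions, not from the
# book): compare the classical Lagrangian (a linear convolution of objective
# and constraint) with the quadratic penalty (a nonlinear convolution).

def f(x):
    return (x - 2.0) ** 2

def g(x):
    return x - 1.0  # constraint g(x) <= 0, so the feasible set is x <= 1

def lagrangian(x, lam):
    # Linear convolution of objective and constraint: classical Lagrangian.
    return f(x) + lam * g(x)

def quadratic_penalty(x, r):
    # Nonlinear convolution: classical quadratic penalty for the inequality.
    return f(x) + r * max(g(x), 0.0) ** 2

def argmin_on_grid(h, lo=-3.0, hi=5.0, n=8001):
    # Crude grid search, sufficient for a one-dimensional illustration.
    step = (hi - lo) / (n - 1)
    xs = [lo + i * step for i in range(n)]
    return min(xs, key=h)

# With the correct multiplier (lam = 2 for this problem), the Lagrangian's
# unconstrained minimizer is exactly the constrained minimizer x* = 1.
x_lag = argmin_on_grid(lambda x: lagrangian(x, 2.0))

# The penalty minimizer only approaches x* = 1 as the parameter r grows,
# which is why large parameters are needed and cause numerical difficulty.
x_small_r = argmin_on_grid(lambda x: quadratic_penalty(x, 1.0))
x_large_r = argmin_on_grid(lambda x: quadratic_penalty(x, 1000.0))
```

For r = 1 the penalty minimizer sits at x = 1.5, well outside the feasible set; for r = 1000 it moves to roughly 1.001, illustrating how accuracy is bought with ever larger (and numerically troublesome) parameters.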
Continuous optimization is the study of problems in which we wish to optimize (either maximize or minimize) a continuous function (usually of several variables), often subject to a collection of restrictions on these variables. It has its foundation in the development of calculus by Newton and Leibniz in the 17th century. Nowadays, continuous optimization problems are widespread in the mathematical modelling of real-world systems for a very broad range of applications. Solution methods for large multivariable constrained continuous optimization problems using computers began with the work of Dantzig in the late 1940s on the simplex method for linear programming problems. Recent research in continuous optimization has produced a variety of theoretical developments, solution methods and new areas of application. It is impossible to give a full account of the current trends and modern applications of continuous optimization. It is our intention to present a number of topics in order to show the spectrum of current research activities and the development of numerical methods and applications.
This volume contains, in part, a selection of papers presented at the sixth Australian Optimization Day Miniconference (Ballarat, 16 July 1999), and the Special Sessions on Nonlinear Dynamics and Optimization and Operations Research - Methods and Applications, which were held in Melbourne, July 11-15, 1999 as part of the Joint Meeting of the American Mathematical Society and the Australian Mathematical Society. The editors have striven to present both contributed papers and survey-style papers as a more interesting mix for readers. Some participants from the meetings mentioned above have responded to this approach by preparing survey and 'semi-survey' papers based on the presented lectures. Contributed papers, which contain new and interesting results, are also included. The fields of the presented papers are very broad, as demonstrated by the following selection of key words from selected papers in this volume:
* optimal control, stochastic optimal control, MATLAB, economic models, implicit constraints, Bellman principle, Markov process, decision-making under uncertainty, risk aversion, dynamic programming, optimal value function;
* emergent computation, complexity, traveling salesman problem, signal estimation, neural networks, time congestion, teletraffic;
* gap functions, nonsmooth variational inequalities, derivative-free algorithm, Newton's method;
* auxiliary function, generalized penalty function, modified Lagrange function;
* convexity, quasiconvexity, abstract convexity.
Special tools are required for examining and solving optimization problems. The main tools in the study of local optimization are classical calculus and its modern generalizations, which form nonsmooth analysis. The gradient and various kinds of generalized derivatives allow us to accomplish a local approximation of a given function in a neighbourhood of a given point. This kind of approximation is very useful in the study of local extrema. However, local approximation alone cannot help to solve many problems of global optimization, so there is a clear need to develop special global tools for solving these problems. The simplest and most well-known area of global and simultaneously local optimization is convex programming. The fundamental tool in the study of convex optimization problems is the subgradient, which actually plays both a local and a global role. First, a subgradient of a convex function f at a point x carries out a local approximation of f in a neighbourhood of x. Second, the subgradient permits the construction of an affine function which does not exceed f over the entire space and coincides with f at x. This affine function h is called a support function. Since f(y) >= h(y) for all y, the second role is global. In contrast to a local approximation, the function h will be called a global affine support.
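The global role of the subgradient can be checked numerically. This is a minimal sketch with a function and point of our own choosing (not from the book): for the convex function f(y) = |y|, the number s = 1 is a subgradient at x = 1, and h(y) = f(x) + s*(y - x) is the global affine support described above.

```python
# Illustrative sketch (example function and point are our own assumptions):
# a subgradient s of a convex f at x yields the affine function
# h(y) = f(x) + s * (y - x), which minorizes f everywhere and touches it at x.

def f(y):
    return abs(y)  # convex, nonsmooth at 0

x = 1.0
s = 1.0  # a subgradient of f at x (f is differentiable there, so s = f'(x))

def h(y):
    # Global affine support built from the subgradient.
    return f(x) + s * (y - x)

# Verify h(y) <= f(y) on a sample of points and h(x) = f(x) at the base point.
points = [-5.0 + 0.1 * i for i in range(101)]
global_support = all(h(y) <= f(y) + 1e-12 for y in points)
touches_at_x = abs(h(x) - f(x)) < 1e-12
```

Here h(y) = y, which equals |y| for y >= 0 and lies strictly below it for y < 0: a local approximation at x that also carries global information, exactly the dual role described in the text.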
In mathematics, generalization is one of the main activities of researchers. It opens up new theoretical horizons and broadens the fields of applications. Intensive study of generalized convex objects began about three decades ago, when the theory of convex analysis nearly reached its perfect stage of development with the pioneering contributions of Fenchel, Moreau, Rockafellar and others. The involvement of a number of scholars in the study of generalized convex functions and generalized monotone operators in recent years is due to the quest for more general techniques that are able to describe and treat models of the real world in which convexity and monotonicity are relaxed. Ideas and methods of generalized convexity are now within reach not only in mathematics, but also in economics, engineering, mechanics, finance and other applied sciences. This volume of refereed papers, carefully selected from the contributions delivered at the 8th International Symposium on Generalized Convexity and Monotonicity (Varese, 4-8 July, 2005), offers a global picture of current trends of research in generalized convexity and generalized monotonicity. It begins with three invited lectures by Konnov, Levin and Pardalos on numerical variational analysis, mathematical economics and invexity, respectively. Then come twenty-four full-length papers on new achievements in both the theory of the field and its applications. The range of topics tackled in these contributions is very large. It encompasses, in particular, variational inequalities, equilibrium problems, game theory, optimization, control, numerical methods in solving multiobjective optimization problems, consumer preferences, discrete convexity and many others.
From the table of contents:
2 Radiant sets
3 Co-radiant sets
4 Radiative and co-radiative sets
5 Radiant sets with Lipschitz continuous Minkowski gauges
6 Star-shaped sets and their kernels
7 Separation
8 Abstract convex star-shaped sets
References
11 DIFFERENCES OF CONVEX COMPACTA AND METRIC SPACES OF CONVEX COMPACTA WITH APPLICATIONS: A SURVEY (A. M. Rubinov, A. A. Vladimirov)
1 Introduction
2 Preliminaries
3 Differences of convex compact sets: general approach
4 Metric projections and corresponding differences (one-dimensional case)
5 The *-difference
6 The Demyanov difference
7 Geometric and inductive definitions of the D-difference
8 Applications to DC and quasidifferentiable functions
9 Differences of pairs of set-valued mappings with applications to quasidifferentiability
10 Applications to approximate subdifferentials
11 Applications to the approximation of linear set-valued mappings
12 The Demyanov metric
13 The Bartels-Pallaschke metric
14 Hierarchy of the three norms on Qn
15 Derivatives
16 Distances from convex polyhedra and convergence of convex polyhedra
17 Normality of convex sets
18 D-regular sets
19 Variable D-regular sets
20 Optimization
References
12 CONVEX APPROXIMATORS.