Two approaches are known for solving large-scale unconstrained optimization problems: the limited-memory quasi-Newton (truncated Newton) method and the conjugate gradient method. This is the first book to detail conjugate gradient methods, showing their properties and convergence characteristics as well as their performance in solving large-scale unconstrained optimization problems and applications. Comparisons with the limited-memory and truncated Newton methods are also discussed. Topics studied in detail include linear conjugate gradient methods, standard conjugate gradient methods, acceleration of conjugate gradient methods, hybrid variants, modifications of the standard scheme, memoryless BFGS-preconditioned methods, and three-term methods. Conjugate gradient methods that cluster the eigenvalues or minimize the condition number of the iteration matrix are also treated. For each method, the convergence analysis, the computational performance, and comparisons with other conjugate gradient methods are given. The theory behind the conjugate gradient algorithms is developed as a methodology, with a clear, rigorous, and friendly exposition; readers will gain an understanding of the properties and convergence of these methods and learn to develop and prove the convergence of their own. Numerous numerical studies are supplied, with comparisons and comments on the behavior of conjugate gradient algorithms for solving a collection of 800 unconstrained optimization problems of different structures and complexities, with the number of variables in the range [1000, 10000]. The book is addressed to all those interested in developing and using new advanced techniques for solving complex unconstrained optimization problems. Mathematical programming researchers, theoreticians and practitioners in operations research, practitioners in engineering, industry researchers, and graduate students in mathematics, as well as Ph.D. and master's students in mathematical programming, will find plenty of information and practical guidance for solving large-scale unconstrained optimization problems and applications by conjugate gradient methods.
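The standard schemes the book analyzes share a common iteration: a line search along a direction that mixes the new negative gradient with the previous direction through a conjugacy parameter. As a rough illustration only, and not one of the book's algorithms, here is a minimal Python sketch of such an iteration, assuming a Polak-Ribiere+ parameter and a simple Armijo backtracking line search:

```python
# Minimal sketch of a nonlinear conjugate gradient iteration. The conjugacy
# parameter (Polak-Ribiere+), the Armijo backtracking line search, and the
# restart safeguard are illustrative choices, not the book's refined schemes.
import numpy as np

def conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # start with the steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:                    # safeguard: restart if d is not a descent direction
            d = -g
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):   # Armijo backtracking
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)  # Polak-Ribiere+ parameter
        d = -g_new + beta * d                # new direction mixes gradient and old direction
        x, g = x_new, g_new
    return x

# usage: minimize the two-variable Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(conjugate_gradient(f, grad, np.array([-1.2, 1.0])))
```

The methods the book develops refine exactly these ingredients: the choice of conjugacy parameter, the line search, preconditioning, and restart rules.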
This book presents the theoretical details and computational performance of algorithms for solving continuous nonlinear optimization applications embedded in GAMS. Aimed at scientists and graduate students who use optimization methods to model and solve problems in mathematical programming, operations research, business, engineering, and industry, it enables readers with a background in nonlinear optimization and linear algebra to use GAMS technology and its capabilities for modeling and solving complex, large-scale, continuous nonlinear optimization problems and applications. Beginning with an overview of constrained nonlinear optimization methods, the book moves on to illustrate key aspects of mathematical modeling through modeling technologies based on algebraically oriented modeling languages. Next, the main feature of GAMS, an algebraically oriented language that allows for a high-level algebraic representation of mathematical optimization models, is introduced for modeling and solving continuous nonlinear optimization applications. More than 15 real nonlinear optimization applications, in both algebraic and GAMS representation, are presented and used to illustrate the performance of the algorithms described in the book. Theoretical and computational results, methods, and techniques effective for solving nonlinear optimization problems are detailed through the algorithms MINOS, KNITRO, CONOPT, SNOPT, and IPOPT, which work within GAMS technology.
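For readers unfamiliar with algebraic modeling, the following Python sketch shows the same style of model the book writes in GAMS: variables, an algebraic objective, and algebraic constraints handed to a nonlinear solver. Here scipy's SLSQP merely stands in for the GAMS-attached solvers named above, and the toy model is illustrative, not one of the book's applications:

```python
# Rough Python analogue of an algebraic NLP of the kind the book writes in GAMS:
# minimize a nonlinear objective subject to linear/nonlinear constraints.
# scipy's SLSQP plays the role GAMS delegates to MINOS, CONOPT, IPOPT, etc.
import numpy as np
from scipy.optimize import minimize

objective = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
constraints = [                                            # "ineq" means fun(x) >= 0
    {"type": "ineq", "fun": lambda x:  x[0] - 2*x[1] + 2},
    {"type": "ineq", "fun": lambda x: -x[0] - 2*x[1] + 6},
    {"type": "ineq", "fun": lambda x: -x[0] + 2*x[1] + 2},
]
bounds = [(0, None), (0, None)]                            # x1, x2 >= 0

result = minimize(objective, x0=np.array([2.0, 0.0]),
                  method="SLSQP", bounds=bounds, constraints=constraints)
print(result.x, result.fun)
```

In GAMS the same model would be stated declaratively as VARIABLES and EQUATIONS and dispatched to a chosen solver; the point of the sketch is only the algebraic, solver-independent style of the formulation.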
Here is a collection of nonlinear optimization applications from
the real world, expressed in the General Algebraic Modeling System
(GAMS). The concepts are presented so that the reader can quickly
modify and update them to represent real-world situations.
This book includes a thorough theoretical and computational analysis of unconstrained and constrained optimization algorithms, and it combines and integrates the most recent techniques with advanced computational linear algebra methods. Nonlinear optimization methods and techniques have reached maturity, and an abundance of optimization algorithms is available for which both the convergence properties and the numerical performance are known. This clear, friendly, and rigorous exposition discusses the theory behind the nonlinear optimization algorithms so that readers understand their properties and convergence, enabling them to prove the convergence of their own algorithms. It covers the computational performance of the best-known modern nonlinear optimization algorithms on collections of unconstrained and constrained test problems of different structures and complexities, as well as on large-scale real applications. The book is addressed to all those interested in developing and using new advanced techniques for solving large-scale unconstrained or constrained complex optimization problems. Mathematical programming researchers, theoreticians and practitioners in operations research, practitioners in engineering, industry researchers, and graduate students in mathematics, as well as Ph.D. and master's students in mathematical programming, will find plenty of recent information and practical approaches for solving real large-scale optimization problems and applications.
The book is intended for graduate students and researchers in mathematics, computer science, and operational research. It presents a new derivative-free optimization method based on trial points randomly generated in specified domains, from which the best ones are selected at each iteration according to a number of rules. This method differs from many well-established methods in the literature and proves competitive for solving many unconstrained optimization problems of different structures and complexities with a relatively large number of variables. Intensive numerical experiments on 140 unconstrained optimization problems, with up to 500 variables, have shown that this approach is efficient and robust. The book is structured into four chapters. Chapter 1 is introductory. Chapter 2 presents a two-level derivative-free random search method for unconstrained optimization, under the assumptions that the minimized function is continuous and bounded below and that its minimum value is known. Chapter 3 proves the convergence of the algorithm. Chapter 4 shows the numerical performance of the algorithm on 140 unconstrained optimization problems, of which 16 are real applications; these experiments show that the optimization process has two phases, a reduction phase and a stalling one. Finally, the performance of the algorithm on 30 large-scale unconstrained optimization problems with up to 500 variables is presented. These numerical results show that the two-level random search approach can solve a large diversity of problems with different structures and complexities. A number of open problems remain, concerning the selection of the number of trial points or local trial points, the selection of the bounds of the domains in which the trial points and local trial points are randomly generated, and a criterion for initiating the line search.
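Based only on the description above, a minimal Python sketch of such a two-level random search might look as follows. The domain bounds, the numbers of trial and local trial points, and the selection rule are all illustrative stand-ins; the book's precise selection rules, its use of the known minimum value, and its line-search criterion are not reproduced here:

```python
# Illustrative sketch of a two-level random search: at each iteration, trial
# points are drawn in the whole domain (level 1) and local trial points around
# the current best point (level 2); the best candidate seen so far is kept.
import numpy as np

def two_level_random_search(f, lo, hi, n_trial=50, n_local=20,
                            local_radius=0.1, max_iter=2000, seed=None):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    best_x = rng.uniform(lo, hi)
    best_f = f(best_x)
    for _ in range(max_iter):
        # level 1: trial points drawn uniformly in the domain [lo, hi]
        trials = rng.uniform(lo, hi, size=(n_trial, dim))
        # level 2: local trial points drawn around the current best point
        local = best_x + local_radius * (hi - lo) * rng.standard_normal((n_local, dim))
        local = np.clip(local, lo, hi)
        candidates = np.vstack([trials, local])
        values = np.apply_along_axis(f, 1, candidates)
        i = values.argmin()
        if values[i] < best_f:               # selection rule: keep the best point so far
            best_x, best_f = candidates[i], values[i]
    return best_x, best_f

# usage: a 5-variable sphere function on [-5, 5]^5
f = lambda x: float(np.sum(x**2))
lo, hi = -5 * np.ones(5), 5 * np.ones(5)
x, fx = two_level_random_search(f, lo, hi, seed=0)
print(x, fx)
```

The two phases the blurb mentions are visible in runs of such a sketch: best_f drops quickly while the domain is being explored (the reduction phase) and then stalls once further improvement depends on increasingly rare lucky draws.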