Linear Programming (LP) is perhaps the most frequently used
optimization technique. One of the reasons for its wide use is that
very powerful solution algorithms exist for linear optimization.
Computer programs based on either the simplex or interior point
methods are capable of solving very large-scale problems with high
reliability and within reasonable time. Model builders are aware of
this and often try to formulate real-life problems within this
framework to ensure they can be solved efficiently. It is also true
that many real-life optimization problems can be formulated as
truly linear models and also many others can well be approximated
by linearization. The two main methods for solving LP problems are
the variants of the simplex method and the interior point methods
(IPMs). It turns out that both families have their role in solving
different problems. It has been recognized that, since the
introduction of IPMs, the efficiency of simplex-based solvers
has increased by two orders of magnitude. This increased efficiency
can be attributed to the following: (1) theoretical developments in
the underlying algorithms, (2) incorporation of results from computer
science, (3) application of the principles of software engineering, and (4)
exploitation of the state of the art in computer technology.
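As a toy illustration of the kind of problem the passage describes, here is a tiny LP solved with an off-the-shelf solver (SciPy's HiGHS-backed `linprog`; the choice of library and the example data are ours, not the text's):

```python
# Maximize 3x + 5y subject to  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0.
# linprog minimizes, so we negate the objective to maximize.
from scipy.optimize import linprog

res = linprog(c=[-3, -5],
              A_ub=[[1, 0], [0, 2], [3, 2]],
              b_ub=[4, 12, 18],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point (2, 6) with maximized objective 36
```

By default, modern SciPy dispatches this to the HiGHS solver, which itself offers both simplex and interior-point implementations, mirroring the two algorithm families discussed above.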
The subject of this book is reasoning under uncertainty based on statistical evidence, where the word reasoning is taken to mean searching for arguments for or against particular hypotheses of interest. The kind of reasoning we use is composed of two aspects. The first is inspired by classical reasoning in formal logic, where deductions are made from a knowledge base of observed facts and formulas representing the domain-specific knowledge. In this book, the facts are the statistical observations and the general knowledge is represented by an instance of a special kind of statistical models called functional models. The second aspect deals with the uncertainty under which the formal reasoning takes place. For this aspect, the theory of hints [27] is the appropriate tool. Basically, we assume that some uncertain perturbation takes a specific value and then logically evaluate the consequences of this assumption. The original uncertainty about the perturbation is then transferred to the consequences of the assumption. This kind of reasoning is called assumption-based reasoning. Before going into more detail about the content of this book, it is interesting to look briefly at the roots and origins of assumption-based reasoning in the statistical context. In 1930, R. A. Fisher [17] defined the notion of fiducial distribution as the result of a new form of argument, as opposed to the result of the older Bayesian argument.
Potential Function Methods For Approximately Solving Linear Programming Problems breaks new ground in linear programming theory. The book draws on research developments in three broad areas: linear and integer programming, numerical analysis, and the computational architectures which enable speedy, high-level algorithm design. During the last ten years, a new body of research within the field of optimization has emerged which seeks to develop good approximation algorithms for classes of linear programming problems. This work has roots in fundamental areas of mathematical programming and is framed in the context of the modern theory of algorithms. The result of this work, in which Daniel Bienstock has been very much involved, has been a family of algorithms with solid theoretical foundations and with growing experimental success. This book examines these algorithms, starting with some of the very earliest examples and continuing through the latest theoretical and computational developments.
This monograph deals with problems of dynamical reconstruction of unknown variable characteristics (distributed or boundary disturbances, coefficients of the operator, etc.) for various classes of systems with distributed parameters (parabolic and hyperbolic equations, evolutionary variational inequalities, etc.).
Along with the traditional material concerning linear programming (the simplex method, the theory of duality, the dual simplex method), In-Depth Analysis of Linear Programming contains new results of research carried out by the authors. For the first time, the criteria of stability (in the geometrical and algebraic forms) of the general linear programming problem are formulated and proved. New regularization methods based on the idea of extension of an admissible set are proposed for solving unstable (ill-posed) linear programming problems. In contrast to the well-known regularization methods, in the methods proposed in this book the initial unstable problem is replaced by a new stable auxiliary problem. This is also a linear programming problem, which can be solved by standard finite methods. In addition, the authors indicate the conditions imposed on the parameters of the auxiliary problem which guarantee its stability, and this circumstance advantageously distinguishes the regularization methods proposed in this book from the existing methods. In these existing methods, the stability of the auxiliary problem is usually only presupposed but is not explicitly investigated. In this book, the traditional material contained in the first three chapters is expounded in much simpler terms than in the majority of books on linear programming, which makes it accessible to beginners as well as those more familiar with the area.
This book introduces several topics related to linear model theory: multivariate linear models, discriminant analysis, principal components, factor analysis, time series in both the frequency and time domains, and spatial data analysis. The second edition adds new material on nonparametric regression, response surface maximization, and longitudinal models. The book provides a unified approach to these disparate subjects and serves as a self-contained companion volume to the author's Plane Answers to Complex Questions: The Theory of Linear Models. Ronald Christensen is Professor of Statistics at the University of New Mexico. He is well known for his work on the theory and application of linear models having linear structure. He is the author of numerous technical articles and several books, and he is a Fellow of the American Statistical Association and the Institute of Mathematical Statistics. Also Available: Christensen, Ronald. Plane Answers to Complex Questions: The Theory of Linear Models, Second Edition (1996). New York: Springer-Verlag New York, Inc. Christensen, Ronald. Log-Linear Models and Logistic Regression, Second Edition (1997). New York: Springer-Verlag New York, Inc.
This book offers a comprehensive treatment of the exercises and case studies as well as summaries of the chapters of the book "Linear Optimization and Extensions" by Manfred Padberg. It covers the areas of linear programming and the optimization of linear functions over polyhedra in finite dimensional Euclidean vector spaces. Here are the main topics treated in the book: simplex algorithms and their derivatives, including the duality theory of linear programming; polyhedral theory, pointwise and linear descriptions of polyhedra, double description algorithms, Gaussian elimination with and without division, the complexity of simplex steps; projective algorithms, the geometry of projective algorithms, Newtonian barrier methods; ellipsoid algorithms in perfect and in finite precision arithmetic, the equivalence of linear optimization and polyhedral separation; and the foundations of mixed-integer programming and combinatorial optimization.
In this book, the author considers separable programming and, in particular, one of its important cases: convex separable programming. Some general results are presented, and techniques of approximating the separable problem by linear programming and dynamic programming are considered. Convex separable programs subject to inequality/equality constraint(s) and bounds on variables are also studied, and iterative algorithms of polynomial complexity are proposed. As an application, these algorithms are used in the implementation of stochastic quasigradient methods for some separable stochastic programs. Numerical approximation with respect to the l1 and l-infinity norms, as a convex separable nonsmooth unconstrained minimization problem, is considered as well. Audience: Advanced undergraduate and graduate students, mathematical programming/operations research specialists.
The articles in this proceedings volume reflect the current trends in the theory of approximation, optimization and mathematical economics, and include numerous applications. The book will be of interest to researchers and graduate students involved in functional analysis, approximation theory, mathematical programming and optimization, game theory, mathematical finance and economics.
Complementarity theory is a new domain in applied mathematics and is concerned with the study of complementarity problems. These problems represent a wide class of mathematical models related to optimization, game theory, economic engineering, mechanics, fluid mechanics, stochastic optimal control etc. The book is dedicated to the study of nonlinear complementarity problems by topological methods. Audience: Mathematicians, engineers, economists, specialists working in operations research and anybody interested in applied mathematics or in mathematical modeling.
The 9th Belgian-French-German Conference on Optimization was held in Namur (Belgium) on September 7-11, 1998. This volume is a collection of papers presented at that Conference. Originally, this Conference was a French-German Conference, but this year, in accordance with the organizers' wishes, a third country, Belgium, joined the founding members of the Conference. Hence the name: Belgian-French-German Conference on Optimization. Since the very beginning, the purpose of these Conferences has been to bring together researchers working in the area of Optimization and particularly to encourage young researchers to present their work. Most of the participants come from the organizing countries; however, the general tendency is to invite outside researchers to attend the meeting. So this year, among the 101 participants at this Conference, twenty researchers came from other countries. The general theme of the Conference is everything that concerns the area of Optimization, without restriction to particular topics: theoretical aspects of Optimization, as well as applications and algorithms, are developed. However, and this point was very important for the organizers, the Conference must retain its convivial character: no more than two parallel sessions are organized, which allows useful contacts between researchers to be promoted. The editors express their sincere thanks to all those who took part in this Conference. Their invaluable discussions have made this volume possible.
This monograph presents a collection of results, observations, and examples related to dynamical systems described by linear and nonlinear ordinary differential and difference equations. In particular, dynamical systems that are susceptible to analysis by the Liapunov approach are considered. The naive observation that certain "diagonal-type" Liapunov functions are ubiquitous in the literature attracted the attention of the authors and led to some natural questions. Why does this happen so often? What are the special virtues of these functions in this context? Do they occur so frequently merely because they belong to the simplest class of Liapunov functions and are thus more convenient, or are there more specific reasons? This monograph constitutes the authors' synthesis of the work on this subject that they have jointly developed over many years, producing and compiling results, properties, and examples, aiming to answer these questions and also to formalize some of the folklore or "culture" that has grown around diagonal stability and diagonal-type Liapunov functions. A natural answer to these questions would be that diagonal-type Liapunov functions are used frequently because of their simplicity within the class of all possible Liapunov functions. This monograph shows that, although this obvious interpretation is often adequate, there are many instances in which the Liapunov approach is best exploited using diagonal-type Liapunov functions. In fact, they yield necessary and sufficient stability conditions for some classes of nonlinear dynamical systems.
For a long time the techniques for solving linear optimization (LP) problems improved only marginally. Fifteen years ago, however, a revolutionary discovery changed everything. A new 'golden age' for optimization started, which is continuing up to the current time. What is the cause of the excitement? Techniques of linear programming previously formed an isolated body of knowledge. Then suddenly a tunnel was built linking it with a rich and promising land, part of which was already cultivated, part of which was completely unexplored. These revolutionary new techniques are now applied to solve conic linear problems. This makes it possible to model and solve large classes of essentially nonlinear optimization problems as efficiently as LP problems. This volume gives an overview of the latest developments of such 'High Performance Optimization Techniques'. The first part is a thorough treatment of interior point methods for semidefinite programming problems. The second part reviews today's most exciting research topics and results in the area of convex optimization. Audience: This volume is for graduate students and researchers who are interested in modern optimization techniques.
This book offers a comprehensive treatment of linear programming as well as of the optimization of linear functions over polyhedra in finite dimensional Euclidean vector spaces. An introduction surveying fifty years of linear optimization is given. The book can serve both as a graduate textbook for linear programming and as a text for advanced topics classes or seminars. Exercises as well as several case studies are included. The book is based on the author's long-term experience in teaching and research. For his research work he has received, among other honors, the 1983 Lanchester Prize of the Operations Research Society of America, the 1985 Dantzig Prize of the Mathematical Programming Society and the Society for Industrial and Applied Mathematics, and a 1989 Alexander-von-Humboldt Senior U.S. Scientist Research Award.
Mathematical elegance is a constant theme in this treatment of linear programming and matrix games. Condensed tableaux, minimal in size and notation, are employed for the simplex algorithm. In the context of these tableaux the beautiful termination theorem of R.G. Bland is proven more simply than heretofore, and the important duality theorem becomes almost obvious. Examples and extensive discussions throughout the book provide insight into definitions, theorems, and applications. There is considerable informal discussion on how best to play matrix games. The book is designed for a one-semester undergraduate course. Readers will need a degree of mathematical sophistication and general tools such as sets, functions, and summation notation. No single college course is a prerequisite, but most students will do better with some prior college mathematics. This thorough introduction to linear programming and game theory will impart a deep understanding of the material and also increase the student's mathematical maturity.
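The connection between matrix games and linear programming mentioned above can be sketched concretely: the row player's optimal mixed strategy for a zero-sum game is the solution of a small LP. The payoff matrix and the use of SciPy's `linprog` are our illustrative choices, not material from the book:

```python
import numpy as np
from scipy.optimize import linprog

# Row player's LP: maximize the guaranteed value v subject to
# (p'A)_j >= v for every column j, sum(p) = 1, p >= 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # "matching pennies" payoffs
m, n = A.shape
c = np.zeros(m + 1); c[-1] = -1.0          # variables (p, v); minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])  # v - (p'A)_j <= 0 for each j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to 1
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]  # p >= 0, v free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:m], res.x[-1]
print(p, v)   # mixed strategy (0.5, 0.5) and game value 0
```

Solving the column player's analogous LP yields the dual of this program, which is one concrete reading of the "almost obvious" duality theorem the blurb refers to.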
The first comprehensive account of the theory of mass transportation problems and its applications. In Volume I, the authors systematically develop the theory with emphasis on the Monge-Kantorovich mass transportation and the Kantorovich-Rubinstein mass transshipment problems. They then discuss a variety of different approaches towards solving these problems and exploit the rich interrelations to several mathematical sciences, from functional analysis to probability theory and mathematical economics. The second volume is devoted to applications of the above problems to topics in applied probability, theory of moments and distributions with given marginals, queuing theory, risk theory, and the theory of probability metrics and its applications to various fields, among them general limit theorems for Gaussian and non-Gaussian limiting laws, stochastic differential equations and algorithms, and rounding problems. Useful to graduates and researchers in theoretical and applied probability, operations research, computer science, and mathematical economics, the prerequisites for this book are graduate level probability theory and real and functional analysis.
This book focuses largely on constrained optimization. It begins with a substantial treatment of linear programming and proceeds to convex analysis, network flows, integer programming, quadratic programming, and convex optimization. Along the way, dynamic programming and the linear complementarity problem are touched on as well. This book aims to be a first introduction to the topic. Specific examples and concrete algorithms precede more abstract topics. Nevertheless, the topics covered are developed in some depth, a large number of numerical examples are worked out in detail, and many recent results are included, most notably interior-point methods. The exercises at the end of each chapter both illustrate the theory and, in some cases, extend it. Optimization is not merely an intellectual exercise: its purpose is to solve practical problems on a computer. Accordingly, the book comes with software that implements the major algorithms studied. At this point, software for the following four algorithms is available: the two-phase simplex method, the primal-dual simplex method, the path-following interior-point method, and the homogeneous self-dual method.
Encompassing all the major topics students will encounter in courses on the subject, the authors teach both the underlying mathematical foundations and how these ideas are implemented in practice. They illustrate all the concepts with both worked examples and plenty of exercises, and, in addition, provide software so that students can try out numerical methods and so hone their skills in interpreting the results. As a result, this will make an ideal textbook for all those coming to the subject for the first time. Authors' note: A problem recently found with the software is due to a bug in Formula One, the third party commercial software package that was used for the development of the interface. It occurs when the date, currency, etc. format is set to a non-United States version. Please try setting your computer date/currency option to the United States option. The new version of Formula One, when ready, will be posted on the WWW.
Linear Programming provides an in-depth look at simplex-based as well as the more recent interior point techniques for solving linear programming problems. Starting with a review of the mathematical underpinnings of these approaches, the text provides details of the primal and dual simplex methods along with the primal-dual, composite, and steepest edge simplex algorithms. This is then followed by a discussion of interior point techniques, including projective and affine potential reduction, primal and dual affine scaling, and path following algorithms. Also covered is the theory and solution of the linear complementarity problem using both the complementary pivot algorithm and interior point routines. A feature of the book is its early and extensive development and use of duality theory. Audience: The book is written for students in the areas of mathematics, economics, engineering and management science, and professionals who need a sound foundation in the important and dynamic discipline of linear programming.
In Linear Programming: A Modern Integrated Analysis, both boundary (simplex) and interior point methods are derived from the complementary slackness theorem and, unlike most books, the duality theorem is derived from Farkas's Lemma, which is proved as a convex separation theorem. The tedium of the simplex method is thus avoided. A new and inductive proof of Kantorovich's Theorem is offered, related to the convergence of Newton's method. Of the boundary methods, the book presents the (revised) primal and the dual simplex methods. An extensive discussion is given of the primal, dual and primal-dual affine scaling methods. In addition, the proof of the convergence under degeneracy, a bounded variable variant, and a super-linearly convergent variant of the primal affine scaling method are covered in one chapter. Polynomial barrier or path-following homotopy methods, and the projective transformation method are also covered in the interior point chapter. Besides the popular sparse Cholesky factorization and the conjugate gradient method, new methods are presented in a separate chapter on implementation. These methods use LQ factorization and iterative techniques.
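The duality and complementary slackness results these two blurbs revolve around can be verified numerically on a toy problem. The example data and the use of SciPy's `linprog` are our own choices for illustration:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 4.0])

# Primal:  min c.x  s.t.  A x >= b,  x >= 0   (pass -A, -b for linprog's <= form)
prim = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# Dual:    max b.y  s.t.  A'y <= c,  y >= 0   (negate objective to maximize)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

x, y = prim.x, dual.x
print(prim.fun, -dual.fun)                  # equal optimal values: strong duality
print(y * (A @ x - b), x * (c - A.T @ y))   # complementary slackness: all zeros
```

Each product in the last line pairs a variable with the slack of its matching constraint; at optimality at least one factor of every pair vanishes, which is exactly the complementary slackness theorem the derivations above start from.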
This addition to the ISOR series introduces complementarity models in a straightforward and approachable manner and uses them to carry out an in-depth analysis of energy markets, including formulation issues and solution techniques. In a nutshell, complementarity models generalize: a. optimization problems via their Karush-Kuhn-Tucker conditions; b. non-cooperative games in which each player may be solving a separate but related optimization problem with potentially overall system constraints (e.g., market-clearing conditions); c. economic and engineering problems that aren't specifically derived from optimization problems (e.g., spatial price equilibria); d. problems in which both primal and dual variables (prices) appear in the original formulation (e.g., the National Energy Modeling System (NEMS) or its precursor, PIES). As such, complementarity models are a very general and flexible modeling format. A natural question is why concentrate on energy markets for this complementarity approach? As it turns out, energy or other markets that have game-theoretic aspects are best modeled by complementarity problems. The reason is that the traditional perfect-competition approach no longer applies due to deregulation and restructuring of these markets, and thus the corresponding optimization problems may no longer hold. Also, in some instances it is important in the original model formulation to involve both primal variables (e.g., production) as well as dual variables (e.g., market prices) for public and private sector energy planning. Traditional optimization problems cannot directly handle this mixing of primal and dual variables, but complementarity models can, and this makes them all the more effective for decision-makers.
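The complementarity conditions described above can be sketched on a toy linear complementarity problem (LCP): find z >= 0 with w = Mz + q >= 0 and z.w = 0. The brute-force active-set enumeration below is our own illustration for tiny instances, not an algorithm from the book (production solvers use Lemke's method or interior-point routines):

```python
import itertools
import numpy as np

def solve_lcp(M, q, tol=1e-9):
    """Solve w = M z + q, z >= 0, w >= 0, z.w = 0 by trying every
    active set -- exponential in n, but fine for tiny examples."""
    n = len(q)
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            S = list(S)
            z = np.zeros(n)
            if S:
                try:
                    # On the active set, force w_S = 0: solve M_SS z_S = -q_S.
                    z[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
                except np.linalg.LinAlgError:
                    continue
            w = M @ z + q
            if (z >= -tol).all() and (w >= -tol).all():
                return z, w
    return None

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite: solution exists
q = np.array([-5.0, -6.0])
z, w = solve_lcp(M, q)
print(z, w, z @ w)   # z = (4/3, 7/3), w = 0, product 0: complementary
```

Because either z_i = 0 or w_i = 0 must hold for each index i, every candidate solution corresponds to one choice of active set, which is what the enumeration exploits.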
The effectiveness of the algorithms of linear programming in solving problems is largely dependent upon the particular applications from which these problems arise. A first course in linear programming should not only allow one to solve many different types of problems in many different contexts but should provide deeper insights into the fields in which linear programming finds its utility. To this end, the emphasis throughout Linear Programming and Its Applications is on the acquisition of linear programming skills via the algorithmic solution of small-scale problems, both in the general sense and in the specific applications where these problems naturally occur. The first part of the book deals with methods to solve general linear programming problems and discusses the theory of duality which connects these problems. The second part of the book deals with linear programming in different applications, including the fields of game theory and graph theory as well as the more traditional transportation and assignment problems. The book is versatile; inasmuch as Linear Programming and Its Applications is intended to be used as a first course in linear programming, it is suitable for students in such varying fields as mathematics, computer science, engineering, actuarial science, and economics.
This collection of 188 nonlinear programming test examples is a supplement to the test problem collection published by Hock and Schittkowski [2]. As in the former case, the intention is to present an extensive set of nonlinear programming problems that were used by other authors in the past to develop, test or compare optimization algorithms. There is no distinction between an "easy" or "difficult" test problem, since any such classification must depend on the underlying algorithm and test design. For instance, a nonlinear least squares problem may be solved easily by a special purpose code within a few iterations, but the same problem can be unsolvable for a general nonlinear programming code due to ill-conditioning. Thus one should consider both collections as a possible offer from which to choose suitable problems for a specific test frame. One difference between the new collection and the former one published by Hock and Schittkowski [2] is the attempt to present some more realistic or "real world" problems. Moreover, a couple of nonlinear least squares test problems were collected which can be used, e.g., to test data fitting algorithms. The presentation of the test problems is somewhat simplified, and numerical solutions are computed by only one nonlinear programming code, the sequential quadratic programming algorithm NLPQL of Schittkowski [3]. But both test problem collections are implemented in the same way, in the form of special FORTRAN subroutines, so that the same test programs can be used.