Mathematics is playing an ever more important role in the physical and biological sciences, provoking a blurring of boundaries between scientific disciplines and a resurgence of interest in the modern as well as the classical techniques of applied mathematics. This renewal of interest, both in research and teaching, has led to the establishment of the series Texts in Applied Mathematics (TAM). The development of new courses is a natural consequence of a high level of excitement on the research frontier as newer techniques, such as numerical and symbolic computer systems, dynamical systems, and chaos, mix with and reinforce the traditional methods of applied mathematics. Thus, the purpose of this textbook series is to meet the current and future needs of these advances and to encourage the teaching of new courses. TAM will publish textbooks suitable for use in advanced undergraduate and beginning graduate courses, and will complement the Applied Mathematics Sciences (AMS) series, which will focus on advanced textbooks and research-level monographs. This textbook introduces the basic concepts and results of mathematical control and system theory. Based on courses that I have taught during the last 15 years, it presents its subject in a self-contained and elementary fashion. It is geared primarily to an audience consisting of mathematically mature advanced undergraduate or beginning graduate students. In addition, it can be used by engineering students interested in a rigorous, proof-oriented systems course that goes beyond the classical frequency-domain material and more applied courses.
Eugene A. Feinberg, Adam Shwartz. This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The papers cover major research areas and methodologies, and discuss open questions and future research directions. The papers can be read independently, with the basic notation and concepts of Section 1.2. Most chapters should be accessible by graduate or advanced undergraduate students in fields of operations research, electrical engineering, and computer science. 1.1 AN OVERVIEW OF MARKOV DECISION PROCESSES: The theory of Markov Decision Processes, also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming, studies sequential optimization of discrete-time stochastic systems. The basic object is a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines the stochastic process and values of objective functions associated with this process. The goal is to select a "good" control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, as well as (ii) they have an impact on the future, by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. MDPs model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation.
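The sequential-optimization paradigm described above can be made concrete with a minimal value-iteration sketch. The two-state, two-action model below, including its transition probabilities, rewards, and discount factor, is invented purely for illustration and is not taken from the volume:

```python
# Minimal value iteration for a toy 2-state, 2-action MDP.
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
# All numbers here are illustrative assumptions, not from any real model.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(0, 0.2), (1, 0.8)]},
    1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]},
}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}
gamma = 0.9  # discount factor

V = {0: 0.0, 1: 0.0}
for _ in range(500):  # iterate the Bellman optimality operator to convergence
    V = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in P[s])
         for s in V}

# Greedy policy: in each state, the action attaining the maximum above.
policy = {s: max(P[s], key=lambda a, s=s: R[s][a]
                 + gamma * sum(p * V[t] for t, p in P[s][a]))
          for s in V}
print(policy)  # → {0: 0, 1: 0}
```

Because the Bellman operator is a contraction for gamma < 1, the iteration converges to the optimal value function, and the greedy policy read off from it is optimal for this toy model.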
Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsible for the "hardness" of the scheduling problem. Chapters 6, 7, and 8 are dedicated to the resolution of several scheduling problems. These examples illustrate the use and the practical efficiency of the constraint propagation methods of the previous chapters. They also show that besides constraint propagation, the exploration of the search space must be carefully designed, taking into account specific properties of the considered problem (e.g., dominance relations, symmetries, possible use of decomposition rules). Chapter 9 mentions various extensions of the model and presents promising research directions.
The book of Professor Evtushenko describes both the theoretical foundations and the range of applications of many important methods for solving nonlinear programs. Particularly emphasized is their use for the solution of optimal control problems for ordinary differential equations. These methods were implemented in a library of programs for an interactive system (DISO) at the Computing Center of the USSR Academy of Sciences, which can be used to solve a given complicated problem by a combination of appropriate methods in the interactive mode. Many examples show the strong as well as the weak points of particular methods and illustrate the advantages gained by their combination. In fact, it is the central aim of the author to point out the necessity of using many techniques interactively in order to solve more difficult problems. A noteworthy feature of the book for the Western reader is the frequently unorthodox analysis of many known methods in the great tradition of Russian mathematics. J. Stoer. Optimization methods are finding ever broader application in science and engineering. Design engineers, automation and control systems specialists, physicists processing experimental data, economists, as well as operations research specialists are beginning to employ them routinely in their work. The applications have in turn furthered vigorous development of computational techniques and engendered new directions of research. Practical implementation of many numerical methods of high computational complexity is now possible with the availability of high-speed large-memory digital computers.
The literature on equilibrium behavior of customers and servers in queuing systems is rich. However, there is no comprehensive survey of this field. Moreover, what has been published lacks continuity and leaves many issues uncovered. One of the main goals of this book is to review the existing literature under one cover. Other goals are to edit the known results in a unified manner, classify them and identify where and how they relate to each other, and fill in some gaps with new results. In some areas we explicitly mention open problems. We hope that this survey will motivate further research and enable researchers to identify important open problems. The models described in this book have numerous applications. Many examples can be found in the cited papers, but we have chosen not to include applications in the book. Many of the ideas described in this book are special cases of general principles in Economics and Game Theory. We often cite references that contain more general treatment of a subject, but we do not go into the details. For each topic covered in the book, we have highlighted the results that, in our opinion, are the most important. We also present a brief discussion of related results. The content of each chapter is briefly described below. Chapter 1 is an introduction. It contains basic definitions, models and solution concepts which will be used frequently throughout the book.
The ASI on Nonlinear Model Based Process Control (August 10-20, 1997, Antalya, Turkey) convened as a continuation of a previous ASI, which was held in August 1994 in Antalya on Methods of Model Based Process Control in a more general context. In 1994, the contributions and discussions convincingly showed that industrial process control would increasingly rely on nonlinear model based control systems. Therefore, the idea for organizing this ASI was motivated by the success of the first one, the enthusiasm expressed by the scientific community for continuing contact, and the growing incentive for on-line control algorithms for nonlinear processes. This is due to tighter constraints and constantly changing performance objectives that now force the processes to be operated over a wider range of conditions compared to the past, and the fact that many industrial operations are nonlinear in nature. The ASI intended to review in depth and in a global way the state of the art in nonlinear model based control. The list of lecturers consisted of 12 eminent scientists leading the principal developments in the area, as well as industrial specialists experienced in the application of these techniques. Selected out of a large number of applications, there was a high-quality, active audience composed of 59 students from 20 countries. Including family members accompanying the participants, the group formed a large body of 92 persons. Out of the 71 participants, 11 were from industry.
This book presents a unified view of modelling, simulation, and control of nonlinear dynamical systems using soft computing techniques and fractal theory. Our particular point of view is that modelling, simulation, and control are problems that cannot be considered apart, because they are intrinsically related in real-world applications. Control of nonlinear dynamical systems cannot be achieved if we don't have an appropriate model for the system. On the other hand, we know that complex nonlinear dynamical systems can exhibit a wide range of dynamic behaviors (ranging from simple periodic orbits to chaotic strange attractors), so the problem of simulation and behavior identification is a very important one. Also, we want to automate each of these tasks because in this way it is easier to solve a particular problem. A real-world problem may require that we use modelling, simulation, and control to achieve the desired level of performance needed for the particular application.
Semidefinite programming (SDP) is one of the most exciting and active research areas in optimization. It has attracted, and continues to attract, researchers with very diverse backgrounds, including experts in convex programming, linear algebra, numerical optimization, combinatorial optimization, control theory, and statistics. This tremendous research activity has been prompted by the discovery of important applications in combinatorial optimization and control theory, the development of efficient interior-point algorithms for solving SDP problems, and the depth and elegance of the underlying optimization theory. The Handbook of Semidefinite Programming offers an advanced and broad overview of the current state of the field. It contains nineteen chapters written by the leading experts on the subject. The chapters are organized in three parts: Theory, Algorithms, and Applications and Extensions.
Problems with multiple objectives and criteria are generally known as multiple criteria optimization or multiple criteria decision-making (MCDM) problems. So far, these types of problems have typically been modelled and solved by means of linear programming. However, many real-life phenomena are of a nonlinear nature, which is why we need tools for nonlinear programming capable of handling several conflicting or incommensurable objectives. In this case, methods of traditional single objective optimization and linear programming are not enough; we need new ways of thinking, new concepts, and new methods: nonlinear multiobjective optimization. Nonlinear Multiobjective Optimization provides an extensive, up-to-date, self-contained and consistent survey and review of the literature and the state of the art on nonlinear (deterministic) multiobjective optimization, its methods, its theory and its background. The amount of literature on multiobjective optimization is immense. The treatment in this book is based on approximately 1500 publications in English printed mainly after the year 1980. Problems related to real-life applications often contain irregularities and nonsmoothness. The treatment of nondifferentiable multiobjective optimization in the literature is rather rare. For this reason, this book contains material about the possibilities, background, theory and methods of nondifferentiable multiobjective optimization as well. This book is intended for both researchers and students in the areas of (applied) mathematics, engineering, economics, operations research and management science; it is meant for both professionals and practitioners in many different fields of application. The intention has been to provide a consistent summary that may help in selecting an appropriate method for the problem to be solved. It is hoped the extensive bibliography will be of value to researchers.
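The idea of conflicting objectives can be illustrated with a short sketch of Pareto optimality, the central solution concept in multiobjective optimization: a candidate is kept only if no other candidate is at least as good in every objective and strictly better in at least one. The candidate points below are invented for illustration:

```python
# Pareto filter for a minimization problem with two conflicting objectives.
# The candidate objective vectors are purely illustrative.
def dominates(a, b):
    """True if a is at least as good as b in every objective
    and strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

candidates = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (2.5, 2.5)]
pareto = [p for p in candidates
          if not any(dominates(q, p) for q in candidates)]
print(pareto)  # → [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

The three surviving points are mutually incomparable: improving one objective necessarily worsens the other, which is exactly the trade-off structure the book's methods are designed to navigate.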
Linear prediction theory and the related algorithms have matured to the point where they now form an integral part of many real-world adaptive systems. When it is necessary to extract information from a random process, we are frequently faced with the problem of analyzing and solving special systems of linear equations. In the general case these systems are overdetermined and may be characterized by additional properties, such as update and shift-invariance properties. Usually, one employs exact or approximate least-squares methods to solve the resulting class of linear equations. Mainly during the last decade, researchers in various fields have contributed techniques and nomenclature for this type of least-squares problem. This body of methods now constitutes what we call the theory of linear prediction. The immense interest that it has aroused clearly emerges from recent advances in processor technology, which provide the means to implement linear prediction algorithms and to operate them in real time. The practical effect is the emergence of a new class of high-performance adaptive systems for control, communications and system identification applications. This monograph presumes a background in discrete-time digital signal processing, including Z-transforms, and a basic knowledge of discrete-time random processes. One of the difficulties I have encountered while writing this book is that many engineers and computer scientists lack knowledge of fundamental mathematics and geometry.
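The least-squares systems mentioned above can be sketched in a few lines: fit a two-tap linear predictor x[n] ≈ a1·x[n-1] + a2·x[n-2] by solving the 2x2 normal equations. The synthetic signal and its generating coefficients below are illustrative assumptions, not taken from the monograph:

```python
# Fit a two-tap linear predictor x[n] ≈ a1*x[n-1] + a2*x[n-2]
# by solving the 2x2 normal equations with Cramer's rule.
# The signal is synthetic: it exactly satisfies x[n] = 0.5*x[n-1] + 0.25*x[n-2],
# so least squares should recover a1 = 0.5, a2 = 0.25.
x = [1.0, 1.0]
for n in range(2, 12):
    x.append(0.5 * x[n - 1] + 0.25 * x[n - 2])

ns = range(2, len(x))  # one overdetermined equation per predicted sample
r11 = sum(x[n - 1] ** 2 for n in ns)
r12 = sum(x[n - 1] * x[n - 2] for n in ns)
r22 = sum(x[n - 2] ** 2 for n in ns)
p1 = sum(x[n] * x[n - 1] for n in ns)
p2 = sum(x[n] * x[n - 2] for n in ns)

det = r11 * r22 - r12 ** 2
a1 = (p1 * r22 - p2 * r12) / det
a2 = (r11 * p2 - r12 * p1) / det
print(a1, a2)  # recovers the generating coefficients (up to rounding)
```

With noisy data the residual would be nonzero and the same normal equations would give the best least-squares fit; the update and shift-invariance structure the preface mentions is what fast algorithms such as Levinson-type recursions exploit.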
This book may be regarded as consisting of two parts. In Chapters I-IV we present what we regard as essential topics in an introduction to deterministic optimal control theory. This material has been used by the authors for one-semester graduate-level courses at Brown University and the University of Kentucky. The simplest problem in calculus of variations is taken as the point of departure, in Chapter I. Chapters II, III, and IV deal with necessary conditions for an optimum, existence and regularity theorems for optimal controls, and the method of dynamic programming. The beginning reader may find it useful first to learn the main results, corollaries, and examples. These tend to be found in the earlier parts of each chapter. We have deliberately postponed some difficult technical proofs to later parts of these chapters. In the second part of the book we give an introduction to stochastic optimal control for Markov diffusion processes. Our treatment follows the dynamic programming method, and depends on the intimate relationship between second-order partial differential equations of parabolic type and stochastic differential equations. This relationship is reviewed in Chapter V, which may be read independently of Chapters I-IV. Chapter VI is based to a considerable extent on the authors' work in stochastic control since 1961. It also includes two other topics important for applications, namely, the solution to the stochastic linear regulator and the separation principle.
Arc Routing: Theory, Solutions and Applications is about arc traversal and the wide variety of arc routing problems, a field that has its foundations in the graph theory work of Leonhard Euler. Arc routing methods and computation have become a fundamental optimization concept in operations research, with numerous applications in transportation, telecommunications, manufacturing, the Internet, and many other areas of modern life. The book draws from a variety of sources, including the traveling salesman problem (TSP) and graph theory, which are used and studied by operations researchers, engineers, computer scientists, and mathematicians. In the last ten years or so, there has been extensive coverage of arc routing problems in the research literature, especially from a graph theory perspective; however, the field has not had the benefit of a uniform, systematic treatment. With this book, there is now a single volume that focuses on a state-of-the-art exposition of arc routing problems, explores their graph-theoretical foundations, and presents a number of solution methodologies in a variety of application settings. Moshe Dror has succeeded in working with an elite group of arc routing scholars to develop the highest quality treatment of the current state of the art in arc routing.
There have been significant developments in the theory and practice of combinatorial optimization in the last 15 years. This progress has been evidenced by a continuously increasing number of international and local conferences, books and papers in this area. This book is also another contribution to this burgeoning area of operations research and optimization. This volume contains the contributions of the participants of the recent NATO Advanced Study Institute, New Frontiers in the Theory and Practice of Combinatorial Optimization, which was held at the campus of Bilkent University, in Ankara, Turkey, July 16-29, 1990. In this conference, we brought many prominent researchers and young and promising scientists together to discuss current and future trends in the theory and practice of combinatorial optimization. The Bilkent campus was an excellent environment for such an undertaking. Being outside of Ankara, the capital of Turkey, Bilkent University gave the participants a great opportunity for exchanging ideas and discussing new theories and applications without much distraction. One of the primary goals of NATO ASIs is to bring together a group of scientists and research scientists primarily from the NATO countries for the dissemination of advanced scientific knowledge and the promotion of international contacts among scientists. We believe that we accomplished this mission very successfully by bringing together 15 prominent lecturers and 45 promising young scientists from 12 countries, in a university environment for 14 days of intense lectures, presentations and discussions.
The book is devoted to systems with discontinuous control. The study of discontinuous dynamic systems is a multifaceted problem which embraces mathematical, control-theoretic and application aspects. Time and again, this problem has been approached by mathematicians, physicists and engineers, each profession treating it from its own standpoint. Interestingly, the results obtained by specialists in different disciplines have almost always had a significant effect upon the development of control theory. It suffices to mention works on the theory of oscillations of discontinuous nonlinear systems, mathematical studies of ordinary differential equations with discontinuous right-hand sides, or variational problems in nonclassical statements. The unremitting interest in discontinuous control systems, enhanced by their effective application to the solution of problems most diverse in their physical nature and functional purpose, is, in the author's opinion, a cogent argument in favour of the importance of this area of study. It seems a useful effort to consider, from a control-theoretic viewpoint, the mathematical and application aspects of the theory of discontinuous dynamic systems and determine their place within the scope of present-day control theory. The first attempt was made by the author in 1975-1976 in his courses on "The Theory of Discontinuous Dynamic Systems" and "The Theory of Variable Structure Systems" read to post-graduates at the University of Illinois, USA, and then presented in 1978-1979 at the seminars held in the Laboratory of Systems with Discontinuous Control at the Institute of Control Sciences in Moscow.
The primary aim of this monograph is to provide a formal framework for the representation and management of uncertainty and vagueness in the field of artificial intelligence. It puts particular emphasis on a thorough analysis of these phenomena and on the development of sound mathematical modeling approaches. Beyond this theoretical basis, the scope of the book also includes implementational aspects and an evaluation of existing models and systems. The fundamental ambition of this book is to show that vagueness and uncertainty can be handled adequately by using measure-theoretic methods. The presentation of applicable knowledge representation formalisms and reasoning algorithms substantiates the claim that efficiency requirements do not necessarily require renunciation of an uncompromising mathematical modeling. These results are used to evaluate systems based on probabilistic methods as well as on non-standard concepts such as certainty factors, fuzzy sets or belief functions. The book is intended to be self-contained and addresses researchers and practitioners in the field of knowledge-based systems. It is in particular suitable as a textbook for graduate-level students in AI, operations research and applied probability. A solid mathematical background is necessary for reading this book. Essential parts of the material have been the subject of courses given by the first author for students of computer science and mathematics, held since 1984 at the University of Braunschweig.
One of the basic tenets of science is that deterministic systems are completely predictable: given the initial condition and the equations describing a system, the behavior of the system can be predicted for all time. The discovery of chaotic systems has eliminated this viewpoint. Simply put, a chaotic system is a deterministic system that exhibits random behavior. Though identified as a robust phenomenon only twenty years ago, chaos has almost certainly been encountered by scientists and engineers many times during the last century, only to be dismissed as physical noise. Chaos is such a widespread phenomenon that it has now been reported in virtually every scientific discipline: astronomy, biology, biophysics, chemistry, engineering, geology, mathematics, medicine, meteorology, plasmas, physics, and even the social sciences. It is no coincidence that during the same two decades in which chaos has grown into an independent field of research, computers have permeated society. It is, in fact, the wide availability of inexpensive computing power that has spurred much of the research in chaotic dynamics. The reason is simple: the computer can calculate a solution of a nonlinear system. This is no small feat. Unlike linear systems, where closed-form solutions can be written in terms of the system's eigenvalues and eigenvectors, few nonlinear systems and virtually no chaotic systems possess closed-form solutions.
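The loss of long-term predictability described above is easy to exhibit numerically. A standard textbook example (not specific to this book) is the logistic map x_{n+1} = r·x_n·(1 − x_n) at r = 4, a well-known chaotic regime, where a perturbation of the initial condition at the tenth decimal place grows to a macroscopic difference within a few dozen iterations:

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n) with r = 4 (a standard chaotic regime).
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)  # perturb the initial condition by 1e-10
gap = max(abs(u - v) for u, v in zip(a, b))
print(gap)  # the tiny perturbation has grown by many orders of magnitude
```

Both trajectories are fully deterministic and remain in [0, 1], yet after roughly 35 iterations they are effectively uncorrelated, which is precisely why numerical experiment, rather than closed-form solution, drives research in chaotic dynamics.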
It is not an exaggeration to state that most problems dealt with in economic theory can be formulated as problems in optimization theory. This holds true for the paradigm of "behavioral" optimization in the pursuit of individual self-interest and societally efficient resource allocation, as well as for equilibrium paradigms, where existence and stability problems in dynamics can often be stated as "potential" problems in optimization. For this reason, books in mathematical economics and in mathematics for economists devote considerable attention to optimization theory. However, with very few exceptions, the reader who is interested in further study is left with the impression that there is no further place to go and that what is in these second-hand sources is all there is available as far as the subject of optimization theory is concerned. On the other hand, the main results from mathematics are often carelessly stated or, more often than not, they do not get to be formally stated at all. Furthermore, it should be well understood that economic theory in general, and mathematical economics in particular, must be classified as special types of applied mathematics or, more precisely, of motivated mathematics, since tools of mathematical analysis are used to prove theorems in an economics context, in the manner in which probability theory may be classified. Hence, rigor and correct scholarship are of utmost importance and cannot be subject to compromise.
This book collects some recent developments in stochastic control theory with applications to financial mathematics. We first address standard stochastic control problems from the viewpoint of the recently developed weak dynamic programming principle. A special emphasis is put on regularity issues and, in particular, on the behavior of the value function near the boundary. We then provide a quick review of the main tools from viscosity solutions which allow one to overcome all regularity problems. We next address the class of stochastic target problems, which extends standard stochastic control problems in a nontrivial way. Here the theory of viscosity solutions plays a crucial role in the derivation of the dynamic programming equation as the infinitesimal counterpart of the corresponding geometric dynamic programming equation. The various developments of this theory have been stimulated by applications in finance and by relevant connections with geometric flows. Namely, the second-order extension was motivated by illiquidity modeling, and the controlled-loss version was introduced following the problem of quantile hedging. The third part specializes to an overview of backward stochastic differential equations and their extensions to the quadratic case.
Complementarity theory is a new domain in applied mathematics and is concerned with the study of complementarity problems. These problems represent a wide class of mathematical models related to optimization, game theory, economic engineering, mechanics, fluid mechanics, stochastic optimal control etc. The book is dedicated to the study of nonlinear complementarity problems by topological methods. Audience: Mathematicians, engineers, economists, specialists working in operations research and anybody interested in applied mathematics or in mathematical modeling.
This book provides a systematic and comprehensive account of asymptotic sets and functions from which a broad and useful theory emerges in the areas of optimization and variational inequalities. A variety of motivations leads mathematicians to study questions about attainment of the infimum in a minimization problem and its stability, duality and minmax theorems, convexification of sets and functions, and maximal monotone maps. For each there is the central problem of handling unbounded situations. Such problems arise in theory but also within the development of numerical methods. The book focuses on the notions of asymptotic cones and associated asymptotic functions that provide a natural and unifying framework for the resolution of these types of problems. These notions have been used largely and traditionally in convex analysis, yet these concepts play a prominent and independent role in both convex and nonconvex analysis. This book covers convex and nonconvex problems, offering detailed analysis and techniques that go beyond traditional approaches. The book will serve as a useful reference and self-contained text for researchers and graduate students in the fields of modern optimization theory and nonlinear analysis.
As our title reveals, we focus on optimal control methods and applications relevant to linear dynamic economic systems in discrete-time variables. We deal only with discrete cases simply because economic data are available in discrete forms, hence realistic economic policies should be established in discrete-time structures. Though many books have been written on optimal control in engineering, we see few on discrete-type optimal control. Moreover, since economic models take slightly different forms than do engineering ones, we need a comprehensive, self-contained treatment of linear optimal control applicable to discrete-time economic systems. The present work is intended to fill this need from the standpoint of contemporary macroeconomic stabilization. The work is organized as follows. In Chapter 1 we demonstrate instrument instability in an economic stabilization problem and thereby establish the motivation for our departure into the optimal control world. Chapter 2 provides fundamental concepts and propositions for controlling linear deterministic discrete-time systems, together with some economic applications and numerical methods. Our optimal control rules are in the form of feedback from known state variables of the preceding period. When state variables are not observable or are accessible only with observation errors, we must obtain appropriate proxies for these variables, which are called "observers" in deterministic cases or "filters" in stochastic circumstances. In Chapters 3 and 4, respectively, Luenberger observers and Kalman filters are discussed, developed, and applied in various directions. Noticing that a separation principle lies between observer (or filter) and controller (cf.
Multilevel decision theory arises to resolve the contradiction between increasing requirements on the process of design, synthesis, control and management of complex systems and the limited power of the technical, control, computer and other executive devices that have to perform actions and satisfy requirements in real time. This theory suggests how to replace the centralised management of the system by hierarchical co-ordination of sub-processes. All sub-processes have lower dimensions, which supports easier management and decision making. But the sub-processes are interconnected and they influence each other. Multilevel systems theory supports two main methodological tools: decomposition and co-ordination. Both have been developed and implemented in practical applications concerning design, control and management of complex systems. In general, it is always beneficial to find the best or optimal solution in processes of system design, control and management. The real tendency towards the best (optimal) decision requires presenting all activities in the form of a definition, and then the solution, of an appropriate optimization problem. Every optimization process needs the mathematical definition and solution of a well-stated optimization problem. These problems belong to two classes: static optimization and dynamic optimization. Static optimization problems are solved by applying methods of mathematical programming: conditional and unconditional optimization. Dynamic optimization problems are solved by methods of the calculus of variations: the Euler-Lagrange method, the maximum principle, and dynamic programming.
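A static optimization problem of the unconditional (unconstrained) kind mentioned above can be sketched in a few lines of gradient descent; the objective function, step size, and iteration count below are illustrative choices, not drawn from the book:

```python
# Unconditional static optimization by gradient descent:
# minimize f(x, y) = (x - 1)^2 + 2*(y + 3)^2, whose minimizer is (1, -3).
# Objective, step size, and iteration count are illustrative assumptions.
def grad(x, y):
    # Gradient of f: (df/dx, df/dy)
    return (2 * (x - 1), 4 * (y + 3))

x, y, step = 0.0, 0.0, 0.1
for _ in range(200):
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy
print(round(x, 4), round(y, 4))  # converges to the minimizer (1, -3)
```

For a conditional (constrained) problem one would instead use methods such as projected gradients or Lagrange multipliers, and the dynamic problems mentioned above replace the finite-dimensional variable with a trajectory, which is where the Euler-Lagrange equations and the maximum principle enter.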
My original introduction to this subject was through conversations, and ultimately joint work, with C. A. Micchelli. I am grateful to him and to Profs. C. de Boor, E. W. Cheney, S. D. Fisher and A. A. Melkman, who read various portions of the manuscript and whose suggestions were most helpful. Errors in accuracy and omissions are totally my responsibility. I would like to express my appreciation to the SERC of Great Britain and to the Department of Mathematics of the University of Lancaster for the year spent there during which large portions of the manuscript were written, and also to the European Research Office of the U.S. Army for its financial support of my research endeavors. Thanks are also due to Marion Marks, who typed portions of the manuscript. Haifa, 1984. Allan Pinkus. Table of Contents: Chapter I. Introduction. Chapter II. Basic Properties of n-Widths: 1. Properties of d_n; 2. Existence of Optimal Subspaces for d_n; 3. Properties of d^n; 4. Properties of b_n; 5. Inequalities Between n-Widths; 6. Duality Between d_n and d^n; 7. n-Widths of Mappings of the Unit Ball; 8. Some Relationships Between d_n(T), d^n(T) and b_n(T); Notes and References.
Ever since the discovery of the five Platonic solids in ancient times, the study of symmetry and regularity has been one of the most fascinating aspects of mathematics. Quite often the arithmetical regularity properties of an object imply its uniqueness and the existence of many symmetries. This interplay between regularity and symmetry properties of graphs is the theme of this book. Starting from very elementary regularity properties, the concept of a distance-regular graph arises naturally as a common setting for regular graphs which are extremal in one sense or another. Several other important regular combinatorial structures are then shown to be equivalent to special families of distance-regular graphs. Other subjects of more general interest, such as regularity and extremal properties in graphs, association schemes, representations of graphs in Euclidean space, groups and geometries of Lie type, groups acting on graphs, and codes are covered independently. Many new results and proofs and more than 750 references increase the encyclopaedic value of this book.
This monograph deals with various classes of deterministic and stochastic continuous-time optimal control problems that are defined over unbounded time intervals. For these problems the performance criterion is described by an improper integral and it is possible that, when evaluated at a given admissible element, this criterion is unbounded. To cope with this divergence, new optimality concepts, referred to here as overtaking optimality, weakly overtaking optimality, agreeable plans, etc., have been proposed. The motivation for studying these problems arises primarily from the economic and biological sciences, where models of this type arise naturally. Indeed, any bound placed on the time horizon is artificial when one considers the evolution of the state of an economy or species. The responsibility for the introduction of this interesting class of problems rests with the economists who first studied them in the modeling of capital accumulation processes. Perhaps the earliest of these was F. Ramsey [152] who, in his seminal work on the theory of saving in 1928, considered a dynamic optimization model defined on an infinite time horizon. Briefly, this problem can be described as a Lagrange problem with unbounded time interval. The advent of modern control theory, particularly the formulation of the famous Maximum Principle of Pontryagin, has had a considerable impact on the treatment of these models as well as optimization theory in general.