The maximum principle and dynamic programming are the two most commonly used approaches for solving optimal control problems, yet they have been developed independently of each other. The theme of this book is to unify these two approaches and to demonstrate that viscosity solution theory provides the framework for doing so.
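As a rough illustration of the connection the book develops (a sketch under simplifying assumptions, not the book's own derivation): for a deterministic problem of minimizing J(u) = \int_t^T f(s, x(s), u(s))\, ds + h(x(T)) subject to \dot{x}(s) = b(s, x(s), u(s)) with x(t) = y, the value function V formally satisfies the Hamilton-Jacobi-Bellman equation

\[
V_t(t,y) + \inf_{u \in U}\big\{ \langle V_y(t,y),\, b(t,y,u) \rangle + f(t,y,u) \big\} = 0, \qquad V(T,y) = h(y),
\]

while the maximum principle produces an adjoint function p(\cdot) that, when V is smooth, agrees with V_y(s, x^*(s)) along an optimal trajectory (up to sign conventions, which vary across texts). Since V is typically not differentiable, viscosity solution theory is what makes this identification rigorous.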
Infinite dimensional systems can be used to describe many phenomena in the real world. As is well known, heat conduction, properties of elastic-plastic materials, fluid dynamics, diffusion-reaction processes, etc., all lie within this area. The object that we are studying (temperature, displacement, concentration, velocity, etc.) is usually referred to as the state. We are interested in the case where the state satisfies proper differential equations that are derived from certain physical laws, such as Newton's law, Fourier's law, etc. The space in which the state exists is called the state space, and the equation that the state satisfies is called the state equation. By an infinite dimensional system we mean one whose corresponding state space is infinite dimensional. In particular, we are interested in the case where the state equation is one of the following types: partial differential equation, functional differential equation, integro-differential equation, or abstract evolution equation. The case in which the state equation is a stochastic differential equation is also an infinite dimensional problem, but we will not discuss such a case in this book.
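A minimal concrete instance of such a state equation (our illustration, not taken from the book): the one-dimensional heat equation

\[
\frac{\partial z}{\partial t}(t,\xi) = \frac{\partial^2 z}{\partial \xi^2}(t,\xi), \quad \xi \in (0,1), \qquad z(t,0) = z(t,1) = 0,
\]

can be rewritten as an abstract evolution equation \dot{x}(t) = A x(t) on the state space H = L^2(0,1), with x(t) = z(t,\cdot) and A = d^2/d\xi^2 defined on the domain D(A) = H^2(0,1) \cap H_0^1(0,1). Since H is infinite dimensional, this is an infinite dimensional system in the sense above.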
Mathematical analysis serves as a common foundation for many research areas of pure and applied mathematics. It is also an important and powerful tool used in many other fields of science, including physics, chemistry, biology, engineering, finance, and economics. In this book, some basic theories of analysis are presented, including metric spaces and their properties, limits of sequences, continuous functions, differentiation, the Riemann integral, uniform convergence, and series. After going through a sequence of courses on basic calculus and linear algebra, it is desirable to spend a reasonable length of time (ideally, say, one semester) building a base of analysis sufficient for entering research fields other than analysis itself, and/or stepping into more advanced analysis courses (such as real analysis, complex analysis, differential equations, functional analysis, and stochastic analysis, among others). This book is written to meet such a demand. Readers will find the treatment of the material as concise as possible while maintaining all the necessary details.
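To give a flavor of the material (an illustrative example of our own choosing, not quoted from the book): a sequence of functions f_n converges uniformly to f on a set E if

\[
\sup_{x \in E} |f_n(x) - f(x)| \to 0 \quad \text{as } n \to \infty,
\]

which is strictly stronger than pointwise convergence; for instance, f_n(x) = x^n converges pointwise on [0,1] to a discontinuous limit, so the convergence cannot be uniform there.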
Mathematically, most interesting optimization problems can be formulated as optimizing some objective function subject to equality and/or inequality constraints. This book introduces some classical and basic results of optimization theory, including nonlinear programming with the Lagrange multiplier method, the Karush-Kuhn-Tucker method, Fritz John's method, problems with convex or quasi-convex constraints, and linear programming with the geometric method and the simplex method. A slim book such as this, which touches on the major aspects of optimization theory, meets a real need for most readers. Nonlinear programming, convex programming, and linear programming are presented in a self-contained manner. This book is suitable for a one-semester course for upper-level undergraduate students or first/second-year graduate students. It should also be useful for researchers working in interdisciplinary areas other than optimization.
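For orientation, here is the shape of the Karush-Kuhn-Tucker conditions treated in the book (a standard statement under suitable constraint qualifications, not a quotation from the text): for the problem of minimizing f(x) subject to g_i(x) \le 0 (i = 1, ..., m) and h_j(x) = 0 (j = 1, ..., p), a local minimizer x^* admits multipliers \lambda_i \ge 0 and \mu_j such that

\[
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) + \sum_{j=1}^{p} \mu_j \nabla h_j(x^*) = 0,
\qquad \lambda_i\, g_i(x^*) = 0 \quad (i = 1, \dots, m),
\]

the last relations being the complementary slackness conditions. Fritz John's conditions are the variant obtained when no constraint qualification is assumed, with an extra multiplier \lambda_0 \ge 0 attached to \nabla f.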
This small volume presents the most basic results for deterministic two-person differential games. The presentation begins with the optimization of a single function, followed by a basic theory of two-person games. For dynamic situations, the author first recalls control theory, treated as a single-person differential game. A systematic theory of two-person differential games is then concisely presented, including evasion and pursuit problems, zero-sum problems, and LQ differential games. The book is intended to be self-contained, assuming that readers have basic knowledge of calculus, linear algebra, and elementary ordinary differential equations. The readership could include junior/senior undergraduate and graduate students with majors related to applied mathematics who are interested in differential games. Researchers in related areas, such as engineering and social science, will also find the book useful.
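As a sketch of the setting (standard notation, not taken from the book): in a two-person zero-sum differential game, the state evolves according to

\[
\dot{x}(s) = f\big(s, x(s), u_1(s), u_2(s)\big), \qquad x(t) = y,
\]

and player 1 chooses u_1 to minimize, while player 2 chooses u_2 to maximize, a common payoff J(u_1, u_2) = \int_t^T g(s, x(s), u_1(s), u_2(s))\, ds + h(x(T)). The game has a value when the upper and lower value functions coincide, which holds under the Isaacs condition \min_{u_1} \max_{u_2} H = \max_{u_2} \min_{u_1} H on the associated Hamiltonian.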
As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches for solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question is the following: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal controls? There did exist some research (prior to the 1980s) on the relationship between the two. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming, there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as a Hamilton-Jacobi-Bellman (HJB) equation.
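To make the comparison concrete (our sketch, with sign and min/max conventions that vary across texts): for a state equation dx(s) = b(s, x, u)\, ds + \sigma(s, x, u)\, dW(s) with running cost f and terminal cost h, the adjoint equation of the stochastic maximum principle is a backward SDE for a pair (p, q),

\[
dp(s) = -H_x\big(s, x(s), u(s), p(s), q(s)\big)\, ds + q(s)\, dW(s),
\]

coupled to the state equation and the maximum condition on the Hamiltonian H, while the HJB equation of dynamic programming picks up a second-order term from the diffusion:

\[
V_t + \inf_{u \in U}\Big\{ \tfrac{1}{2}\,\mathrm{tr}\big(\sigma\sigma^{\top} V_{xx}\big) + \langle b,\, V_x \rangle + f \Big\} = 0, \qquad V(T, \cdot) = h(\cdot).
\]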
The IFIP-TC7, WG 7.2 Conference on Control Theory of Distributed Parameter Systems and Applications was held at Fudan University, Shanghai, China, May 6-9, 1990. The papers presented cover a wide variety of topics, e.g. the theory of identification, optimal control, stabilization, controllability, and stochastic control, as well as applications in heat exchangers, elastic structures, nuclear reactors, meteorology, etc.
This book gathers the most essential results, including recent ones, on linear-quadratic optimal control problems, which represent an important aspect of stochastic control. It presents results for two-player differential games and mean-field optimal control problems in the context of finite and infinite horizon problems, and discusses a number of new and interesting issues. Further, the book identifies, for the first time, the interconnections between the existence of open-loop and closed-loop Nash equilibria, solvability of the optimality system, and solvability of the associated Riccati equation, and also explores the open-loop solvability of mean-field linear-quadratic optimal control problems. Although the content is largely self-contained, readers should have a basic grasp of linear algebra, functional analysis and stochastic ordinary differential equations. The book is mainly intended for senior undergraduate and graduate students majoring in applied mathematics who are interested in stochastic control theory. However, it will also appeal to researchers in other related areas, such as engineering, management, finance/economics and the social sciences.
This book gathers the most essential results, including recent ones, on linear-quadratic optimal control problems, which represent an important aspect of stochastic control. It presents the results in the context of finite and infinite horizon problems, and discusses a number of new and interesting issues. Further, it precisely identifies, for the first time, the interconnections between three well-known, relevant issues: the existence of optimal controls, solvability of the optimality system, and solvability of the associated Riccati equation. Although the content is largely self-contained, readers should have a basic grasp of linear algebra, functional analysis and stochastic ordinary differential equations. The book is mainly intended for senior undergraduate and graduate students majoring in applied mathematics who are interested in stochastic control theory. However, it will also appeal to researchers in other related areas, such as engineering, management, finance/economics and the social sciences.
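Both descriptions above center on the same analytic object. As a sketch (a standard finite-horizon form under suitable positivity assumptions, not quoted from either book): for the state equation dx = (Ax + Bu)\, dt + (Cx + Du)\, dW and the cost

\[
J(u) = \mathbb{E}\int_0^T \big( \langle Q x, x \rangle + \langle R u, u \rangle \big)\, dt + \mathbb{E}\, \langle G x(T), x(T) \rangle,
\]

the associated Riccati equation reads

\[
\dot{P} + P A + A^{\top} P + C^{\top} P C + Q
- \big(P B + C^{\top} P D\big)\big(R + D^{\top} P D\big)^{-1}\big(B^{\top} P + D^{\top} P C\big) = 0,
\qquad P(T) = G,
\]

and when it admits a solution with R + D^{\top} P D > 0, the optimal control is the linear feedback u^*(t) = -(R + D^{\top} P(t) D)^{-1}(B^{\top} P(t) + D^{\top} P(t) C)\, x(t).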