This book covers a broad range of research in stochastic models and optimization. Applications covered include networks, financial engineering, production planning, and supply chain management. Each contribution is aimed at graduate students working in operations research, probability, and statistics.
In the mathematical treatment of many problems arising in physics, economics, engineering, management, and elsewhere, the researcher frequently faces two major difficulties: infinite dimensionality and randomness of the evolution process. Infinite dimensionality occurs when the evolution of a process in time is accompanied by a space-like dependence; for example, the spatial distribution of temperature in a heat conductor, or the spatial dependence of the time-varying displacement of a membrane subject to external forces. Randomness is intrinsic to the mathematical formulation of many phenomena, such as fluctuations in the stock market or noise in communication networks. Control theory of distributed parameter systems and stochastic systems focuses on phenomena governed by partial differential equations, delay-differential equations, integro-differential equations, and stochastic differential equations of various types. This has been a fertile field of research for over 40 years, and it continues to be very active under the thrust of newly emerging applications. Among the subjects covered are: control of distributed parameter systems; stochastic control; applications in finance, insurance, and manufacturing; adapted control; and numerical approximation. It is essential reading for applied mathematicians, control theorists, economic and financial analysts, and engineers.
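To make the two difficulties concrete, here is a minimal sketch of the kinds of equations alluded to above (illustrative notation only, not taken from the book): a controlled heat equation, whose state y(t, .) lives in an infinite-dimensional function space, and a controlled stochastic differential equation driven by a Brownian motion W:

\[
\frac{\partial y}{\partial t}(t,\xi) = \Delta y(t,\xi) + u(t,\xi),
\qquad
dX(t) = b\bigl(X(t),u(t)\bigr)\,dt + \sigma\bigl(X(t),u(t)\bigr)\,dW(t),
\]

where u denotes the control. The first equation illustrates infinite dimensionality; the second, randomness of the evolution.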
The maximum principle and dynamic programming are the two most commonly used approaches in solving optimal control problems. These approaches have been developed independently. The theme of this book is to unify these two approaches and to demonstrate that viscosity solution theory provides the framework for this unification.
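For orientation, the classical bridge between the two approaches can be stated in one line (a standard result, quoted here under the restrictive assumption that the value function V is smooth; the sign depends on the convention used):

\[
p(t) = -V_x\bigl(t,\bar{X}(t)\bigr), \qquad t \in [0,T],
\]

where p is the adjoint process of the maximum principle and \(\bar{X}\) is an optimal trajectory. Since V is typically not differentiable, viscosity solution theory replaces \(V_x\) with sub- and superdifferentials, which is precisely the unification described above.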
As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches in solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question to ask is the following: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal control? Some research on the relationship between the two did exist prior to the 1980s. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming, there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as a Hamilton-Jacobi-Bellman (HJB) equation.
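As a sketch of the two objects named above (one common convention for a minimization problem; the notation is illustrative, not the book's): for the state equation \(dX = b(t,X,u)\,dt + \sigma(t,X,u)\,dW\) with cost \(J(u) = \mathbb{E}\bigl[\int_0^T f(t,X,u)\,dt + h(X(T))\bigr]\), the HJB equation satisfied by the value function V is

\[
V_t + \inf_{u \in U}\Bigl\{ \tfrac{1}{2}\,\mathrm{tr}\bigl(\sigma\sigma^{\top} V_{xx}\bigr) + b \cdot V_x + f \Bigr\} = 0,
\qquad V(T,x) = h(x),
\]

second order because of the diffusion term, while the adjoint equation of the maximum principle is the backward SDE

\[
dp(t) = -\bigl(b_x^{\top} p + \sigma_x^{\top} q - f_x\bigr)\,dt + q\,dW(t),
\qquad p(T) = -h_x\bigl(X(T)\bigr),
\]

whose solution pair (p, q), together with the state equation and the maximum condition on the Hamiltonian, forms the extended Hamiltonian system.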