Showing 1 - 11 of 11 matches in All Departments

Stochastic Controls - Hamiltonian Systems and HJB Equations (Paperback, Softcover reprint of the original 1st ed. 1999)
Jiongmin Yong, Xun Yu Zhou
R5,302 Discovery Miles 53 020 Ships in 10 - 15 working days

As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches in solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question is the following: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal controls? Some research on the relationship between the two did exist prior to the 1980s. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as a Hamilton-Jacobi-Bellman (HJB) equation.
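
As a rough, hedged sketch of the objects named in this description (generic notation b, sigma, f, h, V, U, not necessarily the conventions used in the book): for a controlled SDE dX(t) = b(t, X(t), u(t)) dt + sigma(t, X(t), u(t)) dW(t) with cost to be minimized J(u) = E[ \int_t^T f(s, X(s), u(s)) ds + h(X(T)) ], the value function V formally satisfies the second-order HJB equation

\[
V_t(t,x) + \inf_{u \in U}\Big\{ \tfrac{1}{2}\,\mathrm{tr}\big(\sigma\sigma^{\top}(t,x,u)\,V_{xx}(t,x)\big) + b(t,x,u)\cdot V_x(t,x) + f(t,x,u) \Big\} = 0, \qquad V(T,x) = h(x),
\]

and in the deterministic case (sigma = 0) the second-order term drops out, leaving the first-order Hamilton-Jacobi equation mentioned above.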

Optimal Control Theory for Infinite Dimensional Systems (Paperback, Softcover reprint of the original 1st ed. 1995)
Xunjing Li, Jiongmin Yong
R6,575 Discovery Miles 65 750 Ships in 10 - 15 working days

Infinite dimensional systems can be used to describe many phenomena in the real world. As is well known, heat conduction, properties of elastic-plastic material, fluid dynamics, diffusion-reaction processes, etc., all lie within this area. The object that we are studying (temperature, displacement, concentration, velocity, etc.) is usually referred to as the state. We are interested in the case where the state satisfies proper differential equations that are derived from certain physical laws, such as Newton's law, Fourier's law, etc. The space in which the state exists is called the state space, and the equation that the state satisfies is called the state equation. By an infinite dimensional system we mean one whose corresponding state space is infinite dimensional. In particular, we are interested in the case where the state equation is one of the following types: partial differential equation, functional differential equation, integro-differential equation, or abstract evolution equation. The case in which the state equation is a stochastic differential equation is also an infinite dimensional problem, but we will not discuss such a case in this book.
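
As a hedged illustration of the abstract evolution-equation form listed in this description (the symbols y, u, A, B, Y are generic and not taken from the book), such a state equation is often written as

\[
y'(t) = A\,y(t) + B\,u(t), \qquad y(0) = y_0,
\]

where the state y(t) takes values in an infinite-dimensional space Y (for the heat equation, a space of temperature profiles, with A the Laplacian), A generates a C_0-semigroup on Y, u is the control, and B maps control values into the state space.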

Forward-Backward Stochastic Differential Equations and their Applications (Paperback, 1st ed. 1999. Corr. 3rd printing 2007)
Jin Ma, Jiongmin Yong
R1,908 Discovery Miles 19 080 Ships in 10 - 15 working days

This book is intended to give an introduction to the theory of forward-backward stochastic differential equations (FBSDEs, for short), which has received strong attention in recent years because of its interesting structure and its usefulness in various applied fields. The motivation for studying FBSDEs comes originally from stochastic optimal control theory, that is, the adjoint equation in the Pontryagin-type maximum principle. The earliest version of such an FBSDE was introduced by Bismut [1] in 1973, with a decoupled form, namely, a system of a usual (forward) stochastic differential equation and a (linear) backward stochastic differential equation (BSDE, for short). In 1983, Bensoussan [1] proved the well-posedness of general linear BSDEs by using the martingale representation theorem. The first well-posedness result for nonlinear BSDEs was proved in 1990 by Pardoux-Peng [1], while studying the general Pontryagin-type maximum principle for stochastic optimal controls. A little later, Peng [4] discovered that the adapted solution of a BSDE could be used as a probabilistic interpretation of the solutions to some semilinear or quasilinear parabolic partial differential equations (PDEs, for short), in the spirit of the well-known Feynman-Kac formula. After this, extensive study of BSDEs was initiated, and potential for its application was found in applied and theoretical areas such as stochastic control, mathematical finance, and differential geometry, to mention a few. The study of (strongly) coupled FBSDEs started in the early 1990s. In his Ph.D. ...
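
As a hedged sketch of the decoupled form described here (generic coefficients b, sigma, g, h, not taken from the book), such a system pairs a forward SDE with a BSDE:

\[
\begin{cases}
dX(t) = b\big(t, X(t)\big)\,dt + \sigma\big(t, X(t)\big)\,dW(t), & X(0) = x,\\
dY(t) = -\,g\big(t, X(t), Y(t), Z(t)\big)\,dt + Z(t)\,dW(t), & Y(T) = h\big(X(T)\big).
\end{cases}
\]

The forward component X evolves from an initial condition, while the adapted pair (Y, Z) is pinned down by a terminal condition; in a (strongly) coupled FBSDE the forward coefficients depend on (Y, Z) as well. The Feynman-Kac-type connection referred to above is, roughly, Y(t) = theta(t, X(t)) with theta solving a semilinear or quasilinear parabolic PDE.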

Stochastic Controls - Hamiltonian Systems and HJB Equations (Hardcover, 1999 ed.)
Jiongmin Yong, Xun Yu Zhou
R5,539 Discovery Miles 55 390 Ships in 10 - 15 working days

The maximum principle and dynamic programming are the two most commonly used approaches in solving optimal control problems. These approaches have been developed independently. The theme of this book is to unify these two approaches and to demonstrate that viscosity solution theory provides the framework for doing so.

Optimal Control Theory for Infinite Dimensional Systems (Hardcover, 1995 ed.)
Xunjing Li, Jiongmin Yong
R6,814 Discovery Miles 68 140 Ships in 10 - 15 working days

Infinite dimensional systems can be used to describe many phenomena in the real world. As is well known, heat conduction, properties of elastic-plastic material, fluid dynamics, diffusion-reaction processes, etc., all lie within this area. The object that we are studying (temperature, displacement, concentration, velocity, etc.) is usually referred to as the state. We are interested in the case where the state satisfies proper differential equations that are derived from certain physical laws, such as Newton's law, Fourier's law, etc. The space in which the state exists is called the state space, and the equation that the state satisfies is called the state equation. By an infinite dimensional system we mean one whose corresponding state space is infinite dimensional. In particular, we are interested in the case where the state equation is one of the following types: partial differential equation, functional differential equation, integro-differential equation, or abstract evolution equation. The case in which the state equation is a stochastic differential equation is also an infinite dimensional problem, but we will not discuss such a case in this book.

Control Theory of Distributed Parameter Systems and Applications - Proceedings of the IFIP WG 7.2 Working Conference, Shanghai, China, May 6-9, 1990 (Paperback)
Xunjing Li, Jiongmin Yong
R1,560 Discovery Miles 15 600 Ships in 10 - 15 working days

The IFIP-TC7, WG 7.2 Conference on Control Theory of Distributed Parameter Systems and Applications was held at Fudan University, Shanghai, China, May 6-9, 1990. The papers presented cover a wide variety of topics, e.g. the theory of identification, optimal control, stabilization, controllability, and stochastic control, as well as applications to heat exchangers, elastic structures, nuclear reactors, meteorology, etc.

Stochastic Linear-Quadratic Optimal Control Theory: Differential Games and Mean-Field Problems (Paperback, 1st ed. 2020)
Jingrui Sun, Jiongmin Yong
R1,939 Discovery Miles 19 390 Ships in 10 - 15 working days

This book gathers the most essential results, including recent ones, on linear-quadratic optimal control problems, which represent an important aspect of stochastic control. It presents results for two-player differential games and mean-field optimal control problems in the context of finite and infinite horizon problems, and discusses a number of new and interesting issues. Further, the book identifies, for the first time, the interconnections between the existence of open-loop and closed-loop Nash equilibria, solvability of the optimality system, and solvability of the associated Riccati equation, and also explores the open-loop solvability of mean-field linear-quadratic optimal control problems. Although the content is largely self-contained, readers should have a basic grasp of linear algebra, functional analysis and stochastic ordinary differential equations. The book is mainly intended for senior undergraduate and graduate students majoring in applied mathematics who are interested in stochastic control theory. However, it will also appeal to researchers in other related areas, such as engineering, management, finance/economics and the social sciences.
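
As a hedged sketch of the two-player setting described here (generic matrices A, B_i, C, D_i, Q_i, R_i, G_i, not taken from the book), a stochastic linear-quadratic differential game can be posed as

\[
dX(t) = \big(A X + B_1 u_1 + B_2 u_2\big)\,dt + \big(C X + D_1 u_1 + D_2 u_2\big)\,dW(t), \qquad X(0) = x,
\]
\[
J_i(u_1, u_2) = \mathbb{E}\int_0^T \big(\langle Q_i X, X\rangle + \langle R_i u_i, u_i\rangle\big)\,dt + \mathbb{E}\,\langle G_i X(T), X(T)\rangle, \qquad i = 1, 2,
\]

where an open-loop Nash equilibrium is a pair (u_1^*, u_2^*) such that J_1(u_1^*, u_2^*) <= J_1(u_1, u_2^*) and J_2(u_1^*, u_2^*) <= J_2(u_1^*, u_2) for all admissible controls, while a closed-loop equilibrium is specified through feedback strategies; in the mean-field versions the coefficients and costs may also involve E[X(t)] and E[u_i(t)].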

Stochastic Linear-Quadratic Optimal Control Theory: Open-Loop and Closed-Loop Solutions (Paperback, 1st ed. 2020)
Jingrui Sun, Jiongmin Yong
R1,939 Discovery Miles 19 390 Ships in 10 - 15 working days

This book gathers the most essential results, including recent ones, on linear-quadratic optimal control problems, which represent an important aspect of stochastic control. It presents the results in the context of finite and infinite horizon problems, and discusses a number of new and interesting issues. Further, it precisely identifies, for the first time, the interconnections between three well-known, relevant issues - the existence of optimal controls, solvability of the optimality system, and solvability of the associated Riccati equation. Although the content is largely self-contained, readers should have a basic grasp of linear algebra, functional analysis and stochastic ordinary differential equations. The book is mainly intended for senior undergraduate and graduate students majoring in applied mathematics who are interested in stochastic control theory. However, it will also appeal to researchers in other related areas, such as engineering, management, finance/economics and the social sciences.
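
As a hedged sketch of the linear-quadratic problem and the Riccati equation referred to here (generic matrices A, B, C, D, Q, R, G, not taken from the book): minimize

\[
J(u) = \mathbb{E}\int_0^T \big(\langle Q X(t), X(t)\rangle + \langle R\,u(t), u(t)\rangle\big)\,dt + \mathbb{E}\,\langle G X(T), X(T)\rangle
\]

subject to dX(t) = (A X + B u) dt + (C X + D u) dW(t), X(0) = x. Under suitable solvability and invertibility assumptions, a closed-loop optimal control takes the feedback form u^*(t) = -(R + D^T P D)^{-1}(B^T P + D^T P C) X(t), where P solves the associated stochastic Riccati equation

\[
\dot P + P A + A^{\top} P + C^{\top} P C + Q - \big(P B + C^{\top} P D\big)\big(R + D^{\top} P D\big)^{-1}\big(B^{\top} P + D^{\top} P C\big) = 0, \qquad P(T) = G.
\]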

Mathematical Analysis: A Concise Introduction (Hardcover)
Jiongmin Yong
R2,161 Discovery Miles 21 610 Ships in 10 - 15 working days

Mathematical analysis serves as a common foundation for many research areas of pure and applied mathematics. It is also an important and powerful tool used in many other fields of science, including physics, chemistry, biology, engineering, finance, and economics. In this book, some basic theories of analysis are presented, including metric spaces and their properties, limits of sequences, continuous functions, differentiation, the Riemann integral, uniform convergence, and series. After going through a sequence of courses on basic calculus and linear algebra, it is desirable for one to spend a reasonable length of time (ideally, say, one semester) to build an advanced base of analysis sufficient for getting into various research fields other than analysis itself, and/or stepping into more advanced levels of analysis courses (such as real analysis, complex analysis, differential equations, functional analysis, and stochastic analysis, among others). This book is written to meet such a demand. Readers will find that the treatment of the material is as concise as possible while still maintaining all the necessary details.
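
As a small, hedged illustration of one of the topics listed above (standard notation, not specific to the book), uniform convergence of a sequence of functions f_n to f on a set E means

\[
\sup_{x \in E}\,\big|f_n(x) - f(x)\big| \longrightarrow 0 \quad \text{as } n \to \infty,
\]

which is strictly stronger than pointwise convergence, where one only requires f_n(x) -> f(x) at each fixed x in E; the distinction matters, for example, when asking whether the limit of a sequence of continuous functions is itself continuous.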

Optimization Theory: A Concise Introduction (Hardcover)
Jiongmin Yong
R2,162 Discovery Miles 21 620 Ships in 10 - 15 working days

Mathematically, most interesting optimization problems can be formulated as optimizing some objective function subject to equality and/or inequality constraints. This book introduces some classical and basic results of optimization theory, including nonlinear programming with the Lagrange multiplier method, the Karush-Kuhn-Tucker method, Fritz John's method, problems with convex or quasi-convex constraints, and linear programming with the geometric method and the simplex method. A slim book such as this, which touches on the major aspects of optimization theory, will be useful to most readers. We present nonlinear programming, convex programming, and linear programming in a self-contained manner. This book is intended for a one-semester course for upper-level undergraduate students or first/second-year graduate students. It should also be useful for researchers working in many interdisciplinary areas other than optimization.
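
As a hedged sketch of the Karush-Kuhn-Tucker conditions mentioned above (generic notation f, g_i, h_j, lambda_i, mu_j, not taken from the book): for minimizing f(x) subject to g_i(x) <= 0 and h_j(x) = 0, a minimizer x^* satisfying a suitable constraint qualification admits multipliers such that

\[
\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) + \sum_j \mu_j \nabla h_j(x^*) = 0, \qquad \lambda_i \ge 0, \quad \lambda_i\, g_i(x^*) = 0, \quad g_i(x^*) \le 0, \quad h_j(x^*) = 0.
\]

With no inequality constraints this reduces to the classical Lagrange multiplier conditions, while Fritz John's method relaxes the requirements by attaching a multiplier to the objective function as well.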

Differential Games: A Concise Introduction (Hardcover)
Jiongmin Yong
R2,941 Discovery Miles 29 410 Ships in 10 - 15 working days

This book presents, in a small volume, the most basic results for deterministic two-person differential games. The presentation begins with optimization of a single function, followed by a basic theory for two-person games. For dynamic situations, the author first recalls control theory, which is treated as single-person differential games. Then a systematic theory of two-person differential games is concisely presented, including evasion and pursuit problems, zero-sum problems, and LQ differential games. The book is intended to be self-contained, assuming that readers have basic knowledge of calculus, linear algebra, and elementary ordinary differential equations. The readership could be junior/senior undergraduate and graduate students with majors related to applied mathematics who are interested in differential games. Researchers in other related areas, such as engineering and social science, will also find the book useful.
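
As a hedged sketch of the zero-sum setting mentioned above (generic notation f, g, h, U, V, not taken from the book), a deterministic two-person zero-sum differential game couples a controlled ODE with a single payoff:

\[
\dot x(t) = f\big(t, x(t), u(t), v(t)\big), \qquad x(0) = x_0, \qquad
J(u, v) = \int_0^T g\big(t, x(t), u(t), v(t)\big)\,dt + h\big(x(T)\big),
\]

where one player chooses u(\cdot) in U to minimize J and the other chooses v(\cdot) in V to maximize it; the game has a value when the upper and lower value functions coincide, for instance under an Isaacs-type condition on the Hamiltonian. Pursuit-evasion problems fit this framework with a payoff measuring capture time or terminal distance.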
