Showing 1 - 2 of 2 matches in All Departments

Discrete-Time Stochastic Control and Dynamic Potential Games - The Euler-Equation Approach (Paperback, 2013 ed.)
David Gonzalez-Sanchez, Onesimo Hernandez-Lerma
R1,666 Discovery Miles 16 660 Ships in 10 - 15 working days

There are several techniques for studying noncooperative dynamic games, such as dynamic programming and the maximum principle (also called the Lagrange method). It turns out, however, that one way to characterize dynamic potential games requires analyzing inverse optimal control problems, and this is where the Euler-equation approach comes in, because it is particularly well suited to solving inverse problems. Despite the importance of dynamic potential games, there has been no systematic study of them. This monograph is the first attempt to provide a systematic, self-contained presentation of stochastic dynamic potential games.
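The Euler-equation approach mentioned in the blurb can be illustrated on a toy problem. The sketch below is not from the book: it is a hypothetical example in which the discrete-time Euler equations, together with a transversality condition, are assembled into a linear system for a scalar quadratic control problem, and the resulting path is sanity-checked against random perturbations.

```python
# Hypothetical illustration (not from the book): the discrete-time
# Euler-equation approach on a toy deterministic problem
#   minimize sum_{t=0}^{T-1} [ x_t^2 + (x_{t+1} - x_t)^2 ],  x_0 given, x_T free.
# Writing F(x, y) = x^2 + (y - x)^2, the Euler equations are
#   F_y(x_{t-1}, x_t) + F_x(x_t, x_{t+1}) = 0,   t = 1, ..., T-1,
# plus the transversality condition F_y(x_{T-1}, x_T) = 0.
import numpy as np

T = 20
x0 = 1.0

# For this F, the Euler equations reduce to the linear recursion
#   -x_{t-1} + 3 x_t - x_{t+1} = 0,
# and transversality gives x_T = x_{T-1}; assemble everything as a
# linear system in the unknowns x_1, ..., x_T.
A = np.zeros((T, T))
b = np.zeros(T)
for t in range(1, T):              # Euler equation rows for x_1 .. x_{T-1}
    if t - 1 >= 1:
        A[t - 1, t - 2] = -1.0
    else:
        b[t - 1] = x0              # x_0 is known, move it to the RHS
    A[t - 1, t - 1] = 3.0
    A[t - 1, t] = -1.0
A[T - 1, T - 2] = -1.0             # transversality row: x_T - x_{T-1} = 0
A[T - 1, T - 1] = 1.0
x = np.concatenate([[x0], np.linalg.solve(A, b)])

def cost(path):
    return sum(path[t] ** 2 + (path[t + 1] - path[t]) ** 2 for t in range(T))

# Sanity check: random perturbations of the free coordinates never lower
# the cost, since the objective is strictly convex in (x_1, ..., x_T).
rng = np.random.default_rng(0)
best = cost(x)
for _ in range(100):
    pert = x.copy()
    pert[1:] += 0.01 * rng.standard_normal(T)
    assert cost(pert) >= best
```

The inverse direction the blurb alludes to runs the same equations backward: given an observed path, the Euler equations constrain which objectives could have generated it.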

An Introduction to Optimal Control Theory - The Dynamic Programming Approach (Hardcover, 1st ed. 2023)
Onesimo Hernandez-Lerma, Leonardo Ramiro Laura-Guarachi, Saul Mendoza-Palacios, David Gonzalez-Sanchez
R1,648 R1,547 Discovery Miles 15 470 Save R101 (6%) Ships in 9 - 15 working days

This book introduces optimal control problems for large families of deterministic and stochastic systems with a discrete- or continuous-time parameter. These families include most of the systems studied in many disciplines, including Economics, Engineering, Operations Research, and Management Science, among many others. The main objective is to give a concise, systematic, and reasonably self-contained presentation of some key topics in optimal control theory. To this end, most of the analyses are based on the dynamic programming (DP) technique, which is applicable to almost all control problems that appear in theory and applications, including, for instance, finite- and infinite-horizon control problems in which the underlying dynamic system follows either a deterministic or stochastic difference or differential equation. In the infinite-horizon case, the book also uses DP to study undiscounted problems, such as the ergodic or long-run average cost. After a general introduction to control problems, the book is divided into four parts, each treating a different class of dynamical system: control of discrete-time deterministic systems, discrete-time stochastic systems, ordinary differential equations, and finally a general continuous-time Markov control process (MCP) with applications to stochastic differential equations. The first and second parts should be accessible to undergraduate students with some knowledge of elementary calculus, linear algebra, and some concepts from probability theory (random variables, expectations, and so forth), whereas the third and fourth parts are appropriate for advanced undergraduates or graduate students who have a working knowledge of mathematical analysis (derivatives, integrals, ...) and stochastic processes.
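As a hypothetical illustration of the dynamic programming technique the blurb refers to (not an example taken from the book), the following sketch applies backward induction to a small finite-horizon, discrete-time deterministic control problem with state and control restricted to integer grids.

```python
# Hypothetical sketch (not from the book): finite-horizon dynamic
# programming by backward induction on a toy deterministic problem:
#   dynamics  x_{t+1} = x_t + u_t  (clipped to the state grid),
#   stage cost x^2 + u^2, terminal cost 0, horizon N.
import numpy as np

states = np.arange(-5, 6)            # x in {-5, ..., 5}
actions = np.arange(-2, 3)           # u in {-2, ..., 2}
N = 10

V = np.zeros(len(states))            # terminal value V_N(x) = 0
policy = []
for t in reversed(range(N)):
    V_new = np.empty_like(V, dtype=float)
    best_u = np.empty_like(states)
    for i, x in enumerate(states):
        # Bellman step: minimize stage cost plus value-to-go.
        costs = []
        for u in actions:
            x_next = np.clip(x + u, states[0], states[-1])
            j = x_next - states[0]   # index of x_next on the grid
            costs.append(x * x + u * u + V[j])
        k = int(np.argmin(costs))
        V_new[i] = costs[k]
        best_u[i] = actions[k]
    V = V_new
    policy.append(best_u)
policy.reverse()                     # policy[t][i] = optimal u at time t, state states[i]

# Simulate the optimal policy from x_0 = 4; it drives the state toward 0,
# where the stage cost vanishes.
x = 4
for t in range(N):
    x = int(np.clip(x + policy[t][x - states[0]], states[0], states[-1]))
```

The same backward-induction loop carries over to the stochastic discrete-time case by replacing `V[j]` with an expectation of the value function over the next-state distribution.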

You may like...
  • Students Must Rise - Youth Struggle In…, Anne Heffernan, Noor Nieftagodien, Paperback (1), R308 R241 Discovery Miles 2 410
  • Too Beautiful To Break, Tessa Bailey, Paperback, R280 R224 Discovery Miles 2 240
  • LocknLock Pet Dry Food Container (1.6L), R109 R91 Discovery Miles 910
  • The City is Mine, Niq Mhlongo, Paperback, R300 R149 Discovery Miles 1 490 (Not available)
  • Loot, Nadine Gordimer, Paperback (2), R398 R330 Discovery Miles 3 300
  • Lucky Plastic 3-in-1 Nose Ear Trimmer…, R289 Discovery Miles 2 890
  • Spectra S1 Double Rechargeable Breast… (46), R3,999 R2,999 Discovery Miles 29 990
  • Focus Office Desk Chair (Black), R1,199 R989 Discovery Miles 9 890
  • Mellerware Kindle - Rechargeable Hot… (7), R349 R307 Discovery Miles 3 070

 
