Showing 1 - 13 of 13 matches in All Departments

Discrete-Time Stochastic Control and Dynamic Potential Games - The Euler-Equation Approach (Paperback, 2013 ed.)
David Gonzalez-Sanchez, Onesimo Hernandez-Lerma
R1,666 Discovery Miles 16 660 Ships in 10 - 15 working days

There are several techniques for studying noncooperative dynamic games, such as dynamic programming and the maximum principle (also called the Lagrange method). It turns out, however, that one way to characterize dynamic potential games requires analyzing inverse optimal control problems, and this is where the Euler-equation approach comes in, because it is particularly well suited to solving inverse problems. Despite the importance of dynamic potential games, there has been no systematic study of them. This monograph is the first attempt to provide a systematic, self-contained presentation of stochastic dynamic potential games.
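
To give a flavor of the approach (a standard textbook form of the discrete-time Euler equation, assumed here for illustration and not quoted from the book): for the deterministic problem of maximizing Σ_{t≥0} β^t u(x_t, x_{t+1}), an interior optimal trajectory must satisfy

  ∂u/∂y(x_t, x_{t+1}) + β ∂u/∂x(x_{t+1}, x_{t+2}) = 0,  t = 0, 1, ...,

where the partials are taken with respect to the second and first arguments of u(x, y), respectively. The inverse problem runs this backwards: given trajectories (for instance, the equilibrium strategies of a dynamic game), one seeks a single objective u whose Euler equation they satisfy.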

Markov Chains and Invariant Probabilities (Paperback, Softcover reprint of the original 1st ed. 2003)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R1,536 Discovery Miles 15 360 Ships in 10 - 15 working days

This book is about discrete-time, time-homogeneous Markov chains (MCs) and their ergodic behavior. To this end, most of the material is in fact about stable MCs, by which we mean MCs that admit an invariant probability measure. To state this more precisely and give an overview of the questions we shall be dealing with, we first introduce some notation and terminology. Let (X, B) be a measurable space, and consider an X-valued Markov chain ξ_• = {ξ_k, k = 0, 1, ...} with transition probability function (t.p.f.) P(x, B), i.e., P(x, B) := Prob(ξ_{k+1} ∈ B | ξ_k = x) for each x ∈ X, B ∈ B, and k = 0, 1, .... The MC ξ_• is said to be stable if there exists a probability measure (p.m.) µ on B such that (*) µ(B) = ∫_X µ(dx) P(x, B) for all B ∈ B. If (*) holds, then µ is called an invariant p.m. for the MC ξ_• (or the t.p.f. P).
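
In the finite-state case, the defining equation (*) reduces to the matrix identity µ = µP. A minimal sketch in Python (the 3-state chain below is hypothetical, chosen only for illustration; it is not an example from the book):

import numpy as np

# Transition matrix of a 3-state chain: P[i, j] = Prob(next state j | current state i).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.5, 0.5]])

# Power iteration on mu <- mu P; for this irreducible, aperiodic chain it
# converges to the unique invariant probability measure.
mu = np.full(3, 1.0 / 3.0)            # start from the uniform distribution
for _ in range(10_000):
    nxt = mu @ P
    if np.allclose(nxt, mu, atol=1e-12):
        break
    mu = nxt

print(mu)                              # satisfies mu = mu @ P up to round-off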

Adaptive Markov Control Processes (Paperback, Softcover reprint of the original 1st ed. 1989)
Onesimo Hernandez-Lerma
R1,517 Discovery Miles 15 170 Ships in 10 - 15 working days

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMPs), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMPs have been made, and applications to engineering, statistics, and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments in the theory of adaptive CMPs, i.e., CMPs that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full, and references are supplied for further discussion, if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions of the stochastic control problems we are interested in; a brief description of some applications is also provided.
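
The "estimate, then adapt" loop described above can be made concrete in a few lines. The scalar system below (x_{k+1} = θx_k + a_k + noise, with θ unknown, estimated by least squares, and a certainty-equivalence control) is a hypothetical toy model, not one of the book's examples:

import numpy as np

rng = np.random.default_rng(0)
theta = 0.8                    # true parameter, unknown to the controller
x = 1.0
num, den = 0.0, 0.0            # sufficient statistics for least squares

for k in range(500):
    # Estimate step: least-squares estimate of theta from past transitions.
    theta_hat = num / den if den > 0 else 0.0
    # Adapt step: certainty-equivalence control, acting as if theta_hat were
    # the true value, chosen to drive the state toward 0: theta_hat * x + a = 0.
    a = -theta_hat * x
    x_next = theta * x + a + 0.01 * rng.standard_normal()
    # Record the transition: regress (x_next - a) on x to update the estimate.
    num += x * (x_next - a)
    den += x * x
    x = x_next

print(f"final estimate {num / den:.3f} vs true {theta}")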

Further Topics on Discrete-Time Markov Control Processes (Paperback, Softcover reprint of the original 1st ed. 1999)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R4,231 Discovery Miles 42 310 Ships in 10 - 15 working days

Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the authors' earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first. The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.

Discrete-Time Markov Control Processes - Basic Optimality Criteria (Paperback, Softcover reprint of the original 1st ed. 1996)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R4,466 Discovery Miles 44 660 Ships in 10 - 15 working days

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, and (control of) epidemics. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs per stage are bounded, and/or (c) the control constraint sets are compact. Curiously enough, the most widely used control model in engineering and economics, namely the LQ (linear system/quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
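
As the blurb notes, the LQ model has unbounded quadratic costs and a noncompact control set (all of the real line), yet it is solved in closed form by dynamic programming via the Riccati recursion. A scalar finite-horizon sketch in Python (standard textbook material; the system x_{k+1} = A x_k + B a_k and cost Σ(Q x_k² + R a_k²) are assumed notation, not taken from the book):

# Backward Riccati recursion for scalar discrete-time LQ control.
A, B, Q, R, N = 1.0, 1.0, 1.0, 1.0, 50

P = Q                          # terminal cost weight P_N = Q
for _ in range(N):             # step backwards from k = N-1 down to 0
    K = A * B * P / (R + B * B * P)        # optimal gain: a_k = -K * x_k
    P = Q + A * A * P - K * A * B * P      # Riccati update for P_k

print(f"stationary feedback gain K = {K:.4f}")  # about 0.618 for these numbers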

Continuous-Time Markov Decision Processes - Theory and Applications (Paperback, 2009 ed.)
Xianping Guo, Onesimo Hernandez-Lerma
R3,212 Discovery Miles 32 120 Ships in 10 - 15 working days

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.

Continuous-Time Markov Decision Processes - Theory and Applications (Hardcover, 2009 ed.)
Xianping Guo, Onesimo Hernandez-Lerma
R3,366 Discovery Miles 33 660 Ships in 10 - 15 working days

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.

Markov Chains and Invariant Probabilities (Hardcover, 2003 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R1,694 Discovery Miles 16 940 Ships in 10 - 15 working days

This book is about discrete-time, time-homogeneous Markov chains (MCs) and their ergodic behavior. To this end, most of the material is in fact about stable MCs, by which we mean MCs that admit an invariant probability measure. To state this more precisely and give an overview of the questions we shall be dealing with, we first introduce some notation and terminology. Let (X, B) be a measurable space, and consider an X-valued Markov chain ξ_• = {ξ_k, k = 0, 1, ...} with transition probability function (t.p.f.) P(x, B), i.e., P(x, B) := Prob(ξ_{k+1} ∈ B | ξ_k = x) for each x ∈ X, B ∈ B, and k = 0, 1, .... The MC ξ_• is said to be stable if there exists a probability measure (p.m.) µ on B such that (*) µ(B) = ∫_X µ(dx) P(x, B) for all B ∈ B. If (*) holds, then µ is called an invariant p.m. for the MC ξ_• (or the t.p.f. P).

Further Topics on Discrete-Time Markov Control Processes (Hardcover, 1999 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R4,412 Discovery Miles 44 120 Ships in 10 - 15 working days

Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the authors' earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first. The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.

Discrete-Time Markov Control Processes - Basic Optimality Criteria (Hardcover, 1996 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R4,628 Discovery Miles 46 280 Ships in 10 - 15 working days

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, and (control of) epidemics. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs per stage are bounded, and/or (c) the control constraint sets are compact. Curiously enough, the most widely used control model in engineering and economics, namely the LQ (linear system/quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.

Adaptive Markov Control Processes (Hardcover, 1989 ed.)
Onesimo Hernandez-Lerma
R1,658 Discovery Miles 16 580 Ships in 10 - 15 working days

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMPs), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMPs have been made, and applications to engineering, statistics, and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments in the theory of adaptive CMPs, i.e., CMPs that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full, and references are supplied for further discussion, if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions of the stochastic control problems we are interested in; a brief description of some applications is also provided.

An Introduction to Optimal Control Theory - The Dynamic Programming Approach (Hardcover, 1st ed. 2023)
Onesimo Hernandez-Lerma, Leonardo Ramiro Laura-Guarachi, Saul Mendoza-Palacios, David Gonzalez-Sanchez
R1,648 R1,547 Discovery Miles 15 470 Save R101 (6%) Ships in 9 - 15 working days

This book introduces optimal control problems for large families of deterministic and stochastic systems with discrete or continuous time parameter. These families include most of the systems studied in many disciplines, including economics, engineering, operations research, and management science, among many others. The main objective is to give a concise, systematic, and reasonably self-contained presentation of some key topics in optimal control theory. To this end, most of the analyses are based on the dynamic programming (DP) technique, which is applicable to almost all control problems that appear in theory and applications. These include, for instance, finite and infinite horizon control problems in which the underlying dynamic system follows either a deterministic or stochastic difference or differential equation. In the infinite horizon case, the book also uses DP to study undiscounted problems, such as the ergodic or long-run average cost. After a general introduction to control problems, the book is divided into four parts treating different dynamical systems: control of discrete-time deterministic systems, discrete-time stochastic systems, ordinary differential equations, and finally a general continuous-time MCP with applications to stochastic differential equations. The first and second parts should be accessible to undergraduate students with some knowledge of elementary calculus, linear algebra, and some concepts from probability theory (random variables, expectations, and so forth), whereas the third and fourth parts are appropriate for advanced undergraduates or graduate students who have a working knowledge of mathematical analysis (derivatives, integrals, ...) and stochastic processes.
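
Since the book's main tool is dynamic programming, a minimal illustration of DP in its simplest setting, value iteration for a small discounted finite MDP, may help orient readers (the states, actions, and random numbers below are hypothetical, not from the book):

import numpy as np

rng = np.random.default_rng(1)
n_s, n_a, gamma = 3, 2, 0.95

# P[a, s, t] = Prob(next state t | state s, action a); r[s, a] = one-stage reward.
P = rng.random((n_a, n_s, n_s))
P /= P.sum(axis=2, keepdims=True)      # normalize rows into distributions
r = rng.random((n_s, n_a))

# Value iteration: repeatedly apply the Bellman operator until convergence.
V = np.zeros(n_s)
for _ in range(10_000):
    Q = r + gamma * np.einsum("ast,t->sa", P, V)   # Q(s, a)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("values:", V, "greedy policy:", Q.argmax(axis=1))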

Selected Topics On Continuous-time Controlled Markov Chains And Markov Games (Hardcover)
Onesimo Hernandez-Lerma, Tomas Prieto-Rumeau
R2,978 Discovery Miles 29 780 Ships in 10 - 15 working days

This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. The book is also concerned with Markov games, where two decision-makers (or players) try to optimize their own objective functions. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward) and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on applying the results: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. The book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it may also be of interest to undergraduate and beginning graduate students, because the reader is not assumed to have an advanced mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.
