Showing 1 - 13 of 13 matches in All Departments

Further Topics on Discrete-Time Markov Control Processes (Hardcover, 1999 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R3,772 | Ships in 12 - 17 working days

Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the authors' earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first. The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.

Discrete-Time Markov Control Processes - Basic Optimality Criteria (Hardcover, 1996 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R4,087 | Ships in 12 - 17 working days

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, and (control of) epidemics. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs-per-stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (Linear system/Quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
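
For readers who want the basic discounted-cost dynamic programming recursion in concrete form, here is a minimal value-iteration sketch for a finite MDP. The book's setting (Borel spaces, possibly unbounded costs, noncompact constraint sets) is far more general; the model data below are purely hypothetical.

import numpy as np

# Hypothetical 2-state, 2-action model: P[a][s][t] is the probability of
# moving from state s to state t under action a; c[s][a] is the stage cost.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
c = np.array([[1.0, 2.0],
              [0.5, 3.0]])
beta = 0.95  # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman operator: V(s) = min_a [ c(s,a) + beta * sum_t P(t|s,a) V(t) ]
    Q = c + beta * np.einsum('ast,t->sa', P, V)
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("optimal values:", V)
print("optimal policy:", Q.argmin(axis=1))  # one optimal action per state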

Continuous-Time Markov Decision Processes - Theory and Applications (Hardcover, 2009 ed.)
Xianping Guo, Onesimo Hernandez-Lerma
R3,105 | Ships in 12 - 17 working days

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
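
To make the underlying object concrete, the following sketch simulates a continuous-time Markov chain under a fixed stationary policy: each state is held for an exponentially distributed time, after which the process jumps according to an embedded transition matrix. The rates and jump probabilities are made up, and the decision component is suppressed by fixing the policy; this is an illustration, not an example from the book.

import numpy as np

rng = np.random.default_rng(0)
rates = np.array([1.0, 2.5])   # exit rate of each state under the fixed policy
jump = np.array([[0.0, 1.0],   # embedded jump-chain probabilities
                 [1.0, 0.0]])

def simulate(s0, horizon):
    """Return the piecewise-constant sample path up to the given horizon."""
    t, s, path = 0.0, s0, [(0.0, s0)]
    while t < horizon:
        t += rng.exponential(1.0 / rates[s])        # exponential holding time
        s = int(rng.choice(len(rates), p=jump[s]))  # jump to the next state
        path.append((t, s))
    return path

print(simulate(0, horizon=5.0))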

Adaptive Markov Control Processes (Hardcover, 1989 ed.)
Onesimo Hernandez-Lerma
R1,563 | Ships in 10 - 15 working days

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMPs), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMPs have been made, and applications to engineering, statistics and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments on the theory of adaptive CMPs, i.e., CMPs that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values, and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full and references are supplied for further discussion, if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions about the stochastic control problems we are interested in; a brief description of some applications is also provided.
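
The "estimate, then adapt" loop described above can be sketched as a certainty-equivalence controller for a toy scalar system x' = theta*x + a + noise with unknown theta. The model and all names below are hypothetical illustrations, not examples from the book.

import numpy as np

rng = np.random.default_rng(1)
theta_true = 0.8   # the unknown parameter the controller must learn
theta_hat = 0.0    # initial estimate
x = 1.0
history = []       # observed transitions (x, a, x_next)

for k in range(200):
    # Certainty-equivalent action: optimal if the estimate were exact,
    # here chosen to drive the expected next state to zero.
    a = -theta_hat * x
    x_next = theta_true * x + a + 0.1 * rng.normal()
    history.append((x, a, x_next))
    # Re-estimate theta by least squares: x_next - a = theta * x + noise.
    num = sum((xn - ai) * xi for xi, ai, xn in history)
    den = sum(xi * xi for xi, ai, xn in history)
    if den > 1e-12:
        theta_hat = num / den
    x = x_next

print(f"estimate {theta_hat:.3f} vs true value {theta_true}")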

Markov Chains and Invariant Probabilities (Hardcover, 2003 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R1,596 | Ships in 10 - 15 working days

This book is about discrete-time, time-homogeneous Markov chains (MCs) and their ergodic behavior. To this end, most of the material is in fact about stable MCs, by which we mean MCs that admit an invariant probability measure. To state this more precisely and give an overview of the questions we shall be dealing with, we will first introduce some notation and terminology. Let $(X, \mathcal{B})$ be a measurable space, and consider an $X$-valued Markov chain $\xi_\bullet = \{\xi_k,\ k = 0, 1, \dots\}$ with transition probability function (t.p.f.) $P(x, B)$, i.e., $P(x, B) := \operatorname{Prob}(\xi_{k+1} \in B \mid \xi_k = x)$ for each $x \in X$, $B \in \mathcal{B}$, and $k = 0, 1, \dots$. The MC $\xi_\bullet$ is said to be stable if there exists a probability measure (p.m.) $\mu$ on $\mathcal{B}$ such that
\[ (*) \qquad \mu(B) = \int_X \mu(dx)\, P(x, B) \quad \text{for all } B \in \mathcal{B}. \]
If (*) holds, then $\mu$ is called an invariant p.m. for the MC $\xi_\bullet$ (or the t.p.f. $P$).
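
For a finite state space, the invariance condition (*) reduces to the linear system mu P = mu with mu a probability vector. A minimal Python sketch (the 3-state transition matrix below is made up for illustration, not taken from the book):

import numpy as np

# Made-up 3-state transition matrix P(s, t).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# An invariant p.m. is a left eigenvector of P for eigenvalue 1,
# normalized to sum to 1.
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1.0))])
mu = mu / mu.sum()

print(mu)                       # here: [0.25, 0.5, 0.25]
assert np.allclose(mu @ P, mu)  # the finite analogue of (*): mu P = mu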

Selected Topics On Continuous-time Controlled Markov Chains And Markov Games (Hardcover)
Onesimo Hernandez-Lerma, Tomas Prieto-Rumeau
R2,717 | Ships in 12 - 17 working days

This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. This book is also concerned with Markov games, where two decision-makers (or players) each try to optimize their own objective function. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward) and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on the application of the results: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. This book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it could also be of interest to undergraduate and beginning graduate students, because the reader is not assumed to have a strong mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.

Discrete-Time Stochastic Control and Dynamic Potential Games - The Euler-Equation Approach (Paperback, 2013 ed.)
David Gonzalez-Sanchez, Onesimo Hernandez-Lerma
R1,568 | Ships in 10 - 15 working days

There are several techniques for studying noncooperative dynamic games, such as dynamic programming and the maximum principle (also called the Lagrange method). It turns out, however, that one way to characterize dynamic potential games requires analyzing inverse optimal control problems, and it is here that the Euler equation approach comes in, because it is particularly well suited to solving inverse problems. Despite the importance of dynamic potential games, there has been no systematic study of them. This monograph is the first attempt to provide a systematic, self-contained presentation of stochastic dynamic potential games.
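
For orientation, here is the Euler equation in its standard discrete-time form (a textbook statement, not a quotation from this monograph). For the problem of maximizing $\sum_{t \ge 0} \beta^t F(x_t, x_{t+1})$ over state sequences, differentiating with respect to each interior $x_{t+1}$ gives the first-order condition

\[ F_2(x_t, x_{t+1}) + \beta\, F_1(x_{t+1}, x_{t+2}) = 0, \qquad t = 0, 1, \dots, \]

where $F_i$ denotes the partial derivative of $F$ with respect to its $i$-th argument; a solution path is then pinned down by the initial state together with a transversality condition.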

Markov Chains and Invariant Probabilities (Paperback, Softcover reprint of the original 1st ed. 2003)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R1,450 | Ships in 10 - 15 working days

This book is about discrete-time, time-homogeneous Markov chains (MCs) and their ergodic behavior. To this end, most of the material is in fact about stable MCs, by which we mean MCs that admit an invariant probability measure. To state this more precisely and give an overview of the questions we shall be dealing with, we will first introduce some notation and terminology. Let $(X, \mathcal{B})$ be a measurable space, and consider an $X$-valued Markov chain $\xi_\bullet = \{\xi_k,\ k = 0, 1, \dots\}$ with transition probability function (t.p.f.) $P(x, B)$, i.e., $P(x, B) := \operatorname{Prob}(\xi_{k+1} \in B \mid \xi_k = x)$ for each $x \in X$, $B \in \mathcal{B}$, and $k = 0, 1, \dots$. The MC $\xi_\bullet$ is said to be stable if there exists a probability measure (p.m.) $\mu$ on $\mathcal{B}$ such that
\[ (*) \qquad \mu(B) = \int_X \mu(dx)\, P(x, B) \quad \text{for all } B \in \mathcal{B}. \]
If (*) holds, then $\mu$ is called an invariant p.m. for the MC $\xi_\bullet$ (or the t.p.f. $P$).

Further Topics on Discrete-Time Markov Control Processes (Paperback, Softcover reprint of the original 1st ed. 1999)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R3,988 | Ships in 10 - 15 working days

Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the authors' earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first. The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.

Discrete-Time Markov Control Processes - Basic Optimality Criteria (Paperback, Softcover reprint of the original 1st ed. 1996)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R4,211 | Ships in 10 - 15 working days

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, and (control of) epidemics. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs-per-stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (Linear system/Quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.

Adaptive Markov Control Processes (Paperback, Softcover reprint of the original 1st ed. 1989)
Onesimo Hernandez-Lerma
R1,432 | Ships in 10 - 15 working days

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMPs), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMPs have been made, and applications to engineering, statistics and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments on the theory of adaptive CMPs, i.e., CMPs that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values, and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full and references are supplied for further discussion, if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions about the stochastic control problems we are interested in; a brief description of some applications is also provided.

Continuous-Time Markov Decision Processes - Theory and Applications (Paperback, 2009 ed.)
Xianping Guo, Onesimo Hernandez-Lerma
R3,028 | Ships in 10 - 15 working days

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.

An Introduction to Optimal Control Theory - The Dynamic Programming Approach (Hardcover, 1st ed. 2023)
Onesimo Hernandez-Lerma, Leonardo Ramiro Laura-Guarachi, Saul Mendoza-Palacios, David Gonzalez-Sanchez
R1,489 (was R1,584; save R95, 6%) | Ships in 9 - 15 working days

This book introduces optimal control problems for large families of deterministic and stochastic systems with a discrete or continuous time parameter. These families include most of the systems studied in many disciplines, including economics, engineering, operations research, and management science, among many others. The main objective is to give a concise, systematic, and reasonably self-contained presentation of some key topics in optimal control theory. To this end, most of the analyses are based on the dynamic programming (DP) technique. This technique is applicable to almost all control problems that appear in theory and applications, including, for instance, finite- and infinite-horizon control problems in which the underlying dynamic system follows either a deterministic or stochastic difference or differential equation. In the infinite-horizon case, the book also uses DP to study undiscounted problems, such as the ergodic or long-run average cost. After a general introduction to control problems, the book is divided into four parts treating different dynamical systems: control of discrete-time deterministic systems, discrete-time stochastic systems, ordinary differential equations, and finally a general continuous-time MCP with applications to stochastic differential equations. The first and second parts should be accessible to undergraduate students with some knowledge of elementary calculus, linear algebra, and some concepts from probability theory (random variables, expectations, and so forth), whereas the third and fourth parts are appropriate for advanced undergraduate or graduate students who have a working knowledge of mathematical analysis (derivatives, integrals, ...) and stochastic processes.
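
As a concrete illustration of the DP technique that runs through the book, here is a minimal backward-recursion sketch for a finite-horizon deterministic problem with finitely many states and actions; the dynamics and costs below are made up, not taken from the book.

# Finite-horizon recursion: V_N(x) = 0 and, for t < N,
#   V_t(x) = min_a [ c(x, a) + V_{t+1}(f(x, a)) ].
states, actions, N = range(3), range(2), 4

def f(x, a):
    # Hypothetical deterministic dynamics.
    return (x + a) % 3

def c(x, a):
    # Hypothetical stage cost.
    return (x - 1) ** 2 + a

V = {x: 0.0 for x in states}   # zero terminal cost at time N
policy = {}
for t in reversed(range(N)):   # backward induction: t = N-1, ..., 0
    V_new, rule = {}, {}
    for x in states:
        best = min(actions, key=lambda a: c(x, a) + V[f(x, a)])
        rule[x], V_new[x] = best, c(x, best) + V[f(x, best)]
    V, policy[t] = V_new, rule

print(V)          # optimal cost-to-go from each initial state
print(policy[0])  # optimal first-stage decision rule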
