Showing 1 - 12 of 12 matches in All Departments

Further Topics on Discrete-Time Markov Control Processes (Hardcover, 1999 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R3,724 · Discovery Miles 37 240 · Ships in 10-15 working days

Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the authors' earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first.
The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.

Discrete-Time Markov Control Processes - Basic Optimality Criteria (Hardcover, 1996 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R4,033 · Discovery Miles 40 330 · Ships in 10-15 working days

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example engineering, economics, operations research, statistics, renewable and nonrenewable resource management, and (control of) epidemics. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs-per-stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (linear system/quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
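As a minimal illustration of the countable-state special case (a) that the blurb contrasts with the Borel-space theory, the sketch below runs discounted value iteration on a hypothetical two-state, two-action MDP; the transition probabilities, rewards, and discount factor are invented for illustration and do not come from the book.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (all numbers invented for illustration).
# P[a, s, t] = probability of moving s -> t under action a; r[s, a] = one-stage reward.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality operator: Q(s, a) = r(s, a) + gamma * sum_t P(t | s, a) V(t)
    Q = r + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy for the converged value function
print(V, policy)
```

Because the discount factor is below 1, the Bellman operator is a contraction, so the iteration converges geometrically; this is the finite counterpart of the fixed-point arguments used for the general Borel-space models.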

Continuous-Time Markov Decision Processes - Theory and Applications (Hardcover, 2009 ed.)
Xianping Guo, Onesimo Hernandez-Lerma
R3,011 · Discovery Miles 30 110 · Ships in 18-22 working days

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.

Adaptive Markov Control Processes (Hardcover, 1989 ed.)
Onesimo Hernandez-Lerma
R1,494 · Discovery Miles 14 940 · Ships in 18-22 working days

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMPs), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMPs have been made, and applications to engineering, statistics, and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments on the theory of adaptive CMPs, i.e., CMPs that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full, and references are supplied for further discussion if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions about the stochastic control problems we are interested in; a brief description of some applications is also provided.

Markov Chains and Invariant Probabilities (Hardcover, 2003 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R1,525 · Discovery Miles 15 250 · Ships in 18-22 working days

This book is about discrete-time, time-homogeneous Markov chains (MCs) and their ergodic behavior. To this end, most of the material is in fact about stable MCs, by which we mean MCs that admit an invariant probability measure. To state this more precisely and give an overview of the questions we shall be dealing with, we will first introduce some notation and terminology. Let (X, B) be a measurable space, and consider an X-valued Markov chain ξ. = {ξ_k, k = 0, 1, ...} with transition probability function (t.p.f.) P(x, B), i.e., P(x, B) := Prob(ξ_{k+1} ∈ B | ξ_k = x) for each x ∈ X, B ∈ B, and k = 0, 1, .... The MC ξ. is said to be stable if there exists a probability measure (p.m.) μ on B such that

(*)  μ(B) = ∫_X μ(dx) P(x, B)  for all B ∈ B.

If (*) holds, then μ is called an invariant p.m. for the MC ξ. (or the t.p.f. P).
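In the finite-state special case, the invariance equation μ(B) = ∫ μ(dx) P(x, B) reduces to the left-eigenvector condition μP = μ. The following sketch finds the invariant probability measure of a hypothetical 3-state transition matrix (the matrix entries are invented for illustration):

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1; numbers invented).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# An invariant p.m. mu satisfies mu P = mu with the entries summing to 1,
# i.e., mu is a left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
mu = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
mu = mu / mu.sum()  # normalize to a probability distribution

print(mu)                       # the invariant (stationary) distribution
print(np.allclose(mu @ P, mu))  # invariance check: mu P = mu
```

Since every stochastic matrix has eigenvalue 1, such a μ always exists in the finite case; the book's subject is precisely when this existence, and the associated ergodic behavior, carries over to chains on general measurable spaces.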

Selected Topics on Continuous-Time Controlled Markov Chains and Markov Games (Hardcover)
Onesimo Hernandez-Lerma, Tomas Prieto-Rumeau
R2,671 · Discovery Miles 26 710 · Ships in 18-22 working days

This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. This book is also concerned with Markov games, where two decision-makers (or players) try to optimize their own objective functions. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward) and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on the application of the results: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. This book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it may also be of interest to undergraduate and beginning graduate students, because the reader is not assumed to have an advanced mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.

Markov Chains and Invariant Probabilities (Paperback, Softcover reprint of the original 1st ed. 2003)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R1,390 · Discovery Miles 13 900 · Ships in 18-22 working days

This book is about discrete-time, time-homogeneous Markov chains (MCs) and their ergodic behavior. To this end, most of the material is in fact about stable MCs, by which we mean MCs that admit an invariant probability measure. To state this more precisely and give an overview of the questions we shall be dealing with, we will first introduce some notation and terminology. Let (X, B) be a measurable space, and consider an X-valued Markov chain ξ. = {ξ_k, k = 0, 1, ...} with transition probability function (t.p.f.) P(x, B), i.e., P(x, B) := Prob(ξ_{k+1} ∈ B | ξ_k = x) for each x ∈ X, B ∈ B, and k = 0, 1, .... The MC ξ. is said to be stable if there exists a probability measure (p.m.) μ on B such that

(*)  μ(B) = ∫_X μ(dx) P(x, B)  for all B ∈ B.

If (*) holds, then μ is called an invariant p.m. for the MC ξ. (or the t.p.f. P).

Further Topics on Discrete-Time Markov Control Processes (Paperback, Softcover reprint of the original 1st ed. 1999)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R3,785 · Discovery Miles 37 850 · Ships in 18-22 working days

Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the authors' earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first. The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.

Discrete-Time Stochastic Control and Dynamic Potential Games - The Euler-Equation Approach (Paperback, 2013 ed.)
David Gonzalez-Sanchez, Onesimo Hernandez-Lerma
R1,496 · Discovery Miles 14 960 · Ships in 18-22 working days

There are several techniques to study noncooperative dynamic games, such as dynamic programming and the maximum principle (also called the Lagrange method). It turns out, however, that one way to characterize dynamic potential games requires analyzing inverse optimal control problems, and it is here that the Euler-equation approach comes in, because it is particularly well suited to solving inverse problems. Despite the importance of dynamic potential games, there has been no systematic study of them. This monograph is the first attempt to provide a systematic, self-contained presentation of stochastic dynamic potential games.

Continuous-Time Markov Decision Processes - Theory and Applications (Paperback, 2009 ed.)
Xianping Guo, Onesimo Hernandez-Lerma
R2,879 · Discovery Miles 28 790 · Ships in 18-22 working days

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.

Discrete-Time Markov Control Processes - Basic Optimality Criteria (Paperback, Softcover reprint of the original 1st ed. 1996)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R3,996 · Discovery Miles 39 960 · Ships in 18-22 working days

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example engineering, economics, operations research, statistics, renewable and nonrenewable resource management, and (control of) epidemics. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs-per-stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (linear system/quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.

Adaptive Markov Control Processes (Paperback, Softcover reprint of the original 1st ed. 1989)
Onesimo Hernandez-Lerma
R1,374 · Discovery Miles 13 740 · Ships in 18-22 working days

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMPs), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMPs have been made, and applications to engineering, statistics, and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments on the theory of adaptive CMPs, i.e., CMPs that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full, and references are supplied for further discussion if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions about the stochastic control problems we are interested in; a brief description of some applications is also provided.
