Showing 1 - 12 of 12 matches in All Departments

Further Topics on Discrete-Time Markov Control Processes (Hardcover, 1999 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R3,801 Discovery Miles 38 010 Ships in 12 - 17 working days

Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the authors' earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first.
The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.

Discrete-Time Markov Control Processes - Basic Optimality Criteria (Hardcover, 1996 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R4,117 Discovery Miles 41 170 Ships in 12 - 17 working days

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, and (control of) epidemics. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs per stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (linear system/quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
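The basic optimality criteria the title refers to center on discounted cost, which in the simplest finite-state, finite-action case can be sketched by value iteration. Everything below (states, actions, transition matrices, costs, discount factor) is a hypothetical toy example for illustration, not material from the book:

```python
import numpy as np

# P[a][s, s'] = transition probability under action a; c[s, a] = cost per stage.
# Hypothetical 2-state, 2-action MDP; the book treats far more general
# Borel-space models with possibly unbounded costs.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.4, 0.6]])]   # action 1
c = np.array([[1.0, 2.0], [0.5, 3.0]])     # c[s, a]
alpha = 0.9                                 # discount factor

def bellman(V):
    """One application of the Bellman optimality operator T."""
    Q = np.stack([c[:, a] + alpha * P[a] @ V for a in range(2)], axis=1)
    return Q.min(axis=1)

V = np.zeros(2)
for _ in range(500):
    V = bellman(V)
# T is a contraction with modulus alpha, so the iterates converge to the
# optimal discounted-cost value function V*, the fixed point V* = T V*.
```

Since the contraction modulus is 0.9, five hundred iterations leave the iterate numerically indistinguishable from the fixed point.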

Continuous-Time Markov Decision Processes - Theory and Applications (Hardcover, 2009 ed.)
Xianping Guo, Onesimo Hernandez-Lerma
R3,133 Discovery Miles 31 330 Ships in 12 - 17 working days

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.

Adaptive Markov Control Processes (Hardcover, 1989 ed.)
Onesimo Hernandez-Lerma
R1,615 Discovery Miles 16 150 Ships in 10 - 15 working days

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMPs), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMPs have been made, and applications to engineering, statistics and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments on the theory of adaptive CMPs, i.e., CMPs that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values, and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full and references are supplied for further discussion, if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions about the stochastic control problems we are interested in; a brief description of some applications is also provided.

Markov Chains and Invariant Probabilities (Hardcover, 2003 ed.)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R1,644 Discovery Miles 16 440 Ships in 12 - 17 working days

This book is about discrete-time, time-homogeneous Markov chains (MCs) and their ergodic behavior. To this end, most of the material is in fact about stable MCs, by which we mean MCs that admit an invariant probability measure. To state this more precisely and give an overview of the questions we shall be dealing with, we will first introduce some notation and terminology. Let (X, B) be a measurable space, and consider an X-valued Markov chain ξ = {ξ_k, k = 0, 1, ...} with transition probability function (t.p.f.) P(x, B), i.e., P(x, B) := Prob(ξ_{k+1} ∈ B | ξ_k = x) for each x ∈ X, B ∈ B, and k = 0, 1, .... The MC ξ is said to be stable if there exists a probability measure (p.m.) μ on B such that (*) μ(B) = ∫_X μ(dx) P(x, B) for all B ∈ B. If (*) holds, then μ is called an invariant p.m. for the MC ξ (or the t.p.f. P).
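In the finite-state case the invariance equation μ(B) = ∫_X μ(dx) P(x, B) reduces to the linear fixed point μ = μP, which for an ergodic chain can be found by simply iterating the chain's action on distributions. The transition matrix below is a hypothetical example, not one from the book:

```python
import numpy as np

# Hypothetical 3-state transition matrix P (each row sums to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.7, 0.2],
              [0.2, 0.3, 0.5]])

mu = np.full(3, 1/3)          # start from the uniform distribution
for _ in range(1000):
    mu = mu @ P               # one step of the chain acting on measures
# For an ergodic chain the iterates converge to the unique invariant
# probability measure, i.e. the row vector satisfying mu = mu @ P.
```

The invariant p.m. could equally be extracted as the left eigenvector of P for eigenvalue 1; power iteration is used here only because it mirrors the fixed-point equation (*) directly.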

Selected Topics On Continuous-time Controlled Markov Chains And Markov Games (Hardcover)
Onesimo Hernandez-Lerma, Tomas Prieto-Rumeau
R2,744 Discovery Miles 27 440 Ships in 12 - 17 working days

This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. This book is also concerned with Markov games, where two decision-makers (or players) try to optimize their own objective function. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas.An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward), and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. A particular emphasis is made on the application of the results herein: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown.This book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it could be of interest also to undergraduate and beginning graduate students because the reader is not supposed to have a high mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.

Discrete-Time Stochastic Control and Dynamic Potential Games - The Euler-Equation Approach (Paperback, 2013 ed.)
David Gonzalez-Sanchez, Onesimo Hernandez-Lerma
R1,618 Discovery Miles 16 180 Ships in 10 - 15 working days

There are several techniques for studying noncooperative dynamic games, such as dynamic programming and the maximum principle (also called the Lagrange method). It turns out, however, that one way to characterize dynamic potential games requires analyzing inverse optimal control problems, and it is here that the Euler-equation approach comes in, because it is particularly well suited to solving inverse problems. Despite the importance of dynamic potential games, there has been no systematic study of them. This monograph is the first attempt to provide a systematic, self-contained presentation of stochastic dynamic potential games.

Markov Chains and Invariant Probabilities (Paperback, Softcover reprint of the original 1st ed. 2003)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R1,502 Discovery Miles 15 020 Ships in 10 - 15 working days

This book is about discrete-time, time-homogeneous Markov chains (MCs) and their ergodic behavior. To this end, most of the material is in fact about stable MCs, by which we mean MCs that admit an invariant probability measure. To state this more precisely and give an overview of the questions we shall be dealing with, we will first introduce some notation and terminology. Let (X, B) be a measurable space, and consider an X-valued Markov chain ξ = {ξ_k, k = 0, 1, ...} with transition probability function (t.p.f.) P(x, B), i.e., P(x, B) := Prob(ξ_{k+1} ∈ B | ξ_k = x) for each x ∈ X, B ∈ B, and k = 0, 1, .... The MC ξ is said to be stable if there exists a probability measure (p.m.) μ on B such that (*) μ(B) = ∫_X μ(dx) P(x, B) for all B ∈ B. If (*) holds, then μ is called an invariant p.m. for the MC ξ (or the t.p.f. P).

Further Topics on Discrete-Time Markov Control Processes (Paperback, Softcover reprint of the original 1st ed. 1999)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R4,103 Discovery Miles 41 030 Ships in 10 - 15 working days

Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the authors' earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first. The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.

Adaptive Markov Control Processes (Paperback, Softcover reprint of the original 1st ed. 1989)
Onesimo Hernandez-Lerma
R1,484 Discovery Miles 14 840 Ships in 10 - 15 working days

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMPs), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMPs have been made, and applications to engineering, statistics and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments on the theory of adaptive CMPs, i.e., CMPs that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values, and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full and references are supplied for further discussion, if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions about the stochastic control problems we are interested in; a brief description of some applications is also provided.

Continuous-Time Markov Decision Processes - Theory and Applications (Paperback, 2009 ed.)
Xianping Guo, Onesimo Hernandez-Lerma
R3,119 Discovery Miles 31 190 Ships in 10 - 15 working days

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.

Discrete-Time Markov Control Processes - Basic Optimality Criteria (Paperback, Softcover reprint of the original 1st ed. 1996)
Onesimo Hernandez-Lerma, Jean B. Lasserre
R4,331 Discovery Miles 43 310 Ships in 10 - 15 working days

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, and (control of) epidemics. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs per stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (linear system/quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
