Showing 1 - 2 of 2 matches in All Departments

Handbook of Markov Decision Processes - Methods and Applications (Hardcover, 2002 ed.)
Eugene A. Feinberg, Adam Shwartz
R9,332. Ships in 10 - 15 working days.

The theory of Markov Decision Processes - also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming - studies sequential optimization of discrete-time stochastic systems. Fundamentally, this is a methodology that examines and analyzes a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines the stochastic process and the values of objective functions associated with this process. The objective is to select a "good" control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, and (ii) they have an impact on the future by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. Markov Decision Processes (MDPs) model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation. MDPs are attractive to many researchers because they are important from both the practical and the intellectual points of view. MDPs provide tools for the solution of important real-life problems; in particular, many business and engineering applications use MDP models. Analysis of various problems arising in MDPs leads to a large variety of interesting mathematical and computational problems. Accordingly, the Handbook of Markov Decision Processes is split into three parts: Part I deals with models with finite state and action spaces, Part II deals with infinite state problems, and Part III examines specific applications. Individual chapters are written by leading experts on the subject.
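The Bellman recursion sketched in this description - each policy induces a stochastic process, and a "good" policy maximizes a discounted objective - can be illustrated with value iteration on a toy finite MDP. This is a minimal sketch with a made-up two-state, two-action model, not an example from the book; the transition matrix `P`, reward table `R`, and discount `gamma` are all hypothetical.

```python
import numpy as np

# Hypothetical MDP (not from the book): 2 states, 2 actions.
# P[a][s][s'] = probability of moving s -> s' under action a.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.0, 1.0]],   # transitions under action 1
])
# R[s][a] = immediate reward for taking action a in state s.
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])
gamma = 0.9  # discount factor

# Value iteration: apply the Bellman optimality operator
#   V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
# until the values stop changing.
V = np.zeros(2)
for _ in range(1000):
    # Q[s, a] = R(s,a) + gamma * expected value of the next state
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# The greedy policy with respect to the converged values is optimal.
policy = Q.argmax(axis=1)
```

Since action 1 in state 1 yields reward 2 and keeps the system in state 1, the optimal value there is 2/(1 - gamma) = 20, and the greedy policy prefers action 1 in both states for this particular model.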

Handbook of Markov Decision Processes - Methods and Applications (Paperback, Softcover reprint of the original 1st ed. 2002)
Eugene A. Feinberg, Adam Shwartz
R10,551. Ships in 10 - 15 working days.

This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The papers cover major research areas and methodologies, and discuss open questions and future research directions. The papers can be read independently, with the basic notation and concepts of Section 1.2. Most chapters should be accessible to graduate or advanced undergraduate students in the fields of operations research, electrical engineering, and computer science.

1.1 An Overview of Markov Decision Processes

The theory of Markov Decision Processes - also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming - studies sequential optimization of discrete-time stochastic systems. The basic object is a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines the stochastic process and the values of objective functions associated with this process. The goal is to select a "good" control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, and (ii) they have an impact on the future by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. MDPs model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation.
