Showing 1 - 4 of 4 matches in All Departments

Large Deviations For Performance Analysis - Queues, Communication and Computing (Paperback)
Alan Weiss, Adam Shwartz
R1,489 Discovery Miles 14 890 Ships in 12 - 17 working days

Originally published in 1995, Large Deviations for Performance Analysis consists of two synergistic parts. The first half develops the theory of large deviations from the beginning, through recent results on the theory for processes with boundaries, keeping to a very narrow path: continuous-time, discrete-state processes. By developing only what is needed for the applications, the theory is kept to a manageable level, both in terms of length and in terms of difficulty. Within its scope, the treatment is detailed, comprehensive and self-contained. As the book shows, there are sufficiently many interesting applications of jump Markov processes to warrant a special treatment. The second half is a collection of applications developed at Bell Laboratories. The applications cover large areas of the theory of communication networks: circuit-switched transmission, packet transmission, multiple access channels, and the M/M/1 queue. Aspects of parallel computation are covered as well, including basics of job allocation, rollback-based parallel simulation, assorted priority queueing models that might be used in performance models of various computer architectures, and asymptotic coupling of processors. These applications are thoroughly analysed using the tools developed in the first half of the book.
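The M/M/1 queue is the simplest of the listed applications and already shows the exponential decay rates that large deviations theory quantifies: for a stable queue with utilisation rho = lambda/mu < 1, the stationary probability that the queue length is at least n equals rho^n. The short Python sketch below is only an illustration of that standard fact, not code from the book; it simulates the queue as a continuous-time jump Markov process and compares the empirical tail with rho^n.

# Hypothetical illustration (not from the book): the stationary queue length of a
# stable M/M/1 queue is geometric, so P(Q >= n) = rho**n decays exponentially --
# the kind of decay rate that large deviations theory studies.
import random

def simulate_mm1(lam=0.7, mu=1.0, horizon=200_000.0, seed=0):
    """Simulate an M/M/1 queue (a continuous-time jump Markov process) and
    return the time-weighted empirical distribution of the queue length."""
    rng = random.Random(seed)
    t, q = 0.0, 0
    time_in_state = {}
    while t < horizon:
        rate = lam + (mu if q > 0 else 0.0)   # total jump rate in state q
        dt = rng.expovariate(rate)            # exponential holding time in state q
        time_in_state[q] = time_in_state.get(q, 0.0) + dt
        t += dt
        # next jump: arrival with probability lam/rate, departure otherwise
        q = q + 1 if rng.random() < lam / rate else q - 1
    total = sum(time_in_state.values())
    return {k: v / total for k, v in time_in_state.items()}

if __name__ == "__main__":
    lam, mu = 0.7, 1.0
    rho = lam / mu
    dist = simulate_mm1(lam, mu)
    for n in (5, 10, 15):
        tail = sum(p for q, p in dist.items() if q >= n)
        print(f"P(Q >= {n:2d}): simulated {tail:.2e}  vs  rho**n = {rho**n:.2e}")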

Large Deviations For Performance Analysis - Queues, Communication and Computing (Hardcover)
Alan Weiss, Adam Shwartz
R4,492 Discovery Miles 44 920 Ships in 12 - 17 working days


Handbook of Markov Decision Processes - Methods and Applications (Paperback, Softcover reprint of the original 1st ed. 2002)
Eugene A. Feinberg, Adam Shwartz
R9,920 Discovery Miles 99 200 Ships in 10 - 15 working days

This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The papers cover major research areas and methodologies, and discuss open questions and future research directions. The papers can be read independently, with the basic notation and concepts of Section 1.2. Most chapters should be accessible to graduate or advanced undergraduate students in the fields of operations research, electrical engineering, and computer science. The theory of Markov Decision Processes - also known under several other names, including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming - studies sequential optimization of discrete-time stochastic systems. The basic object is a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines the stochastic process and values of objective functions associated with this process. The goal is to select a "good" control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, as well as (ii) they have an impact on the future, by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. MDPs model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation.
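As a concrete illustration of the paragraph above (a hypothetical Python sketch, not material from the handbook): a finite MDP can be written down as a table of transition probabilities and rewards indexed by state and action, and fixing a control policy turns the controlled system into an ordinary Markov chain whose expected discounted reward can then be evaluated for each starting state.

# Hypothetical sketch (not from the handbook): a tiny two-state MDP.
# P[s][a] = list of (next_state, probability) pairs; R[s][a] = immediate reward.
P = {
    0: {"wait": [(0, 0.9), (1, 0.1)], "repair": [(0, 1.0)]},
    1: {"wait": [(1, 0.8), (0, 0.2)], "repair": [(0, 1.0)]},
}
R = {
    0: {"wait": 1.0, "repair": -0.5},
    1: {"wait": -1.0, "repair": -0.5},
}

def evaluate(policy, gamma=0.9, sweeps=500):
    """Iterative policy evaluation: expected discounted reward of a fixed policy."""
    V = {s: 0.0 for s in P}
    for _ in range(sweeps):
        V = {s: R[s][policy[s]]
                + gamma * sum(p * V[s2] for s2, p in P[s][policy[s]])
             for s in P}
    return V

print(evaluate({0: "wait", 1: "repair"}))  # one candidate control policy
print(evaluate({0: "wait", 1: "wait"}))    # a different choice in state 1

The two print calls make the preface's point concrete: the choice made in state 1 changes not only the immediate reward but, through the induced dynamics, the long-run value of every state.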

Handbook of Markov Decision Processes - Methods and Applications (Hardcover, 2002 ed.)
Eugene A. Feinberg, Adam Shwartz
R10,196 Discovery Miles 101 960 Ships in 10 - 15 working days

The theory of Markov Decision Processes - also known under several other names, including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming - studies sequential optimization of discrete-time stochastic systems. Fundamentally, this is a methodology that examines and analyzes a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines the stochastic process and values of objective functions associated with this process. The objective is to select a "good" control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, as well as (ii) they have an impact on the future, by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. Markov Decision Processes (MDPs) model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation. MDPs are attractive to many researchers because they are important both from the practical and the intellectual points of view. MDPs provide tools for the solution of important real-life problems. In particular, many business and engineering applications use MDP models. Analysis of various problems arising in MDPs leads to a large variety of interesting mathematical and computational problems. Accordingly, the Handbook of Markov Decision Processes is split into three parts: Part I deals with models with finite state and action spaces, Part II deals with infinite state problems, and Part III examines specific applications. Individual chapters are written by leading experts on the subject.
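For the finite state and action models of Part I, the standard way to compute a good policy is dynamic programming. The sketch below is a generic value iteration routine on a small made-up MDP (the state names, actions, and numbers are hypothetical, not data or code from the handbook); it repeatedly applies the Bellman optimality update until the values stop changing, then reads off a greedy policy.

# Hypothetical sketch: value iteration for a small finite MDP (made-up data).
# T[(s, a)] = {next_state: probability}; R[(s, a)] = expected immediate reward.
T = {
    ("low", "charge"):  {"high": 1.0},
    ("low", "search"):  {"low": 0.6, "high": 0.4},
    ("high", "search"): {"high": 0.7, "low": 0.3},
    ("high", "wait"):   {"high": 1.0},
}
R = {("low", "charge"): 0.0, ("low", "search"): 1.0,
     ("high", "search"): 2.0, ("high", "wait"): 1.0}

states = {"low", "high"}

def actions(s):
    return [a for (s2, a) in T if s2 == s]

def q_value(s, a, V, gamma):
    return R[(s, a)] + gamma * sum(p * V[s2] for s2, p in T[(s, a)].items())

def value_iteration(gamma=0.95, tol=1e-8):
    """Bellman optimality updates until convergence; returns V and a greedy policy."""
    V = {s: 0.0 for s in states}
    while True:
        new_V = {s: max(q_value(s, a, V, gamma) for a in actions(s)) for s in states}
        if max(abs(new_V[s] - V[s]) for s in states) < tol:
            break
        V = new_V
    policy = {s: max(actions(s), key=lambda a: q_value(s, a, V, gamma)) for s in states}
    return V, policy

print(value_iteration())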
