Originally published in 1995, Large Deviations for Performance
Analysis consists of two synergistic parts. The first half develops
the theory of large deviations from the beginning, through recent
results on the theory for processes with boundaries, keeping to a
very narrow path: continuous-time, discrete-state processes. By
developing only what is needed for the applications, the theory is
kept to a manageable level, both in terms of length and in terms of
difficulty. Within its scope, the treatment is detailed,
comprehensive and self-contained. As the book shows, there are
sufficiently many interesting applications of jump Markov processes
to warrant a special treatment. The second half is a collection of
applications developed at Bell Laboratories. The applications cover
large areas of the theory of communication networks: circuit-switched
transmission, packet transmission, multiple-access channels, and the
M/M/1 queue. Aspects of parallel computation are covered as well,
including basics of job allocation, rollback-based
parallel simulation, assorted priority queueing models that might
be used in performance models of various computer architectures,
and asymptotic coupling of processors. These applications are
thoroughly analysed using the tools developed in the first half of
the book.
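The exponential decay that large-deviations theory quantifies can be illustrated with a standard fact (not taken from the book) about the stable M/M/1 queue: with load rho = lambda/mu < 1, the stationary queue length is geometric, so the tail P(Q >= n) = rho**n decays exponentially in n with rate log(1/rho). A minimal sketch:

```python
import math

def mm1_tail(rho: float, n: int) -> float:
    """P(queue length >= n) in a stationary M/M/1 queue with load rho < 1.

    The stationary distribution is geometric, pi_k = (1 - rho) * rho**k,
    so the tail P(Q >= n) = rho**n decays exponentially in n.
    """
    assert 0 < rho < 1, "queue must be stable"
    return rho ** n

# The large-deviations decay rate per unit of n: -log P(Q >= n) / n.
rho = 0.8
rate = -math.log(mm1_tail(rho, 50)) / 50
print(rate)  # equals log(1/rho) = log(1.25), about 0.2231
```

The point of the rate function is that it captures exactly this exponent, here log(1/rho), without having to compute the full distribution.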
Eugene A. Feinberg and Adam Shwartz. This volume deals with the theory
of Markov Decision Processes (MDPs) and their applications. Each
chapter was written by a leading expert in the respective area. The
papers cover major research areas and methodologies, and discuss open
questions and future research directions. The papers can be read
independently, given the basic notation and concepts of Section 1.2.
Most chapters should be accessible to graduate or advanced
undergraduate students in the fields of operations research,
electrical engineering, and computer science.

1.1 AN OVERVIEW OF MARKOV DECISION PROCESSES

The theory of Markov Decision Processes, also known under several
other names including sequential stochastic optimization,
discrete-time stochastic control, and stochastic dynamic programming,
studies sequential optimization of discrete-time stochastic systems.
The basic object is a discrete-time stochastic system whose transition
mechanism can be controlled over time. Each control policy defines a
stochastic process and the values of the objective functions
associated with this process. The goal is to select a "good" control
policy. In real life, decisions that humans and computers make on all
levels usually have two types of impacts: (i) they cost or save time,
money, or other resources, or they bring revenues, and (ii) they
influence the future by affecting the dynamics. In many situations,
decisions with the largest immediate profit may not be good in view of
future events. MDPs model this paradigm and provide results on the
structure and existence of good policies and on methods for their
calculation.
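The paradigm described above can be sketched with a toy example (not from the handbook; all transition probabilities and rewards below are invented for illustration). Value iteration repeatedly applies the Bellman optimality update to compute the optimal discounted value function, and then reads off a good policy, for a two-state, two-action MDP:

```python
# Toy discounted MDP: P[a][s][t] is the probability of moving from
# state s to state t under action a; R[a][s] is the immediate reward
# for taking action a in state s. All numbers are illustrative.
P = {
    0: [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    1: [[0.5, 0.5], [0.6, 0.4]],   # transitions under action 1
}
R = {0: [1.0, 0.0], 1: [0.5, 2.0]}
gamma = 0.9                        # discount factor

# Value iteration: repeatedly apply the Bellman optimality update
# V(s) <- max_a [ R(a, s) + gamma * sum_t P(a, s, t) * V(t) ].
V = [0.0, 0.0]
for _ in range(500):
    V = [max(R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(2))
             for a in (0, 1))
         for s in range(2)]

# A greedy policy with respect to V is optimal once V has converged.
policy = [max((0, 1),
              key=lambda a: R[a][s] + gamma *
              sum(P[a][s][t] * V[t] for t in range(2)))
          for s in range(2)]
print(V, policy)
```

Note how the greedy action in each state need not be the one with the largest immediate reward: the discounted continuation term is exactly the "impact on the future" the paragraph describes.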
MDPs are attractive to many
researchers because they are important both from the practical and
the intellectual points of view. MDPs provide tools for the
solution of important real-life problems. In particular, many
business and engineering applications use MDP models. Analysis of
various problems arising in MDPs leads to a large variety of
interesting mathematical and computational problems. Accordingly,
the Handbook of Markov Decision Processes is split into three
parts: Part I deals with models with finite state and action spaces,
Part II deals with infinite-state problems, and Part III examines
specific applications. Individual chapters are written
by leading experts on the subject.