This research monograph summarizes a line of research that maps
certain classical problems of discrete mathematics and operations
research, such as the Hamiltonian Cycle and Travelling Salesman
Problems, into convex domains where continuum analysis can be
carried out. Arguably, the inherent difficulty of these
now-classical problems stems precisely from the discrete nature of
the domains in which they are posed. The convexification of
domains underpinning these results is achieved by assigning a
probabilistic interpretation to key elements of the original
deterministic problems. In particular, the approaches summarized
here build on a technique that embeds Hamiltonian Cycle and
Travelling Salesman Problems in a structured singularly perturbed
Markov decision process. The unifying idea is to interpret
subgraphs traced out by deterministic policies (including
Hamiltonian cycles, if any) as extreme points of a convex
polyhedron in the space of randomized policies. This innovative
approach has now evolved to the point where many theoretical and
algorithmic results exploit the nexus between graph-theoretic
structures and the probabilistic and algebraic entities of related
Markov chains. The latter include moments of first return times,
limiting frequencies of visits to nodes, and the spectra of certain
matrices traditionally associated with the analysis of Markov
chains. However, these results and
algorithms are dispersed over many research papers appearing in
journals catering to disparate audiences. As a result, the
published papers are often written very tersely and use disparate
notation, making it difficult for new researchers to build on the
many reported advances. Hence the
main purpose of this book is to present a concise and yet easily
accessible synthesis of the majority of the theoretical and
algorithmic results obtained so far. In addition, the book
discusses numerous open questions and problems that arise from this
body of work and which are yet to be fully solved. The approach
casts the Hamiltonian Cycle Problem in a mathematical framework
that permits analytical concepts and techniques, not used hitherto
in this context, to be brought to bear, further clarifying both the
underlying difficulty of this NP-complete problem and the relative
exceptionality of truly difficult instances. Finally, the
material is arranged in such a manner that the introductory
chapters require very little mathematical background and discuss
instances of graphs with interesting structures that motivated much
of the research on this topic. The more difficult results are
introduced later and illustrated with numerous examples.
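The correspondence between deterministic policies and subgraphs can be illustrated with a brute-force sketch (a hypothetical toy 4-node graph and an exhaustive check, not one of the book's actual algorithms): each deterministic policy picks one outgoing arc per node, and some of these policies trace out Hamiltonian cycles.

```python
from itertools import product

# Hypothetical toy graph: out-neighbour lists on 4 nodes.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}

def traces_hamiltonian_cycle(policy, n):
    """A deterministic policy picks one out-arc per node; follow it from
    node 0 and check whether it traces a single cycle through all n nodes."""
    seen, node = [], 0
    for _ in range(n):
        seen.append(node)
        node = policy[node]
    return node == 0 and len(set(seen)) == n

n = len(graph)
# Enumerate all deterministic policies (one out-arc choice per node);
# in the embedding described above, these are the extreme points of
# the polytope of randomized policies.
policies = [dict(zip(graph, choice)) for choice in product(*graph.values())]
hamiltonian = [p for p in policies if traces_hamiltonian_cycle(p, n)]
print(len(policies), len(hamiltonian))  # 16 policies, 2 trace Hamiltonian cycles
```

Of the 16 deterministic policies on this toy graph, exactly two (the cycle 0-1-2-3-0 and its reversal) are Hamiltonian; the book's results replace such enumeration with analysis over the convex polytope itself.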
This book presents a selection of topics from probability theory.
Essentially, the topics chosen are those that are likely to be the
most useful to someone planning to pursue research in the modern
theory of stochastic processes. The prospective reader is assumed
to have good mathematical maturity. In particular, they should have
prior exposure to basic probability theory at the level of, say,
K.L. Chung's 'Elementary probability theory with stochastic
processes' (Springer-Verlag, 1974) and real and functional analysis
at the level of Royden's 'Real analysis' (Macmillan, 1968). The
first chapter is a rapid overview of the basics. Each subsequent
chapter deals with a separate topic in detail. There is clearly
some selection involved and therefore many omissions, but that
cannot be helped in a book of this size. The style is deliberately
terse to enforce active learning. Thus several tidbits of deduction
are left to the reader as labelled exercises in the main text of
each chapter. In addition, there are supplementary exercises at the
end. In the preface to his classic text on probability
('Probability', Addison-Wesley, 1968), Leo Breiman speaks of the
right and left hands of probability.
This comprehensive volume on ergodic control for diffusions
highlights intuition alongside technical arguments. A concise
account of Markov process theory is followed by a complete
development of the fundamental issues and formalisms in control of
diffusions. This then leads to a comprehensive treatment of ergodic
control, a problem that straddles stochastic control and the
ergodic theory of Markov processes. The interplay between the
probabilistic and ergodic-theoretic aspects of the problem, notably
the asymptotics of empirical measures on the one hand, and the
analytic aspects leading to a characterization of optimality via
the associated Hamilton-Jacobi-Bellman equation on the other, is
clearly revealed. The more abstract controlled martingale problem
is also presented, in addition to many other related issues and
models. Assuming only graduate-level probability and analysis, the
authors develop the theory in a manner that makes it accessible to
users in applied mathematics, engineering, finance and operations
research.
This simple, compact toolkit for designing and analyzing stochastic
approximation algorithms requires only basic literacy in
probability and differential equations. Yet these algorithms have
powerful applications in control and communications engineering,
artificial intelligence and economic modelling. The dynamical
systems viewpoint treats an algorithm as a noisy discretization of
a limiting differential equation and argues that, under reasonable
hypotheses, it tracks the asymptotic behaviour of the differential
equation with probability one. The differential equation, which can
usually be obtained by inspection, is easier to analyze. Novel
topics include finite-time behaviour, multiple timescales and
asynchronous implementation. There is a useful taxonomy of
applications, with concrete examples from engineering and
economics. Notably, it covers variants of stochastic gradient-based
optimization schemes; fixed-point solvers, which are commonplace in
learning algorithms for approximate dynamic programming; and some
models of collective behaviour. Three appendices give background on
differential equations and probability.
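The dynamical-systems viewpoint described above can be sketched in a few lines (a hypothetical example, not taken from the book): a noisy iteration x = x + a_n * (h(x) + noise) with drift h(x) = 1 - x, whose limiting differential equation dx/dt = 1 - x converges to the equilibrium x* = 1.

```python
import random

random.seed(0)

def h(x):
    # Mean drift; the limiting ODE dx/dt = h(x) = 1 - x has equilibrium x* = 1.
    return 1.0 - x

x = 0.0
for n in range(1, 20001):
    a_n = 1.0 / n                   # step sizes: sum a_n = inf, sum a_n^2 < inf
    noise = random.gauss(0.0, 1.0)  # zero-mean measurement noise
    x = x + a_n * (h(x) + noise)    # noisy Euler discretization of the ODE

print(x)  # close to the ODE's equilibrium x* = 1
```

Despite the unit-variance noise, the decreasing step sizes average it out, and the iterate tracks the asymptotic behaviour of the differential equation, as the "with probability one" claim above describes.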