Semidefinite and conic optimization is a major and thriving research area within the optimization community. Although semidefinite optimization has been studied (under different names) since at least the 1940s, its importance grew immensely during the 1990s after polynomial-time interior-point methods for linear optimization were extended to solve semidefinite optimization problems. Since the beginning of the 21st century, not only has research into semidefinite and conic optimization continued unabated, but also a fruitful interaction has developed with algebraic geometry through the close connections between semidefinite matrices and polynomial optimization. This has brought about important new results and led to an even higher level of research activity. This Handbook on Semidefinite, Conic and Polynomial Optimization provides the reader with a snapshot of the state-of-the-art in the growing and mutually enriching areas of semidefinite optimization, conic optimization, and polynomial optimization. It contains a compendium of the recent research activity that has taken place in these thrilling areas, and will appeal to doctoral students, young graduates, and experienced researchers alike. The Handbook's thirty-one chapters are organized into four parts: Theory, covering significant theoretical developments as well as the interactions between conic optimization and polynomial optimization; Algorithms, documenting the directions of current algorithmic development; Software, providing an overview of the state-of-the-art; Applications, dealing with the application areas where semidefinite and conic optimization has made a significant impact in recent years.
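For orientation, a semidefinite optimization problem can be written in a standard primal form (a generic textbook formulation, not specific to this Handbook):

$$\min_{X \in \mathbb{S}^n} \; \langle C, X \rangle \quad \text{subject to} \quad \langle A_i, X \rangle = b_i, \; i = 1, \dots, m, \qquad X \succeq 0,$$

where $\langle C, X \rangle = \operatorname{trace}(CX)$ and $X \succeq 0$ means the symmetric matrix $X$ is positive semidefinite. Linear optimization is recovered as the special case in which all the matrices involved are diagonal, which is why interior-point methods for linear optimization extended so naturally to this setting.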
Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the author's earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first. The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.
This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, and (control of) epidemics. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs-per-stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (Linear system/Quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
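To make the point about the LQ model concrete, its standard discrete-time formulation (a generic textbook form, not quoted from this volume) is

$$x_{k+1} = A x_k + B u_k + w_k, \qquad c(x, u) = x^{\top} Q x + u^{\top} R u, \qquad x \in \mathbb{R}^n, \; u \in \mathbb{R}^m,$$

so the state space $\mathbb{R}^n$ is uncountable, the quadratic cost per stage is unbounded, and the control set $\mathbb{R}^m$ is noncompact, violating (a), (b), and (c) above.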
This book is about discrete-time, time-homogeneous Markov chains (MCs) and their ergodic behavior. To this end, most of the material is in fact about stable MCs, by which we mean MCs that admit an invariant probability measure. To state this more precisely and give an overview of the questions we shall be dealing with, we first introduce some notation and terminology. Let $(X, \mathcal{B})$ be a measurable space, and consider an $X$-valued Markov chain $\xi_\bullet = \{\xi_k,\; k = 0, 1, \dots\}$ with transition probability function (t.p.f.) $P(x, B)$, i.e., $P(x, B) := \operatorname{Prob}(\xi_{k+1} \in B \mid \xi_k = x)$ for each $x \in X$, $B \in \mathcal{B}$, and $k = 0, 1, \dots$. The MC $\xi_\bullet$ is said to be stable if there exists a probability measure (p.m.) $\mu$ on $\mathcal{B}$ such that $(\ast)\;\; \mu(B) = \int_X \mu(dx)\, P(x, B)$ for all $B \in \mathcal{B}$. If $(\ast)$ holds, then $\mu$ is called an invariant p.m. for the MC $\xi_\bullet$ (or for the t.p.f. $P$).
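As a minimal sketch of the invariance equation $(\ast)$: in the finite-state case the integral reduces to the matrix equation $\mu P = \mu$, and an invariant p.m. can be approximated by power iteration. The two-state transition matrix below is a made-up illustration, not an example taken from the book.

import numpy as np

# Made-up 2-state transition matrix; each row P[x, :] is the
# distribution of the next state given the current state x.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Power iteration: push an arbitrary initial distribution through
# the chain; for this ergodic chain it converges to the invariant p.m.
mu = np.array([1.0, 0.0])
for _ in range(1000):
    mu = mu @ P

print(mu)                       # ~ [0.8333, 0.1667]
print(np.allclose(mu @ P, mu))  # True: mu satisfies mu P = mu, i.e., (*)

Here stability in the authors' sense holds because such a $\mu$ exists; for chains on general Borel state spaces an invariant p.m. need not exist, which is what makes the stability question nontrivial.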