Networked control systems are becoming increasingly ubiquitous,
with applications ranging from vehicular communication and
adaptive power grids to space exploration and economics. The
optimal design of such systems presents major challenges,
requiring tools from various disciplines within applied
mathematics such as decentralized control, stochastic control,
information theory, and quantization.
A thorough, self-contained book, "Stochastic Networked Control
Systems: Stabilization and Optimization under Information
Constraints" aims to connect these diverse disciplines with
precision and rigor, while conveying design guidelines to
controller architects. Unique in the literature, it lays a
comprehensive theoretical foundation for the study of networked
control systems and introduces an array of concrete tools for work
in the field. Salient features include:
• Characterization, comparison and optimal design of information
structures in static and dynamic teams. Operational, structural and
topological properties of information structures in optimal
decision making, with a systematic program for generating optimal
encoding and control policies. The notion of signaling, and its
utilization in stabilization and optimization of decentralized
control systems.
• Presentation of mathematical methods for stochastic stability
of networked control systems using random-time, state-dependent
drift conditions and martingale methods.
• Characterization and study of information channels leading to
various forms of stochastic stability such as stationarity,
ergodicity, and quadratic stability; and connections with
information and quantization theories. Analysis of various classes
of centralized and decentralized control systems.
• Jointly optimal design of encoding and control policies over
various information channels and under general optimization
criteria, including detailed coverage of
linear-quadratic-Gaussian models.
• Decentralized agreement and dynamic optimization under
information constraints.
This monograph is geared toward a broad audience of academic and
industrial researchers interested in control theory, information
theory, optimization, economics, and applied mathematics. It could
likewise serve as a supplemental graduate text. The reader is
expected to have some familiarity with linear systems, stochastic
processes, and Markov chains, but the necessary background can also
be acquired in part through the four appendices included at the
end.