This text presents a fully decentralized method for dynamic network energy
management based on message passing between devices. It considers a
network of devices, such as generators, fixed loads, deferrable
loads, and storage devices, each with its own dynamic constraints
and objective, connected by AC and DC lines. The problem is to
minimize the total network objective subject to the device and line
constraints, over a given time horizon. This is a large
optimization problem, with variables for consumption or generation
for each device, power flow for each line, and voltage phase angles
at AC buses, in each time period. This text develops a
decentralized method, called proximal message passing, for solving
this problem. The method is iterative: at each step, each device
exchanges simple messages with its neighbors in the network and
then solves its own optimization problem, minimizing its own
objective function, augmented by a term determined by the messages
it has received. It is shown that this message passing method
converges to a solution when the device objectives and constraints
are convex. The method is completely decentralized and needs no
global coordination other than synchronizing iterations; the
problems to be solved by each device can typically be solved
extremely efficiently and in parallel. The method is fast enough
that even a serial implementation can solve substantial problems in
reasonable time frames. Results for several numerical experiments
are reported, demonstrating the method's speed and scaling,
including the solution of a problem instance with over ten million
variables in under fifty minutes using a serial implementation; with
decentralized computing, the solve time would be less than one
second.
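
To make the iteration concrete, here is a minimal, hedged Python
sketch of an exchange-ADMM-style proximal message passing step on a
toy network: one generator with an assumed quadratic cost and one
fixed load sharing a single net. All names, costs, and parameter
values here are illustrative assumptions, not the text's notation;
the actual method handles general device objectives, AC and DC lines,
and multi-period schedules.

```python
import numpy as np

# Toy network: one generator (terminal 0) and one fixed load (terminal 1)
# on a single net. Positive power means consumption, so generation is
# negative. Net balance requires the terminal powers to sum to zero.
rho = 1.0                   # proximal penalty weight (assumed)
alpha, beta = 0.1, 1.0      # assumed generator cost: alpha*g**2 + beta*g
load = 10.0                 # the fixed load consumes 10 units
p = np.array([0.0, load])   # terminal power schedules
u = 0.0                     # scaled dual variable: a price-like message

for k in range(100):
    p_bar = p.mean()        # net imbalance message, broadcast to terminals
    # Each device minimizes f_d(p_d) + (rho/2)*(p_d - (p_d - p_bar - u))**2.
    # Generator: closed-form prox of its quadratic cost, clipped to its
    # assumed generation limits [-50, 0].
    v = p[0] - p_bar - u
    p[0] = np.clip((rho * v - beta) / (2 * alpha + rho), -50.0, 0.0)
    p[1] = load             # fixed load: its prox pins power at the load
    u += p.mean()           # dual (message) update with fresh imbalance

print(f"generator power ~ {p[0]:.3f}  (expect {-load})")
```

At convergence the imbalance message p_bar goes to zero and the dual
variable u plays the role of a price on the net, which is what lets
each device optimize independently between message exchanges.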
Many problems of recent interest in statistics and machine learning
can be posed in the framework of convex optimization. Due to the
explosion in size and complexity of modern datasets, it is
increasingly important to be able to solve problems with a very
large number of features or training examples. As a result, both
the decentralized collection or storage of these datasets and the
accompanying distributed solution methods are either necessary
or at least highly desirable. Distributed Optimization and
Statistical Learning via the Alternating Direction Method of
Multipliers argues that the alternating direction method of
multipliers is well suited to distributed convex optimization, and
in particular to large-scale problems arising in statistics,
machine learning, and related areas. The method was developed in
the 1970s, with roots in the 1950s, and is equivalent or closely
related to many other algorithms, such as dual decomposition, the
method of multipliers, Douglas-Rachford splitting, Spingarn's
method of partial inverses, Dykstra's alternating projections,
Bregman iterative algorithms for ℓ1 problems, proximal methods, and
others. After briefly surveying the theory and history of the
algorithm, it discusses applications to a wide variety of
statistical and machine learning problems of recent interest,
including the lasso, sparse logistic regression, basis pursuit,
covariance selection, support vector machines, and many others. It
also discusses general distributed optimization, extensions to the
nonconvex setting, and efficient implementation, including some
details on distributed MPI and Hadoop MapReduce implementations.
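
As a concrete instance, here is a short, hedged Python sketch of ADMM
applied to the lasso, one of the problems listed above: minimize
(1/2)||Ax - b||^2 + lam*||x||_1 with the usual x = z splitting. The
synthetic data and the choices of rho and lam are assumptions for the
demo, not values from the text.

```python
import numpy as np

# Lasso via ADMM: minimize 0.5*||A x - b||_2^2 + lam*||z||_1  s.t.  x = z.
rng = np.random.default_rng(0)
m, n = 50, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:5] = rng.standard_normal(5)        # sparse ground truth (assumed)
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam, rho = 0.1, 1.0                        # illustrative parameter choices
x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
# The x-update solves a ridge-like linear system; factor once, reuse forever.
L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
Atb = A.T @ b

def soft_threshold(v, kappa):
    # Prox of kappa*||.||_1: shrink each entry toward zero by kappa.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

for k in range(200):
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
    z = soft_threshold(x + u, lam / rho)   # z-update: elementwise shrinkage
    u = u + x - z                          # scaled dual update

print("nonzeros recovered:", int(np.count_nonzero(np.abs(z) > 1e-4)))
```

Caching the Cholesky factor is what makes repeated x-updates cheap; in
the distributed variants the text describes, the same three-step
pattern runs per data block, with the z-update acting as the global
consensus step.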