Sparse estimation methods aim to obtain or exploit parsimonious
representations of data or models. Originally developed for linear
variable selection, they have since been extended to settings such
as structured sparsity and kernel selection. It
turns out that many of the related estimation problems can be cast
as convex optimization problems by regularizing the empirical risk
with appropriate nonsmooth norms. Optimization with
Sparsity-Inducing Penalties presents optimization tools and
techniques dedicated to such sparsity-inducing penalties from a
general perspective. It covers proximal methods, block-coordinate
descent, reweighted ℓ2-penalized techniques, working-set and
homotopy methods, as well as non-convex formulations and
extensions, and provides an extensive set of experiments to compare
various algorithms from a computational point of view. The
presentation of Optimization with Sparsity-Inducing Penalties is
essentially based on existing literature, but the process of
constructing a general framework leads naturally to new results,
connections and points of view. It is an ideal reference on the
topic for anyone working in machine learning and related areas.
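As a concrete illustration of this regularized-risk viewpoint, the
sketch below applies proximal gradient descent (ISTA), one of the
proximal methods the monograph covers, to ℓ1-regularized least
squares. The code is written for this summary and is not taken from
the book; the names ista and soft_threshold, the parameter lam, and
the synthetic data are illustrative choices.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1, applied elementwise.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(X, y, lam, n_iter=500):
        # Proximal gradient (ISTA) for
        #   min_w (1/2n) ||Xw - y||^2 + lam * ||w||_1.
        n, d = X.shape
        w = np.zeros(d)
        # Step size 1/L, with L the Lipschitz constant of the smooth part.
        L = np.linalg.norm(X, 2) ** 2 / n
        for _ in range(n_iter):
            grad = X.T @ (X @ w - y) / n               # smooth-loss gradient
            w = soft_threshold(w - grad / L, lam / L)  # proximal step
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 50))
        w_true = np.zeros(50)
        w_true[:5] = rng.standard_normal(5)            # sparse ground truth
        y = X @ w_true + 0.01 * rng.standard_normal(200)
        w_hat = ista(X, y, lam=0.1)
        print("nonzero coefficients:", int(np.sum(np.abs(w_hat) > 1e-6)))

The same proximal template extends to the structured penalties
mentioned above by replacing the soft-thresholding step with the
proximal operator of the chosen norm.
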
A second book provides an up-to-date account of the interplay
between optimization and machine learning that is accessible to
students and researchers in both communities. The interplay between
optimization and machine
learning is one of the most important developments in modern
computational science. Optimization formulations and methods are
proving to be vital in designing algorithms to extract essential
knowledge from huge volumes of data. Machine learning, however, is
not simply a consumer of optimization technology but a rapidly
evolving field that is itself generating new optimization ideas.
This book captures the state of the art of the interaction between
optimization and machine learning in a way that is accessible to
researchers in both fields. Optimization approaches have enjoyed
prominence in machine learning because of their wide applicability
and attractive theoretical properties. The increasing complexity,
size, and variety of today's machine learning models call for the
reassessment of existing assumptions. This book starts the process
of reassessment. It describes the resurgence in novel contexts of
established frameworks such as first-order methods, stochastic
approximations, convex relaxations, interior-point methods, and
proximal methods. It also devotes attention to newer themes such as
regularized optimization, robust optimization, gradient and
subgradient methods, splitting techniques, and second-order
methods. Many of these techniques draw inspiration from other
fields, including operations research, theoretical computer
science, and subfields of optimization. The book will enrich the
ongoing cross-fertilization between the machine learning community
and these other fields, and within the broader optimization
community.
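As a small counterpart to the first-order and subgradient methods
surveyed in this second volume, the sketch below runs stochastic
subgradient descent on an ℓ2-regularized hinge loss (a linear SVM
objective). It is an illustration assembled for this summary, not
material from the book; the name sgd_svm, the decreasing step size
1/(lam*t), and the synthetic data are assumptions of the example.

    import numpy as np

    def sgd_svm(X, y, lam=0.01, n_epochs=20, seed=0):
        # Stochastic subgradient descent for
        #   min_w (lam/2) ||w||^2 + (1/n) sum_i max(0, 1 - y_i <w, x_i>).
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        t = 0
        for _ in range(n_epochs):
            for i in rng.permutation(n):
                t += 1
                eta = 1.0 / (lam * t)      # decreasing step size
                grad = lam * w             # gradient of the regularizer
                if y[i] * (X[i] @ w) < 1:  # hinge loss active: add its subgradient
                    grad -= y[i] * X[i]
                w -= eta * grad
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.standard_normal((500, 20))
        w_true = rng.standard_normal(20)
        y = np.sign(X @ w_true)
        w_hat = sgd_svm(X, y)
        print("training accuracy:",
              float(np.mean(np.sign(X @ w_hat) == y)))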