In recent years, a large amount of multi-disciplinary research has
been conducted on sparse models and their applications. In
statistics and machine learning, the sparsity principle is used to
perform model selection, that is, automatically selecting a simple
model from a large collection of candidates. In signal processing,
sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the
corresponding tools have been widely adopted by several scientific
communities such as neuroscience, bioinformatics, or computer
vision. Sparse Modeling for Image and Vision Processing provides
the reader with a self-contained view of sparse modeling for visual
recognition and image processing. More specifically, the work
focuses on applications where the dictionary is learned and adapted
to data, yielding a compact representation that has been successful
in various contexts. It reviews a large number of applications of
dictionary learning in image processing and computer vision and
presents basic sparse estimation tools. It starts with a historical
tour of sparse estimation in signal processing and statistics,
before moving to more recent concepts such as sparse recovery and
dictionary learning. Subsequently, it shows that dictionary
learning is related to matrix factorization techniques, and that it
is particularly effective for modeling natural image patches. As a
consequence, it has been used for tackling several image processing
problems and is a key component of many state-of-the-art methods in
visual recognition. Sparse Modeling for Image and Vision Processing
concludes with a presentation of optimization techniques that
should make dictionary learning easy to use for researchers who
are not experts in the field.
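To make the sparse coding and dictionary learning ideas above concrete, here is a minimal sketch using scikit-learn on synthetic patch data; the patch size, number of atoms, and sparsity level are illustrative assumptions, not values taken from the monograph.

```python
# Minimal dictionary-learning sketch on synthetic "image patch" vectors.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(0)
patches = rng.randn(1000, 64)                      # stand-in for 8x8 patches, flattened
patches -= patches.mean(axis=1, keepdims=True)     # remove each patch's mean (DC component)

# Learn 100 dictionary atoms; each patch is then coded with at most 5 of them.
learner = MiniBatchDictionaryLearning(
    n_components=100,
    transform_algorithm="omp",                     # orthogonal matching pursuit for the codes
    transform_n_nonzero_coefs=5,
    random_state=0,
)
codes = learner.fit_transform(patches)             # sparse codes, shape (1000, 100)
dictionary = learner.components_                   # learned atoms, shape (100, 64)

# Each patch is approximated as a linear combination of a few atoms.
reconstruction = codes @ dictionary
print("mean nonzeros per patch:", np.count_nonzero(codes, axis=1).mean())
print("relative error:", np.linalg.norm(patches - reconstruction) / np.linalg.norm(patches))
```

On real data the patches would be extracted from natural images (for example with sklearn.feature_extraction.image.extract_patches_2d), which is where the learned atoms become visually interpretable.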
Submodular functions are relevant to machine learning for at least
two reasons: (1) some problems may be expressed directly as the
optimization of submodular functions, and (2) the Lovasz extension
of submodular functions provides a useful set of regularization
functions for supervised and unsupervised learning. In Learning
with Submodular Functions, the theory of submodular functions is
presented in a self-contained way from a convex analysis
perspective, presenting tight links between certain polyhedra,
combinatorial optimization and convex optimization problems. In
particular, it describes how submodular function minimization is
equivalent to solving a wide variety of convex optimization
problems. This allows the derivation of new efficient algorithms
for approximate and exact submodular function minimization with
theoretical guarantees and good practical performance. By listing
many examples of submodular functions, it reviews various
applications to machine learning, such as clustering, experimental
design, sensor placement, graphical model structure learning, and
subset selection, as well as a family of structured
sparsity-inducing norms that can be derived from submodular
functions and used as regularizers. This is an ideal reference for researchers,
scientists, or engineers with an interest in applying submodular
functions to machine learning problems.
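As a small illustration of the Lovász extension mentioned above, the sketch below evaluates it for a set function supplied as a Python callable, using the standard greedy (sorting) formula; the cardinality-based example function is a hypothetical choice used only to exercise the code.

```python
# Sketch: evaluating the Lovász extension of a set function F at a point w.
# When F is submodular (with F(empty set) = 0), this extension is convex and
# can serve as a structured regularizer.
import numpy as np

def lovasz_extension(F, w):
    """Sort the coordinates of w in decreasing order and pay the marginal
    gain of F for each element as it enters the growing set."""
    w = np.asarray(w, dtype=float)
    order = np.argsort(-w)              # indices j_1, ..., j_n with w[j_1] >= ... >= w[j_n]
    value, prev, selected = 0.0, 0.0, []
    for j in order:
        selected.append(int(j))
        curr = F(frozenset(selected))
        value += w[j] * (curr - prev)   # marginal gain of adding element j
        prev = curr
    return value

# Hypothetical submodular function: a concave function of the set's cardinality.
F_card = lambda S: np.sqrt(len(S))

print(lovasz_extension(F_card, [0.5, -0.2, 0.9, 0.0]))
```

For this cardinality-based F the extension reduces to a sorted weighted sum, but the same routine applies to other submodular functions such as graph cuts or set covers.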
Sparse estimation methods are aimed at using or obtaining
parsimonious representations of data or models. They were first
dedicated to linear variable selection, but numerous extensions have
since emerged, such as structured sparsity and kernel selection. It
turns out that many of the related estimation problems can be cast
as convex optimization problems by regularizing the empirical risk
with appropriate nonsmooth norms. Optimization with
Sparsity-Inducing Penalties presents optimization tools and
techniques dedicated to such sparsity-inducing penalties from a
general perspective. It covers proximal methods, block-coordinate
descent, reweighted l2-penalized techniques, working-set and
homotopy methods, as well as non-convex formulations and
extensions, and provides an extensive set of experiments to compare
various algorithms from a computational point of view. The
presentation of Optimization with Sparsity-Inducing Penalties is
essentially based on existing literature, but the process of
constructing a general framework leads naturally to new results,
connections and points of view. It is an ideal reference on the
topic for anyone working in machine learning and related areas.
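As a pointer to the kind of proximal method covered there, the following is a minimal sketch of ISTA (proximal gradient descent) for l1-regularized least squares; the step size rule, regularization weight, and synthetic data are illustrative assumptions.

```python
# Sketch: ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied coordinate-wise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, with L the gradient's Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                  # gradient of the smooth data-fit term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step on the l1 penalty
    return x

# Synthetic sparse-recovery instance (illustrative sizes).
rng = np.random.RandomState(0)
A = rng.randn(50, 200)
x_true = np.zeros(200)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.randn(50)

x_hat = ista(A, b, lam=0.1)
print("nonzeros recovered:", int(np.count_nonzero(np.abs(x_hat) > 1e-3)))
```

The same proximal template extends to group and other structured norms by swapping in the corresponding proximal operator.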
An up-to-date account of the interplay between optimization and
machine learning, accessible to students and researchers in both
communities. The interplay between optimization and machine
learning is one of the most important developments in modern
computational science. Optimization formulations and methods are
proving to be vital in designing algorithms to extract essential
knowledge from huge volumes of data. Machine learning, however, is
not simply a consumer of optimization technology but a rapidly
evolving field that is itself generating new optimization ideas.
This book captures the state of the art of the interaction between
optimization and machine learning in a way that is accessible to
researchers in both fields. Optimization approaches have enjoyed
prominence in machine learning because of their wide applicability
and attractive theoretical properties. The increasing complexity,
size, and variety of today's machine learning models call for the
reassessment of existing assumptions. This book starts the process
of reassessment. It describes the resurgence in novel contexts of
established frameworks such as first-order methods, stochastic
approximations, convex relaxations, interior-point methods, and
proximal methods. It also devotes attention to newer themes such as
regularized optimization, robust optimization, gradient and
subgradient methods, splitting techniques, and second-order
methods. Many of these techniques draw inspiration from other
fields, including operations research, theoretical computer
science, and subfields of optimization. The book will enrich the
ongoing cross-fertilization between the machine learning community
and these other fields, and within the broader optimization
community.
You may like...
Loot
Nadine Gordimer
Paperback
(2)
R205
R168
Discovery Miles 1 680