Microelectromechanical system (MEMS) inertial sensors have become
ubiquitous in modern society. Built into mobile telephones, gaming
consoles, and virtual reality headsets, such sensors are used on a
daily basis. They also have applications in medical therapy devices,
motion-capture filming, traffic monitoring systems, and drones.
While these sensors provide accurate measurements over short time
scales, their accuracy diminishes over longer periods. To date, this
problem has been addressed by combining them with additional sensors
and models, which adds both expense and size to the devices. This
tutorial focuses on the signal processing aspects of position and
orientation estimation using inertial sensors. It discusses different
modelling choices and a selection of important algorithms that
engineers can use to choose the best options for their designs. The
algorithms include optimization-based smoothing and filtering as
well as computationally cheaper extended Kalman filter and
complementary filter implementations. Engineers, researchers, and
students deploying MEMS inertial sensors will find that this
tutorial is an essential monograph on how to optimize their
designs.
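To give a feel for the kind of computationally cheap fusion the blurb mentions, the following is a minimal sketch of a one-axis complementary filter combining gyroscope and accelerometer readings. It is an illustration only, not the book's implementation; the sampling period dt, mixing coefficient alpha, and the sample data are assumed values.

```python
import math

def complementary_filter(gyro_rates, accel_xz, dt=0.01, alpha=0.98):
    """Minimal one-axis complementary filter (illustrative sketch only).

    gyro_rates: angular rate about one axis [rad/s], sampled every dt seconds.
    accel_xz:   (ax, az) accelerometer samples used to infer the tilt angle.
    alpha:      how strongly to trust the integrated gyroscope (assumed value).
    """
    angle = 0.0
    history = []
    for rate, (ax, az) in zip(gyro_rates, accel_xz):
        gyro_angle = angle + rate * dt       # short-term: integrate the gyro
        accel_angle = math.atan2(ax, az)     # long-term: direction of gravity
        angle = alpha * gyro_angle + (1 - alpha) * accel_angle
        history.append(angle)
    return history

# Hypothetical data: a stationary sensor tilted by about 0.1 rad,
# with a small constant gyroscope bias.
gyro = [0.005] * 200                 # rad/s, pure bias (true rate is zero)
accel = [(0.0998, 0.995)] * 200      # roughly (sin 0.1, cos 0.1) in units of g
print(complementary_filter(gyro, accel)[-1])   # settles close to 0.1 rad
```

The accelerometer bounds the long-term drift that pure gyroscope integration would accumulate, which is the short-versus-long time scale trade-off described above.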
The human visual system has evolved to have the ability to
selectively focus on the most relevant parts of a visual scene.
This mechanism, referred to as visual attention, has been the focus
of several neurological and psychological studies in the past few
decades. These studies have inspired several computational visual
attention models which have been successfully applied to problems
in computer vision and robotics. Computational Visual Attention
Models provides a comprehensive survey of the state-of-the-art in
computational visual attention modelling with a special focus on
the latest trends. By reviewing several models published since
2012, the theoretical advantages and disadvantages of each approach
are discussed. In addition, existing methodologies for evaluating
computational models using eye-tracking data are described, along
with the visual attention performance metrics used. The shortcomings
of existing approaches, and ways to overcome them, are also covered.
Finally, a subjective evaluation for
benchmarking existing visual attention metrics is presented and
open problems in visual attention are highlighted. This monograph
provides the reader with an in-depth survey of the research
conducted to date in computational visual attention models and
provides the basis for further research in this exciting area.
Audio Content Security: Attack Analysis on Audio Watermarking
describes research using a common audio watermarking method for
four different genres of music, also providing the results of many
test attacks to determine the robustness of the watermarking in the
face of those attacks. The results of this study can be used in
further studies and to establish the need for a particular audio
watermarking approach for each particular group of songs, each with
different characteristics. An additional aspect of this study tests
and analyzes two parameters, the audio host file and the watermark,
using a specific evaluation method (PSNR) for audio watermarking.
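Since PSNR is the evaluation metric named above, a small sketch of how PSNR is typically computed between a host signal and its watermarked version may be helpful. The peak convention and the sample signals are assumptions for illustration, not the study's exact setup.

```python
import numpy as np

def psnr(original, watermarked, peak=None):
    """Peak signal-to-noise ratio in dB between two equal-length signals.

    peak: maximum possible signal amplitude; if omitted, the peak of the
    original signal is used (an assumed convention, not the study's).
    """
    original = np.asarray(original, dtype=float)
    watermarked = np.asarray(watermarked, dtype=float)
    mse = np.mean((original - watermarked) ** 2)
    if mse == 0:
        return float("inf")
    if peak is None:
        peak = np.max(np.abs(original))
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical example: a sine-wave "host" and a lightly perturbed copy
# standing in for the watermarked file.
t = np.linspace(0, 1, 8000)
host = 0.8 * np.sin(2 * np.pi * 440 * t)
marked = host + 0.001 * np.random.randn(t.size)   # stand-in for embedding noise
print(f"PSNR = {psnr(host, marked):.1f} dB")
```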
Sensors are becoming increasingly omnipresent throughout society.
These sensors generate a billion gigabytes of data every day. With
the availability of immense computing power at central locations,
the local storage and transmission of the data to a central
location becomes the bottleneck in the real-time processing of the
mass of data. Recently, compressed sensing has emerged as a
technique to alleviate these problems, but much of the data is
blindly discarded without being examined in order to achieve
acceptable throughput rates. Sparse Sensing for Statistical Inference
introduces and reviews a new technique called Sparse Sensing that
reduces the amount of data that must be collected to start with,
providing an efficient and cost-effective method for data collection.
This monograph provides the reader with a comprehensive overview of
this technique and a framework that can be used by researchers and
engineers in implementing the technique in practical sensing
systems.
As a major breakthrough in artificial intelligence, deep learning
has achieved impressive success on solving grand challenges in many
fields including speech recognition, natural language processing,
computer vision, image and video processing, and multimedia. This
monograph provides a historical overview of deep learning and
focuses on its applications in object recognition, detection, and
segmentation, which are key challenges of computer vision and have
numerous applications to images and videos. Specifically, the topics
covered under object recognition include image classification on
ImageNet, face recognition, and video classification. In detection,
the monograph covers general object detection on ImageNet,
pedestrian detection, face landmark detection (face alignment), and
human landmark detection (pose estimation). Finally, within
segmentation, it covers the most recent progress on scene labeling,
semantic segmentation, face parsing, human parsing, and saliency
detection. Concrete examples of these applications explain the key
points that make deep learning outperform conventional computer
vision systems. Deep Learning in Object Recognition, Detection, and
Segmentation provides a comprehensive introductory overview of a
topic that is having major impact on many areas of research in
signal processing, computer vision, and machine learning. This is a
must-read for students and researchers new to these fields.
Video Coding is the second part of the two-part monograph
Fundamentals of Source and Video Coding by Wiegand and Schwarz.
This part describes the application of the techniques described in
the first part to video coding. In doing so it provides a
description of the fundamental concepts of video coding and, in
particular, the signal processing in video encoders and decoders.
Despite the different nature of financial engineering and
electrical engineering, both areas are intimately connected on a
mathematical level. The foundations of financial engineering lie in
the statistical analysis of numerical time series and the modeling
of the behavior of the financial markets in order to perform
predictions and systematically optimize investment strategies.
Similarly, the foundations of electrical engineering, for instance,
wireless communication systems, lie in statistical signal
processing and the modeling of communication channels in order to
perform predictions and systematically optimize transmission
strategies. Both foundations are the same in disguise. It is often
the case in science that the same or very similar methodologies are
developed and applied independently in different areas. A Signal
Processing Perspective of Financial Engineering is about investment
in financial assets treated as a signal processing and optimization
problem. It explores such connections and capitalizes on the
existing mathematical tools developed in wireless communications
and signal processing to solve real-life problems arising in the
financial markets in an unprecedented way. It provides
straightforward and systematic access to financial engineering for
researchers in signal processing and communications so that they
can understand problems in financial engineering more easily and
may even apply signal processing techniques to handle some
financial problems.
Covariance matrices have found applications in many diverse areas.
These include beamforming in array processing, portfolio analysis
in finance, classification of data, and the handling of
high-frequency data. Structured Robust Covariance Estimation
considers the estimation of covariance matrices in non-standard
conditions including heavy-tailed distributions and outlier
contamination. Prior knowledge on the structure of these matrices
is exploited in order to improve the estimation accuracy. The
distributions, structures and algorithms are all based on an
extension of convex optimization to manifolds. It also provides a
self-contained introduction and survey of the theory known as
geodesic convexity. This is a generalized form of convexity
associated with positive definite matrix variables. The fundamental
g-convex sets and functions are detailed, along with the operations
that preserve them, and their application to covariance estimation.
This monograph will be of interest to researchers and students
working in signal processing, statistics and optimization.
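As a deliberately simplified illustration of robust covariance estimation under heavy-tailed or contaminated data, the sketch below implements Tyler's M-estimator via its standard fixed-point iteration. This is a generic, well-known estimator used only for illustration; it is not the structured, geodesically convex methodology developed in the monograph, and the data are hypothetical.

```python
import numpy as np

def tyler_estimator(X, n_iter=100, tol=1e-8):
    """Tyler's M-estimator of scatter (shape) for heavy-tailed data.

    X: (n, p) array of n centered samples in dimension p.
    Returns a positive definite shape matrix normalized to trace p.
    """
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        # Quadratic forms x_i^T Sigma^{-1} x_i for every sample
        d = np.einsum("ij,jk,ik->i", X, inv, X)
        new = (p / n) * (X / d[:, None]).T @ X   # weighted sample covariance
        new *= p / np.trace(new)                 # fix the scale ambiguity
        if np.linalg.norm(new - sigma, "fro") < tol:
            return new
        sigma = new
    return sigma

# Hypothetical data: Gaussian samples contaminated with a few gross outliers.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=500)
X[:10] *= 50          # inject outliers that would wreck the sample covariance
print(tyler_estimator(X))
```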
This book provides a rigorous treatment of deterministic and random
signals. It offers detailed information on topics including random
signals, system modelling and system analysis. System analysis in
frequency domain using Fourier transform and Laplace transform is
explained with theory and numerical problems. The advanced
techniques used for signal processing, especially for speech and
image processing, are discussed. The properties of continuous time
and discrete time signals are explained with a number of numerical
problems. The physical significance of different properties is
explained using real-life examples. To aid understanding, concept
check questions, review questions, a summary of important concepts,
and frequently asked questions are included. MATLAB programs, with
output plots and simulation examples, are provided for each
concept. Students can execute these simulations and verify the
outputs.
In recent years, a large amount of multi-disciplinary research has
been conducted on sparse models and their applications. In
statistics and machine learning, the sparsity principle is used to
perform model selection, that is, automatically selecting a simple
model among a large collection of them. In signal processing,
sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the
corresponding tools have been widely adopted by several scientific
communities such as neuroscience, bioinformatics, or computer
vision. Sparse Modeling for Image and Vision Processing provides
the reader with a self-contained view of sparse modeling for visual
recognition and image processing. More specifically, the work
focuses on applications where the dictionary is learned and adapted
to data, yielding a compact representation that has been successful
in various contexts. It reviews a large number of applications of
dictionary learning in image processing and computer vision and
presents basic sparse estimation tools. It starts with a historical
tour of sparse estimation in signal processing and statistics,
before moving to more recent concepts such as sparse recovery and
dictionary learning. Subsequently, it shows that dictionary
learning is related to matrix factorization techniques, and that it
is particularly effective for modeling natural image patches. As a
consequence, it has been used for tackling several image processing
problems and is a key component of many state-of-the-art methods in
visual recognition. Sparse Modeling for Image and Vision Processing
concludes with a presentation of optimization techniques that
should make dictionary learning easy to use for researchers who
are not experts in the field.
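To make "representing data with linear combinations of a few dictionary elements" concrete, here is a minimal sketch of dictionary learning and sparse coding using scikit-learn. The patch data, number of atoms, and sparsity level are illustrative assumptions, not recommendations from the monograph.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Hypothetical data: 500 flattened 8x8 "patches". In practice these would be
# patches extracted from natural images, usually mean-subtracted.
rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 64))
patches -= patches.mean(axis=1, keepdims=True)

# Learn a dictionary of 32 atoms; code each patch with at most 5 atoms (OMP).
dico = DictionaryLearning(
    n_components=32,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=5,
    max_iter=20,
    random_state=0,
)
codes = dico.fit_transform(patches)    # sparse codes, shape (500, 32)
D = dico.components_                   # dictionary atoms, shape (32, 64)

reconstruction = codes @ D             # each patch = few atoms, linearly combined
print("non-zeros per patch:", np.count_nonzero(codes, axis=1).mean())
print("relative error:",
      np.linalg.norm(patches - reconstruction) / np.linalg.norm(patches))
```

The dictionary-times-codes product also makes the connection to matrix factorization mentioned above explicit: the data matrix is approximated as the product of a dictionary matrix and a sparse coefficient matrix.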
The modern world of ubiquitous communication devices has fueled
recent research into technical solutions that address energy
consumption concerns raised by various stakeholders. These include:
the serious concerns about sustainable growth posed by the
exponential increase in connected devices experienced by wireless
communications; the environmental concerns caused by the rapid
expansion of wireless networks; and the economic concerns that drive
the development of novel energy-efficient ICT. This monograph focuses
on energy-efficient wireless network design, including resource
allocation, scheduling, precoding, relaying, and decoding. Starting
from simple point-to-point (P2P) systems and then gradually moving
towards more complex interference networks, the energy efficiency
is defined and its properties characterized. The authors show how
the energy efficiency is naturally defined by fractional functions,
thus establishing that a key role in the modeling, analysis, and
optimization of energy efficiency is played by fractional
programming, a branch of optimization theory specifically concerned
with the properties and optimization of fractional functions. The
monograph introduces fractional programming theory, and illustrates
how it can be used to formulate and handle energy efficiency
optimization problems. It provides a comprehensive introduction to
the theoretical and practical aspects of these problems and
describes the solutions offered with this technique. It will be of
use to all researchers and engineers working on modern
communication systems.
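As a rough illustration of how fractional programming enters such problems, the sketch below applies Dinkelbach's classical algorithm to maximize the ratio of achievable rate to consumed power for a single P2P link. The channel gain, noise power, circuit power, and power budget are hypothetical numbers, and the closed-form inner solution holds only for this simple concave-over-linear toy case; the monograph treats far more general settings.

```python
import math

# Hypothetical link parameters
G = 2.0      # channel gain
N0B = 1.0    # noise power over the bandwidth
PC = 0.5     # static circuit power [W]
PMAX = 4.0   # transmit power budget [W]

def rate(p):
    return math.log2(1.0 + G * p / N0B)   # spectral efficiency [bit/s/Hz]

def dinkelbach(tol=1e-9, max_iter=50):
    """Maximize rate(p) / (p + PC) over 0 <= p <= PMAX via Dinkelbach's method."""
    lam = 0.0                               # current energy-efficiency estimate
    p = PMAX
    for _ in range(max_iter):
        # Inner problem: max_p rate(p) - lam * (p + PC).
        # Setting the derivative to zero gives a water-filling-like point.
        if lam > 0:
            p = 1.0 / (lam * math.log(2.0)) - N0B / G
        else:
            p = PMAX
        p = min(max(p, 0.0), PMAX)
        new_lam = rate(p) / (p + PC)        # update the ratio estimate
        if abs(new_lam - lam) < tol:
            return p, new_lam
        lam = new_lam
    return p, lam

p_star, ee_star = dinkelbach()
print(f"optimal power {p_star:.3f} W, energy efficiency {ee_star:.3f} bit/s/Hz per W")
```

Each iteration replaces the fractional objective by a parameterized difference, which is the basic device of fractional programming referred to above.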
The proliferation of social media, such as real-time microblogging
and online reputation systems, facilitates real-time sensing of
social patterns and behavior. In the last decade, sensing and
decision making in social networks have witnessed significant
progress in the electrical engineering, computer science,
economics, finance, and sociology research communities. Research in
this area involves the interaction of dynamic random graphs,
socio-economic analysis, and statistical inference algorithms.
Interactive Sensing and Decision Making in Social Networks provides
a survey, tutorial development, and discussion of four highly
stylized examples of sensing and decision making in social
networks: social learning for interactive sensing; tracking the
degree distribution of social networks; sensing and information
diffusion; and coordination of decision making via game-theoretic
learning. Each of the four examples is motivated by practical
applications and comprises a literature survey together with
careful problem formulation and mathematical analysis. Despite
being highly stylized, these examples provide a rich variety of
models, algorithms and analysis tools that are readily accessible
to a signal processing, control/systems theory, and applied
mathematics audience.
This research monograph has the following main theme. Given an
interpolation function, which is intended to provide an estimate of
an unknown signal value, the reader can use the traditional approach
(a traditional interpolation function) to estimate the numerical
value of the signal. Alternatively, the reader can follow the
theoretical developments offered in the book and, on the basis of the
unified theory it describes, design three new interpolation functions
with improved approximation capabilities. That is, under the unified
theory, the book offers three new classes of interpolation functions
with improved capabilities for approximating the true but unknown
signal. These works were submitted for peer review and published in
2011, and are now presented to the public through this new
publication. There are likely to be three main types of readership
for this research monograph. The primary readership consists of
library users, for whom the book may serve as a reference. The
secondary readership consists of instructors and professors of
courses in applied mathematics, signal and image interpolation,
signal and image processing, biomedical imaging, or biomedical
engineering. In that case, the book can be used as an additional
educational resource for both undergraduate and graduate students,
for example when assigning homework or projects as part of the
coursework. The tertiary readership consists of apprentices and
enthusiasts of mathematics, who would use the book for intellectual
enrichment. They would study the methodology of the unified theory,
apply it to design new interpolation functions, and, should their
interest continue, proceed to the analysis of the results and
possibly to the discussion and dissemination of the knowledge gained
from this book.
If information theory and estimation theory are thought of as two
scientific languages, then their key vocabularies are information
measures and estimation measures, respectively. The basic
information measures are entropy, mutual information and relative
entropy. Among the most important estimation measures are mean
square error (MSE) and Fisher information. Playing a paramount role
in information theory and estimation theory, those measures are
akin to mass, force and velocity in classical mechanics, or energy,
entropy and temperature in thermodynamics. The Interplay Between
Information and Estimation Measures is intended as a handbook of
known formulas which directly relate information measures and
estimation measures. It provides intuition and draws connections
between these formulas, highlights some important applications, and
motivates further explorations. The main focus is on such formulas
in the context of the additive Gaussian noise model, with lesser
treatment of others such as the Poisson point process channel. Also
included are a number of new results which are published here for
the first time. Proofs of some basic results are provided, whereas
many more technical proofs already available in the literature are
omitted. In 2004, the authors of this monograph found a general
differential relationship commonly referred to as the I-MMSE
formula. In this book a new, complete proof for the I-MMSE formula
is developed, which includes some technical details that were omitted
in the original papers. The book concludes by highlighting the
impact of the information-estimation relationships on a variety of
information-theoretic problems of current interest, and provides
some further perspective on their applications.
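For reference, the I-MMSE relationship mentioned above can be stated as follows for the scalar additive Gaussian noise channel, with mutual information measured in nats (the vector-channel form is analogous):

```latex
% I-MMSE relation: for Y = sqrt(snr) X + N with N ~ N(0,1) independent of X,
% the mutual information and the minimum mean square error satisfy
\[
  \frac{\mathrm{d}}{\mathrm{d}\,\mathsf{snr}}\,
  I\!\left(X;\ \sqrt{\mathsf{snr}}\,X + N\right)
  \;=\; \tfrac{1}{2}\,\mathrm{mmse}(\mathsf{snr}),
  \qquad
  \mathrm{mmse}(\mathsf{snr})
  \;=\; \mathbb{E}\!\left[\bigl(X - \mathbb{E}[X \mid \sqrt{\mathsf{snr}}\,X + N]\bigr)^{2}\right].
\]
```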
Deep Learning provides an overview of general deep learning
methodology and its applications to a variety of signal and
information processing tasks. The application areas are chosen with
the following three criteria in mind: (1) expertise or knowledge of
the authors; (2) the application areas that have already been
transformed by the successful use of deep learning technology, such
as speech recognition and computer vision; and (3) the application
areas that have the potential to be impacted significantly by deep
learning and that have been benefitting from recent research
efforts, including natural language and text processing,
information retrieval, and multimodal information processing
empowered by multitask deep learning. This is a timely and
important book for researchers and students with an interest in
deep learning methodology and its applications in signal and
information processing.
ArchiMate(R), an Open Group Standard, is an open and independent
modelling language for Enterprise Architecture that is supported by
different tool vendors and consulting firms. ArchiMate provides
instruments to enable enterprise architects to describe, analyze,
and visualize the relationships among business domains in an
unambiguous way. This book provides the official specification of
ArchiMate 2.1 from The Open Group. ArchiMate 2.1 is a maintenance
update to ArchiMate 2.0, addressing comments raised since the
introduction of ArchiMate 2.0 in 2012. The ArchiMate 2.1 Standard
supports modelling throughout the TOGAF(R) Architecture Development
Method (ADM). The intended audience is threefold:
* Enterprise Architecture practitioners, such as architects (e.g. application, information, process, infrastructure, and, obviously, enterprise architects), senior and operational management, project leaders, and anyone committed to working within the reference framework defined by the Enterprise Architecture.
* Those who intend to implement ArchiMate in a software tool; they will find a complete and detailed description of the language in this book.
* The academic community, on which we rely for amending and improving the language, based on state-of-the-art research results in the enterprise architecture field.