Digital Signal Processing
(Paperback)
João Marques de Carvalho, Edmar Candeia Gurjão, Luciana Ribeiro Veloso
Microelectromechanical system (MEMS) inertial sensors have become ubiquitous in modern society. Built into mobile telephones, gaming consoles, and virtual reality headsets, such sensors are used on a daily basis. They also have applications in medical therapy devices, motion-capture filming, traffic monitoring systems, and drones. While these sensors provide accurate measurements over short time scales, their accuracy degrades over longer periods. To date, this problem has been resolved by combining them with additional sensors and models, which adds both expense and size to the devices. This tutorial focuses on the signal processing aspects of position and orientation estimation using inertial sensors. It discusses different modelling choices and a number of important algorithms that engineers can use to choose the best options for their designs. The algorithms include optimization-based smoothing and filtering as well as computationally cheaper extended Kalman filter and complementary filter implementations. Engineers, researchers, and students deploying MEMS inertial sensors will find this tutorial an essential monograph on how to optimize their designs.
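
For a flavor of the cheapest of these approaches, the sketch below implements a one-axis complementary filter that fuses gyroscope rates with accelerometer-derived angles. It is a minimal illustration of the general technique, not code from the tutorial; all function and parameter names are illustrative.

import numpy as np

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """One-axis complementary filter.
    gyro_rates   : angular rate about one axis in rad/s, shape (N,)
    accel_angles : angle inferred from the gravity direction in rad, shape (N,)
    alpha        : weight on the integrated-gyro (high-pass) branch
    """
    angle = accel_angles[0]                    # initialize from the accelerometer
    estimates = np.empty_like(accel_angles)
    for k in range(len(accel_angles)):
        # high-pass the integrated gyro, low-pass the accelerometer:
        # the gyro tracks fast motion, the accelerometer removes the drift
        angle = alpha * (angle + gyro_rates[k] * dt) + (1 - alpha) * accel_angles[k]
        estimates[k] = angle
    return estimates

The single gain alpha trades drift suppression against noise: the drift that accumulates when integrating gyroscope measurements is exactly the long-term accuracy loss described above.
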
There is a wealth of books and other literature available to engineers seeking to understand what machine learning is and how it can be used in their everyday work. This presents the problem of where the engineer should start. The answer is often "for a general, but
slightly outdated introduction, read this book; for a detailed
survey of methods based on probabilistic models, check this
reference; to learn about statistical learning, this text is
useful" and so on. This monograph provides the starting point to
the literature that every engineer new to machine learning needs.
It offers a basic and compact reference that describes key ideas
and principles in simple terms and within a unified treatment,
encompassing recent developments and pointers to the literature for
further study. A Brief Introduction to Machine Learning for
Engineers is the entry point to machine learning for students,
practitioners, and researchers with an engineering background in
probability and linear algebra.
Sensors are becoming increasingly omnipresent throughout society.
These sensors generate a billion gigabytes of data every day. With
the availability of immense computing power at central locations,
the local storage and transmission of the data to a central
location becomes the bottleneck in the real-time processing of the
mass of data. Recently, compressed sensing has emerged as a technique to alleviate these problems, but in such schemes much of the data is still blindly discarded, without being examined, in order to achieve acceptable throughput rates. Sparse Sensing for Statistical Inference introduces and reviews a new technique called Sparse Sensing that reduces the amount of data that must be collected in the first place, providing an efficient and cost-effective method for data collection.
This monograph provides the reader with a comprehensive overview of
this technique and a framework that can be used by researchers and
engineers in implementing the technique in practical sensing
systems.
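
As a rough, self-contained illustration of the idea of choosing which data to collect (a standard greedy design heuristic, not necessarily the estimators developed in the monograph), the sketch below selects a few rows of a linear measurement model to approximately maximize the log-determinant of the resulting Fisher information matrix; all names are illustrative.

import numpy as np

def greedy_sensor_selection(A, k):
    """Greedily pick k rows of A (one row per candidate sensor) to
    approximately maximize log det of the Fisher information A_S^T A_S."""
    m, n = A.shape
    selected = []
    info = 1e-6 * np.eye(n)              # tiny ridge keeps the matrix invertible
    for _ in range(k):
        best_i, best_gain = None, -np.inf
        for i in range(m):
            if i in selected:
                continue
            a = A[i]
            # marginal gain from the matrix determinant lemma:
            # log det(info + a a^T) - log det(info) = log(1 + a^T info^{-1} a)
            gain = np.log(1.0 + a @ np.linalg.solve(info, a))
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        info += np.outer(A[best_i], A[best_i])
    return selected

# Example: keep 3 of 20 candidate sensors for a 4-parameter linear model
rng = np.random.default_rng(0)
print(greedy_sensor_selection(rng.standard_normal((20, 4)), 3))
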
As a major breakthrough in artificial intelligence, deep learning
has achieved impressive success on solving grand challenges in many
fields including speech recognition, natural language processing,
computer vision, image and video processing, and multimedia. This
monograph provides a historical overview of deep learning and
focuses on its applications in object recognition, detection, and
segmentation, which are key challenges of computer vision and have
numerous applications to images and videos. Specifically, the topics
covered under object recognition include image classification on
ImageNet, face recognition, and video classification. In detection,
the monograph covers general object detection on ImageNet,
pedestrian detection, face landmark detection (face alignment), and
human landmark detection (pose estimation). Finally, within
segmentation, it covers the most recent progress on scene labeling,
semantic segmentation, face parsing, human parsing, and saliency
detection. Concrete examples of these applications explain the key
points that make deep learning outperform conventional computer
vision systems. Deep Learning in Object Recognition, Detection, and
Segmentation provides a comprehensive introductory overview of a
topic that is having major impact on many areas of research in
signal processing, computer vision, and machine learning. This is a
must-read for students and researchers new to these fields.
Video Coding is the second part of the two-part monograph
Fundamentals of Source and Video Coding by Wiegand and Schwarz.
This part describes the application of the techniques described in
the first part to video coding. In doing so it provides a
description of the fundamental concepts of video coding and, in
particular, the signal processing in video encoders and decoders.
The human visual system has evolved to have the ability to
selectively focus on the most relevant parts of a visual scene.
This mechanism, referred to as visual attention, has been the focus
of several neurological and psychological studies in the past few
decades. These studies have inspired several computational visual
attention models which have been successfully applied to problems
in computer vision and robotics. Computational Visual Attention
Models provides a comprehensive survey of the state-of-the-art in
computational visual attention modelling with a special focus on
the latest trends. Several models published since 2012 are reviewed, and the theoretical advantages and disadvantages of each approach are discussed. In addition, existing methodologies for evaluating computational models through the use of eye-tracking data are described, along with the visual attention performance metrics used. The shortcomings of existing approaches, and ways to overcome them, are also covered. Finally, a subjective evaluation for
benchmarking existing visual attention metrics is presented and
open problems in visual attention are highlighted. This monograph
provides the reader with an in-depth survey of the research
conducted to date in computational visual attention models and
provides the basis for further research in this exciting area.
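
To make the evaluation side concrete, here is a minimal sketch of one widely used fixation-based performance metric, the Normalized Scanpath Saliency (NSS), which scores a saliency map by its standardized values at recorded eye-tracking fixations. It is only one of the metrics such benchmarks compare, and the names below are illustrative.

import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: mean of the z-scored saliency map
    sampled at the human fixation locations, given as (row, col) pairs."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)
    return float(s[list(rows), list(cols)].mean())

# A map peaked where the observer actually looked scores well above zero
smap = np.zeros((64, 64))
smap[30:34, 30:34] = 1.0
print(nss(smap, [(31, 31), (32, 33)]))
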
Despite the different nature of financial engineering and
electrical engineering, both areas are intimately connected on a
mathematical level. The foundations of financial engineering lie in the statistical analysis of numerical time series and the modeling of the behavior of the financial markets in order to perform predictions and systematically optimize investment strategies. Similarly, the foundations of electrical engineering, for instance wireless communication systems, lie in statistical signal processing and the modeling of communication channels in order to
perform predictions and systematically optimize transmission
strategies. Both foundations are the same in disguise. It is often
the case in science that the same or very similar methodologies are
developed and applied independently in different areas. A Signal
Processing Perspective of Financial Engineering is about investment
in financial assets treated as a signal processing and optimization
problem. It explores such connections and capitalizes on the
existing mathematical tools developed in wireless communications
and signal processing to solve real-life problems arising in the
financial markets in an unprecedented way. It provides
straightforward and systematic access to financial engineering for
researchers in signal processing and communications so that they
can understand problems in financial engineering more easily and
may even apply signal processing techniques to handle some
financial problems.
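
As one small, concrete instance of this connection (a textbook computation, not a method quoted from the book), the classical mean-variance portfolio below is obtained from the same linear algebra as a minimum-variance beamformer: both solve a quadratic program whose unnormalized solution is the inverse covariance matrix applied to a mean-return or steering vector. The names and numbers are illustrative.

import numpy as np

def mean_variance_weights(mu, Sigma):
    """Markowitz-style portfolio: trade expected return mu^T w against
    risk w^T Sigma w; the optimizer is Sigma^{-1} mu up to scaling."""
    w = np.linalg.solve(Sigma, mu)   # same algebra as an MVDR beamformer weight
    return w / w.sum()               # normalize to a fully invested portfolio

mu = np.array([0.05, 0.07, 0.03])            # expected asset returns
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.08]])       # return covariance matrix
print(mean_variance_weights(mu, Sigma))
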
H∞ robust design is an advancing methodology that aims to achieve a system's design objectives in the presence of intrinsic random fluctuations and external disturbances. This book introduces several robust design methods, ranging from linear to nonlinear systems and from the frequency domain to the time domain. It provides not only a complete theoretical development and application of H∞ robust design over the last three decades, but also an integrated platform for control, signal processing, communication, systems and synthetic biology. Based on the theoretical H∞ robust design results, the authors also give practical design examples to illustrate the procedure and validate the performance of the proposed H∞ methods with computational simulations and tables.
Covariance matrices have found applications in many diverse areas. These include beamforming in array processing, portfolio analysis in finance, classification of data, and the handling of high-frequency data. Structured Robust Covariance Estimation
considers the estimation of covariance matrices in non-standard
conditions including heavy-tailed distributions and outlier
contamination. Prior knowledge on the structure of these matrices
is exploited in order to improve the estimation accuracy. The
distributions, structures and algorithms are all based on an extension of convex optimization to manifolds. The monograph also provides a self-contained introduction to, and survey of, the theory known as geodesic convexity, a generalized form of convexity
associated with positive definite matrix variables. The fundamental
g-convex sets and functions are detailed, along with the operations
that preserve them, and their application to covariance estimation.
This monograph will be of interest to researchers and students
working in signal processing, statistics and optimization.
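
For a taste of robust estimation under heavy tails (one classical unstructured estimator, not the structured, manifold-based estimators the monograph develops), the fixed-point iteration below computes Tyler's M-estimator of scatter; the names are illustrative.

import numpy as np

def tyler_scatter(X, n_iter=100, tol=1e-8):
    """Tyler's M-estimator of scatter for the rows of X: robust to
    heavy-tailed elliptical data, normalized here to trace p."""
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(Sigma)
        d = np.einsum('ij,jk,ik->i', X, inv, X)   # x_i^T Sigma^{-1} x_i per sample
        S = (p / n) * (X / d[:, None]).T @ X      # reweighted sample covariance
        S *= p / np.trace(S)                      # fix the arbitrary scale
        if np.linalg.norm(S - Sigma) < tol:
            return S
        Sigma = S
    return Sigma

# Heavy-tailed example: Student-t samples with 2 degrees of freedom
rng = np.random.default_rng(1)
print(tyler_scatter(rng.standard_t(df=2, size=(500, 3))))
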
This research monograph has the following main theme. Given an interpolation function, which is meant to determine an estimate of the unknown signal value, the reader can use the traditional approach (the traditional interpolation function) to estimate the numerical value of the signal. Alternatively, the reader can follow the theoretical developments offered in the book and design, on the basis of the unified theory it describes, three new interpolation functions with improved approximation capabilities. That is, under the unified theory, the book offers three new classes of interpolation functions with an improved ability to approximate the true, unknown signal. These works were published in 2011 and submitted for peer review, and they are now presented to the public through this new publication. There are likely to be three main types of readership. The primary readership consists of users of libraries that may adopt the book as a reference. The secondary readership consists of instructors and professors of courses in applied mathematics, signal and image interpolation, signal and image processing, biomedical imaging, or biomedical engineering. In that case, the book can serve as an additional educational resource for both undergraduate and graduate students, for assigning homework or projects to be included in the coursework. The tertiary readership consists of apprentices and enthusiasts of mathematics. In that case, the book can be used to spend time in the pursuit of intellectual enrichment: such readers would study the methodology of the unified theory, apply it to design new interpolation functions and, should they wish to take the interest further, proceed to the analysis of the results and possibly to the discussion and dissemination of the knowledge gained from this book.
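
As a point of reference for what a traditional interpolation function looks like (a standard example, not one of the three new functions the book derives), the sketch below evaluates Keys' cubic convolution kernel to estimate a signal value between unit-spaced samples; all names are illustrative.

import numpy as np

def keys_cubic(x, a=-0.5):
    """Keys' cubic convolution kernel, a traditional interpolation
    function supported on |x| < 2."""
    x = np.abs(x)
    return np.where(x <= 1, (a + 2)*x**3 - (a + 3)*x**2 + 1,
           np.where(x < 2, a*x**3 - 5*a*x**2 + 8*a*x - 4*a, 0.0))

def interpolate(samples, t):
    """Estimate the unknown signal value at fractional position t by
    convolving the unit-spaced samples with the kernel."""
    k = np.arange(len(samples))
    return float(np.sum(samples * keys_cubic(t - k)))

s = np.sin(0.5 * np.arange(10))                 # unit-spaced samples
print(interpolate(s, 4.3), np.sin(0.5 * 4.3))   # estimate vs. true value
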
If information theory and estimation theory are thought of as two
scientific languages, then their key vocabularies are information
measures and estimation measures, respectively. The basic
information measures are entropy, mutual information and relative
entropy. Among the most important estimation measures are mean
square error (MSE) and Fisher information. Playing a paramount role
in information theory and estimation theory, those measures are
akin to mass, force and velocity in classical mechanics, or energy,
entropy and temperature in thermodynamics. The Interplay Between
Information and Estimation Measures is intended as a handbook of known formulas that directly relate information measures and
estimation measures. It provides intuition and draws connections
between these formulas, highlights some important applications, and
motivates further explorations. The main focus is on such formulas
in the context of the additive Gaussian noise model, with lesser
treatment of others such as the Poisson point process channel. Also
included are a number of new results which are published here for
the first time. Proofs of some basic results are provided, whereas
many more technical proofs already available in the literature are
omitted. In 2004, the authors of this monograph found a general
differential relationship commonly referred to as the I-MMSE
formula. In this book, a new, complete proof of the I-MMSE formula is developed, including some technical details omitted from the original papers. The book concludes by highlighting the impact of the information-estimation relationships on a variety of information-theoretic problems of current interest, and provides some further perspective on their applications.
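
For readers meeting it here for the first time, the I-MMSE formula for the scalar additive Gaussian noise model $Y = \sqrt{\mathrm{snr}}\,X + N$, with $N \sim \mathcal{N}(0,1)$ independent of $X$ and mutual information measured in nats, reads

$$\frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\, I\big(X;\sqrt{\mathrm{snr}}\,X+N\big) = \frac{1}{2}\,\mathrm{mmse}(\mathrm{snr}), \qquad \mathrm{mmse}(\mathrm{snr}) = \mathbb{E}\big[(X-\mathbb{E}[X\mid Y])^2\big].$$
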
Deep Learning provides an overview of general deep learning
methodology and its applications to a variety of signal and
information processing tasks. The application areas are chosen with
the following three criteria in mind: (1) expertise or knowledge of
the authors; (2) the application areas that have already been
transformed by the successful use of deep learning technology, such
as speech recognition and computer vision; and (3) the application
areas that have the potential to be impacted significantly by deep
learning and that have been benefitting from recent research
efforts, including natural language and text processing,
information retrieval, and multimodal information processing
empowered by multitask deep learning. This is a timely and
important book for researchers and students with an interest in
deep learning methodology and its applications in signal and
information processing.
This book aims to serve as an introductory text on algebraic coding
theory. The contents are suitable for a final-year undergraduate or beginning graduate course in Electrical Engineering. The
material will give the reader knowledge of coding fundamentals
essential for a deeper understanding of state-of-the-art coding
systems. This book will also serve as a quick reference for
students or practitioners who have not been exposed to the topic,
but need it for specific applications such as cryptography and
communications. The book will cover linear error-correcting block
codes from elementary principles, going through cyclic codes and
then covering some finite field algebra, Goppa codes, algebraic
decoding algorithms and applications in public-key cryptography and
private-key cryptography. Three appendices will cover the Gilbert bound and some related derivations, a derivation of the MacWilliams identities based on the probability of undetected error, and the finite field Fourier transform and the Euclidean algorithm for polynomials.
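
As a flavor of the elementary principles involved (a standard textbook construction, not code from the book), the sketch below builds the Hamming (7,4) code, encoding with a systematic generator matrix and correcting any single bit error by syndrome decoding over GF(2); all names are illustrative.

import numpy as np

# Systematic Hamming (7,4): generator G = [I | P], parity check H = [P^T | I]
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(msg):
    return (msg @ G) % 2                  # arithmetic over GF(2)

def decode(received):
    syndrome = (H @ received) % 2
    if syndrome.any():                    # nonzero syndrome: a single-bit error
        # the error position is the (unique) column of H equal to the syndrome
        pos = int(np.where((H.T == syndrome).all(axis=1))[0][0])
        received = received.copy()
        received[pos] ^= 1
    return received[:4]                   # systematic code: message bits first

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
cw[5] ^= 1                                # channel flips one bit
print(decode(cw), msg)                    # the error is corrected
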
ArchiMate®, an Open Group Standard, is an open and independent modelling language for Enterprise Architecture that is supported by different tool vendors and consulting firms. ArchiMate provides instruments to enable enterprise architects to describe, analyze, and visualize the relationships among business domains in an unambiguous way. This book provides the official specification of ArchiMate 2.1 from The Open Group. ArchiMate 2.1 is a maintenance update to ArchiMate 2.0, addressing comments raised since the introduction of ArchiMate 2.0 in 2012. The ArchiMate 2.1 Standard supports modelling throughout the TOGAF® Architecture Development Method (ADM). The intended audience is threefold:
* Enterprise Architecture practitioners, such as architects (e.g. application, information, process, infrastructure, and, obviously, enterprise architects), senior and operational management, project leaders, and anyone committed to working within the reference framework defined by the Enterprise Architecture.
* Those who intend to implement ArchiMate in a software tool; they will find a complete and detailed description of the language in this book.
* The academic community, on which we rely for amending and improving the language, based on state-of-the-art research results in the enterprise architecture field.