This book provides various speech enhancement algorithms for digital hearing aids. It covers information on noise signals extracted from silences in the speech signal, together with a description of the algorithm used for this purpose. Different types of adaptive filters, such as Least Mean Squares (LMS), Normalized LMS (NLMS) and Recursive Least Squares (RLS), are described for noise reduction in speech signals. Different types of noise are used to generate noisy speech signals, and accordingly information on various noise signals is provided. The comparative performance of various adaptive filters for noise reduction in speech signals is also described. In addition, the book provides a speech enhancement technique using adaptive filtering and the necessary frequency-band strength enhancement using the wavelet transform, as required by the audiogram for digital hearing aids. Presents speech enhancement techniques for improving the performance of digital hearing aids; Covers various types of adaptive filters and their advantages and limitations; Provides a hybrid speech enhancement technique using the wavelet transform and adaptive filters.
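The adaptive noise-cancellation scheme the blurb describes can be illustrated with a short normalized-LMS (NLMS) sketch. This is a generic textbook formulation, not code from the book; the filter order and step size below are illustrative assumptions:

```python
import numpy as np

def nlms_denoise(noisy, reference, order=8, mu=0.2, eps=1e-8):
    """Normalized LMS adaptive noise canceller: estimates the noise in
    `noisy` from a correlated `reference` signal and subtracts it, so
    the running error is the enhanced speech signal."""
    w = np.zeros(order)                  # adaptive filter weights
    out = np.zeros(len(noisy))           # enhanced output
    for n in range(order, len(noisy)):
        x = reference[n - order:n][::-1]   # latest reference samples
        e = noisy[n] - w @ x               # error = enhanced sample
        w += mu * e * x / (x @ x + eps)    # normalized weight update
        out[n] = e
    return out
```

On a test tone corrupted by filtered white noise, the mean-squared error against the clean signal falls well below that of the unprocessed input once the weights converge; RLS would converge faster at higher per-sample cost.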
This book addresses the challenges and design trade-offs arising during the hardware design of Faster-than-Nyquist (FTN) signaling transceivers. The authors describe how to design for coexistence between the FTN system described and orthogonal frequency-division multiplexing (OFDM) systems, enabling readers to design FTN-specific processing blocks as add-ons to the conventional transceiver chain. Provides a comprehensive introduction to Faster-than-Nyquist (FTN) signaling transceivers, covering both theory and hardware implementation.
Advancements in digital sensor technology, digital image analysis techniques, as well as computer software and hardware have brought together the fields of computer vision and photogrammetry, which are now converging towards sharing, to a great extent, objectives and algorithms. The potential for mutual benefits by the close collaboration and interaction of these two disciplines is great, as photogrammetric know-how can be aided by the most recent image analysis developments in computer vision, while modern quantitative photogrammetric approaches can support computer vision activities. Devising methodologies for automating the extraction of man-made objects (e.g. buildings, roads) from digital aerial or satellite imagery is an application where this cooperation and mutual support is already reaping benefits. The valuable spatial information collected using these interdisciplinary techniques is of improved qualitative and quantitative accuracy. This book offers a comprehensive selection of high-quality and in-depth contributions from world-wide leading research institutions, treating theoretical as well as implementational issues, and representing the state-of-the-art on this subject among the photogrammetric and computer vision communities.
This graduate-level text provides a language for understanding, unifying, and implementing a wide variety of algorithms for digital signal processing - in particular, to provide rules and procedures that can simplify or even automate the task of writing code for the newest parallel and vector machines. It thus bridges the gap between digital signal processing algorithms and their implementation on a variety of computing platforms. The mathematical concept of tensor product is a recurring theme throughout the book, since these formulations highlight the data flow, which is especially important on supercomputers. Because of their importance in many applications, much of the discussion centres on algorithms related to the finite Fourier transform and to multiplicative FFT algorithms.
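As an illustration of the tensor-product language the blurb refers to, the splitting step of the finite Fourier transform can be written as a Kronecker-product factorization. The following is the standard Cooley-Tukey identity in that notation (a generic statement, not an equation taken from this book):

```latex
% DFT matrix of size N = mn factored via Kronecker (tensor) products:
%   F_k  = k-point DFT matrix,  I_k = identity,
%   T^N_n = diagonal matrix of twiddle factors,
%   L^N_m = stride-m permutation (data-flow reordering).
F_{mn} \;=\; (F_m \otimes I_n)\, T^N_n\, (I_m \otimes F_n)\, L^N_m
```

The factorization exposes the data flow directly: the permutation and the identity factors in each Kronecker product map onto strided memory access and independent parallel butterflies, which is what makes this formulation useful on vector and parallel machines.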
Contents (excerpt): 6.2 Representation of hints; 6.3 Monotonicity hints; 6.4 Theory (6.4.1 Capacity results; 6.4.2 Decision boundaries); 6.5 Conclusion; 6.6 References. 7 Analysis and Synthesis Tools for Robust SPRness (C. Mosquera, J.R. Hernandez, F. Perez-Gonzalez): 7.1 Introduction; 7.2 SPR Analysis of Uncertain Systems (7.2.1 The Polytopic Case; 7.2.2 The lp-Ball Case; 7.2.3 The Roots Space Case); 7.3 Synthesis of LTI Filters for Robust SPR Problems (7.3.1 Algebraic Design for Two Plants; 7.3.2 Algebraic Design for Three or More Plants; 7.3.3 Approximate Design Methods); 7.4 Experimental results; 7.5 Conclusions; 7.6 References. 8 Boundary Methods for Distribution Analysis (J.L. Sancho et al.): 8.1 Introduction (8.1.1 Building a Classifier System); 8.2 Motivation; 8.3 Boundary Methods as Feature-Set Evaluation (8.3.1 Results; 8.3.2 Feature Set Evaluation using Boundary Methods: Summary).
Despite their novelty, wavelets have a tremendous impact on a number of modern scientific disciplines, particularly on signal and image analysis. Because of their powerful underlying mathematical theory, they offer exciting opportunities for the design of new multi-resolution processing algorithms and effective pattern recognition systems. This book provides a much-needed overview of current trends in the practical application of wavelet theory. It combines cutting edge research in the rapidly developing wavelet theory with ideas from practical signal and image analysis fields. Subjects dealt with include balanced discussions on wavelet theory and its specific application in diverse fields, ranging from data compression to seismic equipment. In addition, the book offers insights into recent advances in emerging topics such as double density DWT, multiscale Bayesian estimation, symmetry and locality in image representation, and image fusion. Audience: This volume will be of interest to graduate students and researchers whose work involves acoustics, speech, signal and image processing, approximations and expansions, Fourier analysis, and medical imaging.
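To make the multi-resolution idea concrete, here is a minimal one-level Haar DWT, the simplest wavelet transform. This is a generic illustration of the analysis/synthesis split, not an algorithm from the volume:

```python
import numpy as np

def haar_dwt_step(x):
    """One level of the orthonormal Haar wavelet transform: splits an
    even-length signal into approximation (low-pass) and detail
    (high-pass) halves."""
    x = np.asarray(x, dtype=float)
    s = np.sqrt(2.0)
    approx = (x[0::2] + x[1::2]) / s   # pairwise averages
    detail = (x[0::2] - x[1::2]) / s   # pairwise differences
    return approx, detail

def haar_idwt_step(approx, detail):
    """Inverse of one Haar step; reconstruction is exact."""
    s = np.sqrt(2.0)
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / s
    x[1::2] = (approx - detail) / s
    return x
```

Recursing on the approximation half yields the multi-level decomposition used in compression and denoising; orthonormality means the transform preserves signal energy at every level.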
The goal of this book is to provide, for the first time, a reference to the most relevant applications of adaptive filtering techniques. Top researchers in the field contributed chapters addressing their specific topic of study. The topics are limited to acoustics, speech, wireless, and networking applications, where research is still very active and open. The book is roughly organized into two parts. In the first part, several applications in acoustics and speech are developed. The second part focuses on wireless and networking applications. Some chapters are tutorial in nature, while others present new research ideas, and all have in common the use of adaptive algorithms to solve real-world problems.
In his paper Theory of Communication [Gab46], D. Gabor proposed the use of a family of functions obtained from one Gaussian by time and frequency shifts. Each of these is well concentrated in time and frequency; together they are meant to constitute a complete collection of building blocks into which more complicated time-dependent functions can be decomposed. The application to communication proposed by Gabor was to send the coefficients of the decomposition of a signal into this family, rather than the signal itself. This remained a proposal; as far as I know there were no serious attempts to implement it for communication purposes in practice, and in fact, at the critical time-frequency density proposed originally, there is a mathematical obstruction: as was understood later, the family of shifted and modulated Gaussians spans the space of square integrable functions [BBGK71, Per71] (it even has one function to spare [BGZ75] . . . ) but it does not constitute what we now call a frame, leading to numerical instabilities. The Balian-Low theorem (about which the reader can find more in some of the contributions in this book) and its extensions showed that a similar mishap occurs if the Gaussian is replaced by any other function that is "reasonably" smooth and localized. One is thus led naturally to considering a higher time-frequency density.
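For reference, the Gabor family described above can be written explicitly; the following is the standard formulation (with a standard normalization of the Gaussian window, not taken from this text):

```latex
% Gabor family: time-frequency shifts of a Gaussian window g,
% on the lattice of time step a and frequency step b.
g_{m,n}(t) \;=\; e^{2\pi i m b t}\, g(t - n a), \qquad m, n \in \mathbb{Z},
\qquad g(t) \;=\; 2^{1/4} e^{-\pi t^2}.
```

Gabor's original proposal used the critical density ab = 1; the Balian-Low obstruction mentioned above means that a frame of smooth, well-localized windows requires the oversampled regime ab < 1.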
Acquiring spatial data for geoinformation systems is still mainly done by human operators who analyze images using classical photogrammetric equipment or digitize maps, possibly assisted by some low-level image processing. Automation of these tasks is difficult due to the complexity of the objects, the topography, and the deficiency of current pattern recognition and image analysis tools for achieving a reliable transition from the data to a high-level description of topographic objects. It appears that progress in automation can only be achieved by incorporating domain-specific semantic models into the analysis procedures. This volume collects papers which were presented at the Workshop "SMATI '97." The workshop focused on "Semantic Modeling for the Acquisition of Topographic Information from Images and Maps." This volume offers a comprehensive selection of high-quality and in-depth contributions by experts of the field coming from leading research institutes, treating both theoretical and implementation issues and integrating aspects of photogrammetry, cartography, computer vision, and image understanding.
Dimensions of Uncertainty in Communication Engineering is a comprehensive and self-contained introduction to the problems of nonaleatory uncertainty and the mathematical tools needed to solve them. The book gathers together tools derived from statistics, information theory, moment theory, interval analysis and probability boxes, dependence bounds, nonadditive measures, and Dempster-Shafer theory. While the book is mainly devoted to communication engineering, the techniques described are also of interest to other application areas, and commonalities to these are often alluded to through a number of references to books and research papers. This is an ideal supplementary book for courses in wireless communications, providing techniques for addressing epistemic uncertainty, as well as an important resource for researchers and industry engineers. Students and researchers in other fields such as statistics, financial mathematics, and transport theory will gain an overview and understanding on these methods relevant to their field.
For undergraduate-level courses in Signals and Systems. This comprehensive exploration of signals and systems develops continuous-time and discrete-time concepts/methods in parallel -- highlighting the similarities and differences -- and features introductory treatments of the applications of these basic methods in such areas as filtering, communication, sampling, discrete-time processing of continuous-time signals, and feedback. Relatively self-contained, the text assumes no prior experience with system analysis, convolution, Fourier analysis, or Laplace and z-transforms.
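The convolution sum at the core of such a course can be sketched directly; this is a generic illustration of the discrete-time case, not material from the text:

```python
import numpy as np

def convolve_direct(x, h):
    """Discrete-time convolution y[n] = sum_k x[k] * h[n-k]:
    the response of an LTI system with impulse response h to input x."""
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    y = np.zeros(len(x) + len(h) - 1)
    for k, xk in enumerate(x):           # superpose shifted, scaled copies of h
        y[k:k + len(h)] += xk * h
    return y
```

The loop makes the superposition interpretation explicit: each input sample launches a scaled, shifted copy of the impulse response, and the output is their sum. The continuous-time convolution integral is the same idea with the sum replaced by an integral.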
This book reviews cutting-edge developments in neuronal signal processing (NSP), systematically introducing readers to various models and methods in the context of NSP. Neuronal signal processing is a comparatively new field in computer science and neuroscience, and is rapidly establishing itself as an important tool, one that offers an ideal opportunity to forge stronger links between experimentalists and computer scientists. This new signal-processing tool can be used in conjunction with existing computational tools to analyse neural activity monitored through different recording modalities, such as spike trains, local field potentials and EEG. The analysis of neural activity can yield vital insights into the function of the brain. This book highlights the contribution of signal processing to computational neuroscience by providing a forum for researchers in this field to share their experiences to date.
This "bible" of a whole generation of communications engineers was originally published in 1958. The focus is on the statistical theory underlying the study of signals and noise in communications systems, emphasizing techniques as well as results. End-of-chapter problems are provided.
This book describes an ECG processing architecture that guides biomedical SoC developers, from theory to implementation and testing. The authors provide complete coverage of the digital circuit implementation of an ultra-low power biomedical SoC, comprised of a detailed description of an ECG processor implemented and fabricated on chip. Coverage also includes the challenges and tradeoffs of designing ECG processors. Describes digital circuit architecture for implementing ECG processing algorithms on chip; Includes coverage of signal processing techniques for ECG processing; Features ultra-low power circuit design techniques; Enables design of ECG processing architectures and their respective on-chip implementation.
In this book, signals or images described by functions whose number of arguments varies from one to five are considered. These arguments can be time, spatial dimensions, or wavelength in a polychromatic signal. The book discusses the basics of mathematical models of signals, their transformations in technical pre-processing systems, and criteria of system quality. The models are used for the solution of practical tasks of system analysis, measurement and optimization, and signal restoration. Several examples are given.
Partial Contents: Reliability Concepts; Device Reliability; Hazard Rates; Monitoring Reliability; Specific Device Information, and more. Appendixes. 60 illustrations.
Dynamic logic (DL) has recently had a major impact on development in several areas of modeling and algorithm design. The book discusses classical algorithms used for 30 to 50 years (where improvements are often measured by signal-to-clutter ratio), as well as new areas which did not previously exist. These achievements were recognized by national and international awards. Emerging areas include cognitive, emotional, and intelligent systems, data mining, modeling of the mind, higher cognitive functions, the evolution of languages, and others. Classical areas include detection, recognition, tracking, fusion, prediction, inverse scattering, and financial prediction. All these classical areas are extended to using mixture models, which was previously considered unsolvable in most cases. Recent neuroimaging experiments proved that the brain-mind actually uses DL. "Emotional Cognitive Neural Algorithms with Engineering Applications" is written for professional scientists and engineers developing computer and information systems, for professors teaching modeling and algorithms, and for students working on Masters and Ph.D. degrees in these areas. The book will be of interest to psychologists and neuroscientists interested in mathematical models of the brain and mind as well.
This new book from Richard Klemm, author of the highly successful Principles of Space-Time Adaptive Processing (IEE, 2002), examines the various applications of space-time adaptive processing, including OTH radar, ground target tracking, STAP in real-world clutter environments, jammer cancellation, superresolution, active sonar, seismics, and communications. Including contributions from distinguished international authors, the book provides a unique overview of the field of space-time processing. The book is divided into two parts: the first deals with the classical adaptive suppression of airborne and space-based radar clutter, and the second comprises miscellaneous applications in other fields such as communications, underwater sound, and seismics. The book will be of interest to those working in the field of sensor signal processing, in particular postgraduate students, research scientists, system engineers, university teachers, and research project managers.
Authored by engineers for engineers, this book is designed to be a practical and easy-to-understand solution sourcebook for real-world high-resolution and spotlight SAR image processing. Widely used algorithms are presented for both system errors and propagation phenomena, together with numerous formerly classified image examples. As well as providing the details of digital processor implementation, the text presents the polar format algorithm and two modern algorithms for spotlight image formation processing: the range migration algorithm and the chirp scaling algorithm. With practical needs in mind, the authors have included an entire chapter devoted to SAR system performance, including image quality metrics and image quality assessment. Another chapter contains image formation processor design examples for two operational fine-resolution SAR systems. This is a reference for radar engineers, managers, system developers, and students in high-resolution microwave imaging courses. It includes 662 equations, 265 figures, and 55 tables.
This book presents state-of-the-art techniques for radiation-hardened, high-resolution time-to-digital converters and low-noise frequency synthesizers. Throughout the book, advanced degradation mechanisms and error sources are discussed, and several ways to prevent such errors are presented. An overview of the prerequisite physics of nuclear interactions is compiled in an easy-to-understand chapter. The book is structured so that the different hardening techniques and solutions are supported by theory and experimental data, with their various tradeoffs. It is based on leading-edge research conducted in collaboration between KU Leuven and CERN, the European Center for Nuclear Research. Describes in detail advanced techniques to harden circuits against ionizing radiation; Provides a practical way to learn and understand radiation effects in time-based circuits; Includes an introduction to the underlying physics, circuit design, and advanced techniques, accompanied by experimental data.
Document imaging is a new discipline in applied computer science. It is building bridges between computer graphics, the world of prepress and press, and the areas of color vision and color reproduction. The focus of this book is of special relevance to people learning how to utilize and integrate such available technology as digital printing or short run color, how to make use of CIM techniques for print products, and how to evaluate related technologies that will become relevant in the next few years. This book is the first to give a comprehensive overview of document imaging, the areas involved, and how they relate. For readers with a background in computer graphics it gives insight into all problems related to putting information in print, a field only very thinly covered in textbooks on computer graphics.
The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.
Fourier analysis is one of the most useful tools in many applied sciences. The recent developments of wavelet analysis indicate that in spite of its long history and well-established applications, the field is still one of active research. This text bridges the gap between engineering and mathematics, providing a rigorously mathematical introduction to Fourier analysis, wavelet analysis and related mathematical methods, while emphasizing their uses in signal processing and other applications in communications engineering. The interplay between Fourier series and Fourier transforms is at the heart of signal processing, which is couched most naturally in terms of the Dirac delta function and Lebesgue integrals. The exposition is organized into four parts. The first is a discussion of one-dimensional Fourier theory, including the classical results on convergence and the Poisson sum formula. The second part is devoted to the mathematical foundations of signal processing: sampling, filtering, digital signal processing. Fourier analysis in Hilbert spaces is the focus of the third part, and the last part provides an introduction to wavelet analysis, time-frequency issues, and multiresolution analysis. An appendix provides the necessary background on Lebesgue integrals.
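As a concrete sample of the classical results mentioned, the Poisson sum formula can be stated as follows (the standard statement, under the ordinary-frequency normalization of the Fourier transform shown; conventions differ between texts):

```latex
% Poisson sum formula, for suitably smooth and decaying f:
\sum_{n \in \mathbb{Z}} f(n + x) \;=\; \sum_{k \in \mathbb{Z}} \hat{f}(k)\, e^{2\pi i k x},
\qquad \text{where } \hat{f}(\xi) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i t \xi}\, dt .
```

Setting x = 0 gives the familiar form relating samples of f to samples of its transform, which underlies the sampling theory treated in the second part of the book.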