It is a great pleasure to be asked to write the Preface for this book on trellis decoding of error-correcting block codes. The subject is extremely significant both theoretically and practically, and is very timely because of recent developments in the microelectronic implementation and range of application of error-control coding systems based on block codes. The authors have been notably active in signal processing and coding research and development for several years, and are therefore very well placed to contribute to the state of the art on the subject of trellis decoding. In particular, the book represents a unique approach to many practical aspects of the topic. As the authors point out, there are two main classes of error-control codes: block codes and convolutional codes. Block codes came first historically and have a well-developed mathematical structure. Convolutional codes came later, and have developed heuristically, though a more formal treatment has emerged via recent developments in the theory of symbolic dynamics. Maximum likelihood (ML) decoding of powerful codes in both these classes is computationally complex in the general case; that is, ML decoding falls into the class of NP-hard computational problems. This arises because the decoding complexity is an exponential function of key parameters of the code.
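As a rough illustration of the trellis decoding idea the preface refers to, the sketch below runs the Viterbi algorithm on a toy rate-1/2 convolutional code (constraint length 3, octal generators 7 and 5). It is not taken from the book; the code, generators and message are illustrative only, and they show why the decoding cost grows with the number of trellis states.

```python
# Minimal hard-decision Viterbi decoder for a toy rate-1/2 convolutional
# code (constraint length 3, octal generators 7 and 5). The trellis has
# 2^(K-1) = 4 states; the exponential dependence on such code parameters
# is exactly the complexity issue the preface describes.

G = (0b111, 0b101)  # generator polynomials

def encode(bits):
    state = 0
    out = []
    for b in bits:
        reg = (b << 2) | state
        out.append((bin(reg & G[0]).count("1") & 1,
                    bin(reg & G[1]).count("1") & 1))
        state = reg >> 1
    return out

def viterbi_decode(received):
    INF = float("inf")
    metrics = {0: 0.0}   # path metric per trellis state
    paths = {0: []}      # surviving input sequence per state
    for r in received:
        new_m, new_p = {}, {}
        for state, m in metrics.items():
            for b in (0, 1):
                reg = (b << 2) | state
                o = (bin(reg & G[0]).count("1") & 1,
                     bin(reg & G[1]).count("1") & 1)
                branch = (o[0] ^ r[0]) + (o[1] ^ r[1])  # Hamming distance
                ns = reg >> 1
                if m + branch < new_m.get(ns, INF):
                    new_m[ns] = m + branch
                    new_p[ns] = paths[state] + [b]
        metrics, paths = new_m, new_p
    best = min(metrics, key=metrics.get)
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]                            # two flush zeros at the end
codeword = encode(msg)
codeword[2] = (codeword[2][0] ^ 1, codeword[2][1])  # inject one bit error
print(viterbi_decode(codeword))                     # recovers the original message
```

The decoder corrects the single injected error because the ML path through the trellis remains the transmitted one; for long block codes the equivalent trellis can have exponentially many states, which is the practical motivation for the book.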
This monograph explores intrabody communication (IBC), a novel non-RF wireless data communication technique that uses the human body itself as the communication channel or transmission medium. In particular, the book investigates intrabody communication considering limb joint effects within the transmission frequency range 0.3-200 MHz. Based on in-vivo experiments which determine the effects of the size, situation, and location of joints on the IBC, the book proposes a new IBC circuit model explaining elbow joint effects. This model not only takes the limb joint effects of the body into account but also considers the influence of measurement equipment in the higher frequency band, thus predicting signal attenuation behavior over wider frequency ranges. Finally, this work proposes transmitter and receiver architectures for intrabody communication. A carrier-free scheme based on impulse radio for the IBC is implemented on an FPGA.
This book is a compendium of the finest research in nanoplasmonic sensing done around the world in the last decade. It describes basic theoretical considerations of nanoplasmons in the dielectric environment, gives examples of the multitude of applications of nanoplasmonics in biomedical and chemical sensing, and provides an overview of future trends in optical and non-optical nanoplasmonic sensing. Specifically, readers are guided through both the fundamentals and the latest research in the two major fields to which nanoplasmonic sensing is applied - bio- and chemo-sensing - and are then given the state-of-the-art recipes used in nanoplasmonic sensing research.
Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. However, not all problems can be solved automatically; in many applications, human interaction is the only way to tackle them. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. Indeed, the idea of interactive computer systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existing PR and CV technologies can naturally evolve using this new paradigm. The chapters of this book present different successful case studies of multimodal interactive technologies for both image and video applications. They cover a wide spectrum of applications, ranging from interactive handwriting transcription to human-robot interaction in real environments.
This book presents advances in matrix and tensor data processing in the domain of signal, image and information processing. The theoretical mathematical approaches are discussed in the context of potential applications in sensor and cognitive systems engineering.
This volume includes proceedings articles presented at the Workshop on Paralinguistic Information and its Integration in Spoken Dialogue Systems held in Granada, Spain. The material focuses on the three broad areas of spoken dialogue systems for robotics, emotions and spoken dialogue systems, and spoken dialogue systems for real-world applications. The workshop proceedings are part of the 3rd Annual International Workshop on Spoken Dialogue Systems, which brings together researchers from all over the world working in the field of spoken dialogue systems. It provides an international forum for the presentation of research and applications, and for lively discussions among researchers as well as industrialists.
Deals with both the ultrashort laser-pulse technology in the few- to mono-cycle region and the laser-surface-controlled scanning-tunneling microscopy (STM) extending into the spatiotemporal extreme technology. The former covers the theory of nonlinear pulse propagation beyond the slowly-varying-envelope approximation, the generation and active chirp compensation of ultrabroadband optical pulses, the amplitude and phase characterization of few- to mono-cycle pulses, and the feedback field control for the mono-cycle-like pulse generation. In addition, the wavelength-multiplex shaping of ultrabroadband pulses, and the carrier-phase measurement and control of few-cycle pulses are described. The latter covers the CW-laser-excitation STM, the femtosecond-time-resolved STM and atomic-level surface phenomena controlled by femtosecond pulses.
This graduate-level text presents the fundamental physics of solid-state lasers, including the basis of laser action and the optical and electronic properties of laser materials. After an overview of the topic, the first part begins with a review of quantum mechanics and solid-state physics, spectroscopy, and crystal field theory; it then treats the quantum theory of radiation, the emission and absorption of radiation, and nonlinear optics; concluding with discussions of lattice vibrations and ion-ion interactions, and their effects on optical properties and laser action. The second part treats specific solid-state laser materials, the prototypical ruby and Nd-YAG systems being treated in greatest detail; and the book concludes with a discussion of novel and non-standard materials. Some knowledge of quantum mechanics and solid-state physics is assumed, but the discussion is as self-contained as possible, making this an excellent reference, as well as useful for independent study.
This book describes breath signal processing technologies and their applications in medical sample classification and diagnosis. First, it provides a comprehensive introduction to breath signal acquisition methods, based on different kinds of chemical sensors, together with the optimized selection and fusion acquisition scheme. It then presents preprocessing techniques, such as drift removal and feature extraction methods, and uses case studies to explore the classification methods. Lastly, it discusses promising research directions and potential medical applications of computerized breath diagnosis. It is a valuable interdisciplinary resource for researchers, professionals and postgraduate students working in various fields, including breath diagnosis, signal processing, pattern recognition, and biometrics.
The unprecedented growth in the range of multimedia services offered these days by modern telecommunication systems has been made possible only because of the advancements in signal processing technologies and algorithms. In the area of telecommunications, the application of signal processing allows new generations of systems to achieve performance close to theoretical limits, while in the area of multimedia, signal processing is the underlying technology making possible the realization of applications that not so long ago were considered just science fiction, or were not even dreamed about. We all learnt to adopt those achievements very quickly, but often the research enabling their introduction takes many years and a lot of effort. This book presents a group of invited contributions, some of which have been based on the papers presented at the International Symposium on DSP for Communication Systems held in Coolangatta on the Gold Coast, Australia, in December 2003. Part 1 of the book deals with applications of signal processing to transform what we hear or see to the form that is most suitable for transmission or storage for future retrieval. The first three chapters in this part are devoted to the processing of speech and other audio signals. The next two chapters consider image coding and compression, while the last chapter of this part describes the classification of video sequences in the MPEG domain.
This book addresses challenges faced by both the algorithm designer and the chip designer, who need to deal with the ongoing increase of algorithmic complexity and required data throughput for today's mobile applications. The focus is on implementation aspects and implementation constraints of individual components that are needed in transceivers for current standards, such as UMTS, LTE, WiMAX and DVB-S2. The application domain is the so-called outer receiver, which comprises the channel coding, interleaving stages, modulator, and multiple antenna transmission. Throughout the book, the focus is on advanced algorithms that are actually in use.
The photorefractive effect is now firmly established as one of the highest-sensitivity nonlinear optical effects, making it an attractive choice for use in many optical holographic processing applications. As with all technologies based on advanced materials, the rate of progress in the development of photorefractive applications has been principally limited by the rate at which breakthroughs in materials science have supplied better photorefractive materials. The last ten years have seen an upsurge of interest in photorefractive applications because of several advances in the synthesis and growth of new and sensitive materials. This book is a collection of many of the most important recent developments in photorefractive effects and materials. The introductory chapter, which provides the necessary tools for understanding a wide variety of photorefractive phenomena, is followed by seven contributed chapters that offer views of the state of the art in several different material systems. The second chapter represents the most detailed study to date on the growth and photorefractive performance of BaTiO3, one of the most important photorefractive ferroelectrics. The third chapter describes the process of permanently fixing holographic gratings in ferroelectrics, important for volumetric data storage with ultra-high data densities. The fourth chapter describes the discovery and theory of photorefractive spatial solitons. Photorefractive polymers are an exciting new class of photorefractive materials, described in the fifth chapter. Polymers have many advantages, primarily related to fabrication, that could promise a breakthrough to the marketplace because of ease and low cost of manufacturing.
This book covers the diagnosis and assessment of the various faults which can occur in a three-phase induction motor, namely rotor broken-bar faults, rotor-mass unbalance faults, stator winding faults, single phasing faults and crawling. Following a brief introduction, the second chapter describes the construction and operation of an induction motor, then reviews the range of known motor faults, some existing techniques for fault analysis, and some useful signal processing techniques. It includes an extensive literature survey to establish the research trends in induction motor fault analysis. Chapters three to seven describe the assessment of each of the five primary fault types. In the third chapter the rotor broken-bar fault is discussed and two methods of diagnosis are described: (i) diagnosis of the fault through radar analysis of the stator current Concordia and (ii) diagnosis through envelope analysis of the motor startup current using Hilbert and Wavelet Transforms. In chapter four, rotor-mass unbalance faults are assessed, and both transient and steady-state stator currents are analyzed using different techniques; an algorithm is also provided for identifying the case where rotor broken-bar and rotor-mass unbalance faults occur simultaneously. Chapter five considers stator winding faults and five different analysis techniques, chapter six covers diagnosis of single phasing faults, and chapter seven describes crawling and its diagnosis. Finally, chapter eight focuses on fault assessment, and presents a summary of the book together with a discussion of prospects for future research on fault diagnosis.
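The Concordia pattern analysis mentioned for broken-bar diagnosis can be illustrated with a minimal sketch. The Clarke (Concordia) transform and the synthetic balanced currents below are standard textbook material, not the book's own code; the point is that a healthy machine traces a circular trajectory in the alpha-beta plane, and fault-induced asymmetries distort that pattern.

```python
import math

# Clarke (Concordia) transform of three-phase stator currents. For a
# balanced (healthy) machine the (i_alpha, i_beta) trajectory is a circle
# of constant radius; broken-bar faults introduce asymmetries that
# distort it, which is the basis of the Concordia diagnosis above.
# The currents here are synthetic, purely for illustration.

def concordia(ia, ib, ic):
    i_alpha = math.sqrt(2.0 / 3.0) * ia - ib / math.sqrt(6.0) - ic / math.sqrt(6.0)
    i_beta = (ib - ic) / math.sqrt(2.0)
    return i_alpha, i_beta

f, I = 50.0, 10.0                     # supply frequency (Hz), amplitude (A)
ts = [n / 1000.0 for n in range(40)]  # 1 kHz sampling
radii = []
for t in ts:
    ia = I * math.cos(2 * math.pi * f * t)
    ib = I * math.cos(2 * math.pi * f * t - 2 * math.pi / 3)
    ic = I * math.cos(2 * math.pi * f * t + 2 * math.pi / 3)
    a, b = concordia(ia, ib, ic)
    radii.append(math.hypot(a, b))

# Balanced supply: the trajectory radius stays constant (a circle)
print(max(radii) - min(radii) < 1e-9)
```

In practice the measured currents of a faulty motor would make the radius fluctuate, and it is the shape of this deviation that is analyzed.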
Practical methods for the optimal analysis of multispectral and hyperspectral image data. The field of remote sensing is a cross-disciplinary one, involving professionals ranging from signal processing engineers to earth science researchers to private and public sector practitioners, in nearly every region of the globe. The signal theory approach offers powerful methods for analyzing the complex data involved in this field, methods which may not be familiar to many in non-engineering fields. In contrast to previous broad surveys of the subject, Signal Theory Methods in Multispectral Remote Sensing focuses on the practical knowledge data users of all types must have to optimally analyze multispectral and hyperspectral image data. Both a textbook and a self-teaching reference for professionals in the field, this book covers the fundamentals of the analysis of multispectral and hyperspectral image data from the point of view of signal processing engineering. Avoiding topics common to general treatments of remote sensing but not germane to practical applications, it offers concise discussions of:
As hyperspectral data becomes more widely available, the need for practical ways to analyze the very large volume of hyperspectral data on a personal computer makes this an extremely timely and useful reference for all professionals and researchers involved in remote sensing.
Visual Communication: An Information Theory Approach presents an entirely new look at the assessment and optimization of visual communication channels, such as are employed for telephotography and television. The electro-optical design of image gathering and display devices, and the digital processing for image coding and restoration, have remained independent disciplines which follow distinctly separate traditions; yet the performance of visual communication channels cannot be optimized just by cascading image-gathering devices, image-coding processors, and image-restoration algorithms as the three obligatory, but independent, elements of a modern system. Instead, to produce the best possible picture at the lowest data rate, it is necessary to jointly optimize image gathering, coding, and restoration. Although the mathematical development in Visual Communication: An Information Theory Approach is firmly rooted in familiar concepts of communication theory, it leads to formulations that are significantly different from those found in the traditional literature on either rate distortion theory or digital image processing. For example, the Wiener filter, which is perhaps the most common image restoration algorithm in the traditional digital image processing literature, fails to fully account for the constraints of image gathering and display. As demonstrated in the book, digitally restored images improve in sharpness and clarity when these constraints are properly accounted for. Visual Communication: An Information Theory Approach is unique in its extension of modern communication theory to the end-to-end assessment of visual communication, from scene to observer. As such, it ties together the traditional textbook literature on electro-optical design and digital image processing. This book serves as an invaluable reference for image processing and electro-optical system design professionals and may be used as a text for advanced courses on the subject.
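For readers unfamiliar with the Wiener filter the passage critiques, a minimal sketch of its per-frequency gain follows. The power spectra are assumed values chosen for illustration, and the sketch deliberately omits exactly what the book says classical restoration omits: the image-gathering and display constraints.

```python
# Classical Wiener restoration weights each spatial frequency by the
# signal-to-total-power ratio G = S / (S + N): frequencies where noise
# dominates are attenuated. The spectra S and N below are assumed
# values, not data from the book.

def wiener_gain(signal_power, noise_power):
    return signal_power / (signal_power + noise_power)

S = [100.0, 25.0, 4.0, 1.0]   # assumed signal power over a few frequency bins
N = [1.0, 1.0, 1.0, 1.0]      # assumed flat noise power
gains = [wiener_gain(s, n) for s, n in zip(S, N)]
print([round(g, 3) for g in gains])   # gain falls as noise starts to dominate
```

The book's argument is that this gain should be derived jointly with the image-gathering and display transfer functions rather than applied as an isolated post-processing step.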
This book highlights the need for an efficient Handover Decision (HD) mechanism to perform switches from one network to another and to provide unified and continuous mobile services that include seamless connectivity and ubiquitous service access. The author shows how the HD involves efficiently combining the handover initiation and network selection processes, and describes why network selection is a challenging task, central to the HD for any mobile user in a heterogeneous environment, involving a number of static and dynamic parameters. The author also discusses prevailing technical challenges such as Dynamic Spectrum Allocation (DSA) methods, spectrum sensing, cooperative communications, cognitive network architecture protocol design, cognitive network security challenges and dynamic adaptation algorithms for cognitive systems, as well as the evolving behavior of systems in general. The book allows the reader to optimize the sensing time for maximizing spectrum utilization, improve the lifetime of the cognitive radio network (CRN) using active scan spectrum sensing techniques, analyze the energy efficiency of a CRN, find a secondary-user spectrum allocation, perform dynamic handovers, and use efficient data communication in cognitive networks. Identifies energy-efficient spectrum sensing techniques for Cooperative Cognitive Radio Networks (CRN); shows how to maximize the energy capacity by minimizing the outage probability; features end-of-chapter summaries, performance measures, and case studies.
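As a hedged illustration of the spectrum sensing discussed above, the sketch below implements the simplest detector a cognitive radio can run, an energy detector: declare the channel busy when the average sample energy exceeds a threshold set from the noise floor. The threshold margin, noise model and test signal are assumptions for the sketch, not material from the book.

```python
import math
import random

# Toy energy detector for spectrum sensing: compare the mean sample
# energy against a margin over the known noise variance. The margin of
# 2.0 and the synthetic signals are illustrative assumptions only.

def energy_detect(samples, noise_var, margin=2.0):
    energy = sum(s * s for s in samples) / len(samples)
    return energy > margin * noise_var  # True => primary user present

random.seed(0)
noise_var = 1.0
noise_only = [random.gauss(0, math.sqrt(noise_var)) for _ in range(4000)]
with_signal = [n + 2.0 * math.sin(0.3 * i) for i, n in enumerate(noise_only)]

print(energy_detect(noise_only, noise_var))    # idle channel
print(energy_detect(with_signal, noise_var))   # occupied channel
```

The sensing-time optimization mentioned in the blurb amounts to choosing how many samples to accumulate here: more samples sharpen the energy estimate but reduce the time left for transmission.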
Landmarks are preferred image features for a variety of computer vision tasks such as image mensuration, registration, camera calibration, motion analysis, 3D scene reconstruction, and object recognition. The main advantages of using landmarks are robustness with respect to lighting conditions and other radiometric variations, as well as the ability to cope with large displacements in registration or motion analysis tasks. Also, landmark-based approaches are in general computationally efficient, particularly when using point landmarks. Note that the term landmark comprises both artificial and natural landmarks. Examples are corners or other characteristic points in video images, ground control points in aerial images, anatomical landmarks in medical images, prominent facial points used for biometric verification, markers at human joints used for motion capture in virtual reality applications, or in- and outdoor landmarks used for autonomous navigation of robots. This book covers the extraction of landmarks from images as well as the use of these features for elastic image registration. Our emphasis is on model-based approaches, i.e., on the use of explicitly represented knowledge in image analysis. We principally distinguish between geometric models describing the shape of objects (typically their contours) and intensity models, which directly represent the image intensities, i.e., the appearance of objects. Based on these classes of models we develop algorithms and methods for analyzing multimodality images such as traditional 2D video images or 3D medical tomographic images.
Realistic and immersive simulations of land, sea, and sky are requisite to the military use of visual simulation for mission planning. Until recently, the simulation of natural environments has been limited first of all by the pixel resolution of visual displays. Visual simulation of those natural environments has also been limited by the scarcity of detailed and accurate physical descriptions of them. Our aim has been to change all that. To this end, many of us have labored in adjacent fields of psychology, engineering, human factors, and computer science. Our efforts in these areas were occasioned by a single question: how distantly can fast-jet pilots discern the aspect angle of an opposing aircraft, in visual simulation? This question needs some elaboration: it concerns fast jets, because those simulations involve the representation of high speeds over wide swaths of landscape. It concerns pilots, since they begin their careers with above-average acuity of vision, as a population. And it concerns aspect angle, which is to say the three-dimensional orientation of an opposing aircraft relative to one's own, as revealed by motion and solid form. The single question is by no means simple. It demands a criterion for eye-limiting resolution in simulation. That notion is a central one to our study, though much abused in general discussion. The question at hand, as it was posed in the 1990s, has been accompanied by others.
Covering some of the most cutting-edge research on the delivery and retrieval of interactive multimedia content, this volume of specially chosen contributions provides the most updated perspective on one of the hottest contemporary topics. The material represents extended versions of papers presented at the 11th International Workshop on Image Analysis for Multimedia Interactive Services, a vital international forum on this fast-moving field. Logically organized in discrete sections that approach the subject from its various angles, the content deals in turn with content analysis, motion and activity analysis, high-level descriptors and video retrieval, 3-D and multi-view, and multimedia delivery. The chapters cover the finest detail of emerging techniques such as the use of high-level audio information in improving scene segmentation and the use of subjective logic for forensic visual surveillance. On content delivery, the book examines both images and video, focusing on key subjects including an efficient pre-fetching strategy for JPEG 2000 image sequences. Further contributions look at new methodologies for simultaneous block reconstruction and provide a trellis-based algorithm for faster motion-vector decision making.
More mathematicians have been taking part in the development of digital image processing as a science, and their contributions are reflected in the increasingly important role modeling has played in solving complex problems. This book is mostly concerned with energy-based models. Through concrete image analysis problems, the author develops consistent modeling, a know-how generally hidden in the proposed solutions. The book is divided into three main parts. The first two parts describe the materials necessary to the models expressed in the third part. These materials include splines (variational approach, regression splines, splines in high dimension) and random fields (Markovian fields, parametric estimation, stochastic and deterministic optimization, continuous Gaussian fields). Most of these models come from industrial projects in which the author was involved, in robot vision and radiography: tracking 3D lines, radiographic image processing, 3D reconstruction and tomography, matching, and deformation learning. Numerous graphical illustrations accompany the text, showing the performance of the proposed models. This book will be useful to researchers and graduate students in applied mathematics, computer vision, and physics.
"A results-oriented book. Quality line drawings, lucid photography, and informative graphs are used generously... The theoretical rigor of each chapter amply supports the real-world design examples that follow." -- Sensors Magazine "One of the few sources to offer such comprehensive coverage." -- IEEE Electrical Insulation
Traditionally, three-dimensional image analysis (a.k.a. computer vision) and three-dimensional image synthesis (a.k.a. computer graphics) were separate fields. Rarely were experts working in one area interested in and aware of the advances in the other. Over the last decade this has changed dramatically, reflecting the growing maturity of each of these areas. The vision and graphics communities are today engaged in a mutually beneficial exchange, learning from each other and coming up with new ideas and techniques that build on the state of the art in both fields. This book is the result of a fruitful collaboration between scientists at the University of Nürnberg, Germany, who, coming from diverse fields, are working together propelled by the vision of a unified area of three-dimensional image analysis and synthesis. Principles of 3D Image Analysis and Synthesis starts out at the image acquisition end of a hypothetical processing chain, proceeds with analysis, recognition and interpretation of images, towards the representation of scenes by 3D geometry, then back to images via rendering and visualization techniques. Coverage includes discussion of range cameras, multiview image processing, the structure-from-motion problem, object recognition, knowledge-based image analysis, active vision, geometric modeling with meshes and splines, and reverse engineering. Also included is cutting-edge coverage of texturing techniques, global illumination, image-based rendering, volume visualization, flow visualization techniques, and acoustical imaging including object localization from audio and video. This state-of-the-art volume is a concise and readable reference for scientists, engineers, graduate students and educators working in image processing, vision, computer graphics, or visualization.
This volume provides universal methodologies, accompanied by Matlab software, for manipulating numerous signal and image processing applications. This is done with discrete and periodic polynomial splines. Various contributions of splines to signal and image processing are presented from a unified perspective. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier transform. SHA reduces the design of different spline types, such as spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulation to simple operations. Digital filters, produced by the wavelet design process, give birth to subdivision schemes. Subdivision schemes enable fast explicit computation of splines' values at dyadic and triadic rational points. This is used for upsampling signals and images. In addition to the design of a diverse library of splines, SW, SWP and SWF, this book describes their applications to practical problems. The applications include upsampling, image denoising, recovery from blurred images, and hydro-acoustic target detection, to name a few. The SWF are utilized to restore images degraded by noise, blurring and the loss of a significant number of pixels. The book is accompanied by Matlab-based software that demonstrates and implements all the presented algorithms. The book combines extensive theoretical exposure with detailed description of algorithms, applications and software. The Matlab software can be downloaded from http://extras.springer.com
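The role of subdivision schemes in evaluating splines at dyadic points can be illustrated with Chaikin's classic corner-cutting rule, a generic textbook example rather than code from the book's Matlab package: each refinement step inserts points at 1/4 and 3/4 of every edge, and the polygon converges to the quadratic B-spline determined by the control points.

```python
# Chaikin's corner-cutting subdivision: one refinement step replaces
# every edge (p0, p1) with the two points at 1/4 and 3/4 along it.
# Repeated refinement converges to the quadratic B-spline curve, giving
# fast explicit spline values at dyadic parameter points.

def chaikin(points, steps=1):
    for _ in range(steps):
        refined = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points

control = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]   # illustrative control polygon
curve = chaikin(control, steps=4)
print(len(curve))   # each step roughly doubles the point count
```

Triadic schemes work the same way with three insertion points per edge; the book's SHA machinery generalizes this filtering view of subdivision to the full spline, wavelet and frame libraries it constructs.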
"Optical Interconnects in Future Data Center Networks" covers optical networks and how they can be used to provide high bandwidth, energy efficient interconnects for future data centers with increased communication bandwidth requirements. This contributed volume presents an integrated view of the future requirements of the data centers and serves as a reference work for some of the most advanced solutions that have been proposed by major universities and companies. Collecting the most recent and innovative optical interconnects for data center networks that have been presented in the research community by universities and industries, this book is a valuable reference to researchers, students, professors and engineers interested in the domain of high performance interconnects and data center networks. Additionally, "Optical Interconnects in Future Data Center Networks" provides invaluable insights into the benefits and advantages of optical interconnects and how they can be a promising alternative for future data center networks.
This book focuses on the use of voice as a biometric measure for personal authentication. In particular, "Speaker Recognition" covers two approaches in speaker authentication: speaker verification (SV) and verbal information verification (VIV). The SV approach attempts to verify a speaker's identity based on his/her voice characteristics, while the VIV approach validates a speaker's identity through verification of the content of his/her utterance(s). SV and VIV can be combined for new applications. This is still a new research topic with significant potential applications. The book provides a broad overview of the recent advances in speaker authentication while giving enough attention to advanced and useful algorithms and techniques. It also provides a step-by-step introduction to the current state of speaker authentication technology, from the fundamental concepts to advanced algorithms. We will also present major design methodologies and share our experience in developing real and successful speaker authentication systems. Advanced and useful topics and algorithms are selected with real design examples and evaluation results. Special attention is given to topics related to improving overall system robustness and performance, such as robust endpoint detection, fast discriminative training theory and algorithms, detection-based decoding, and sequential authentication. For example, sequential authentication was developed based on statistical sequential testing theory: by adding enough subtests, a speaker authentication system can achieve any accuracy requirement. The procedure for designing the sequential authentication will be presented. For every presented technique, we provide experimental results to validate its usefulness.
We will also highlight the important developments in academia, government, and industry, and outline a few open issues. As the methodologies developed in speaker authentication span several diverse fields, this tutorial book provides an introductory forum for a broad spectrum of researchers and developers from different areas to acquire the knowledge and skills to engage in the interdisciplinary fields of user authentication, biometrics, speech and speaker recognition, multimedia, and dynamic pattern recognition.