An exciting new development has taken place in the digital era that has captured the imagination and talent of researchers around the globe - wavelet image compression. This technology has deep roots in theories of vision, and promises performance improvements over all other compression methods, such as those based on Fourier transforms, vector quantizers, fractals, neural nets, and many others. It is this revolutionary new technology that is presented in Wavelet Image and Video Compression, in a form that is accessible to the largest possible audience. Wavelet Image and Video Compression is divided into four parts. Part I, Background Material, introduces the basic mathematical structures that underlie image compression algorithms, with the intention of providing an easy introduction to the mathematical concepts that are prerequisites for the remainder of the book. It explains such topics as change of basis, scalar and vector quantization, bit allocation and rate-distortion theory, entropy coding, the discrete cosine transform, wavelet filters and other related topics. Part II, Still Image Coding, presents a spectrum of wavelet still image coding techniques. Part III, Special Topics in Still Image Coding, provides a variety of example coding schemes with a special flavor in either approach or application domain. Part IV, Video Coding, examines wavelet and pyramidal coding techniques for video data. Wavelet Image and Video Compression serves as an excellent reference and may be used as a text for advanced courses covering the subject.
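The discrete cosine transform mentioned among the background topics is easy to make concrete. The following is an illustrative sketch (not code from the book): a direct DCT-II of a smooth 8-sample signal, showing the energy-compaction property that block-transform coders exploit when they quantize high-frequency coefficients coarsely.

```python
import math

def dct2(x):
    """Direct (unnormalized) DCT-II: X[k] = sum_n x[n] cos(pi/N * (n + 0.5) * k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A smooth 8-sample ramp: most of the energy lands in the first few
# coefficients, which is what makes coarse quantization of the rest cheap.
x = [float(n) for n in range(8)]
X = dct2(x)
```

For this ramp the k = 0 coefficient equals the plain sum of the samples, and the magnitudes fall off rapidly with k.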
The Nonuniform Discrete Fourier Transform and its Applications in Signal Processing is organized into seven chapters. Chapter 1 introduces the problem of computing frequency samples of the z-transform of a finite-length sequence, and reviews the existing techniques. Chapter 2 develops the basics of the NDFT including its definition, properties and computational aspects. The NDFT is also extended to two dimensions. The ideas introduced here are utilized to develop applications of the NDFT in the following four chapters. Chapter 3 proposes a nonuniform frequency sampling technique for designing 1-D FIR digital filters. Design examples are presented for various types of filters. Chapter 4 utilizes the idea of the 2-D NDFT to design nonseparable 2-D FIR filters of various types. The resulting filters are compared with those designed by other existing methods and the performances of some of these filters are investigated by applying them to the decimation of digital images. Chapter 5 develops a design technique for synthesizing antenna patterns with nulls placed at desired angles to cancel interfering signals coming from these directions. Chapter 6 addresses the application of the NDFT in decoding dual-tone multi-frequency (DTMF) signals and presents an efficient decoding algorithm based on the subband NDFT (SB-NDFT), which achieves a fast, approximate computation of the NDFT. Concluding remarks are included in Chapter 7. The Nonuniform Discrete Fourier Transform and its Applications in Signal Processing serves as an excellent reference for researchers.
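The core computation is easy to state: evaluate the z-transform of a finite-length sequence on the unit circle at arbitrarily spaced frequencies. A direct-evaluation sketch follows (illustrative only; the book's SB-NDFT is a faster, approximate algorithm):

```python
import cmath
import math

def ndft(x, omegas):
    """Evaluate X(e^{jw}) = sum_n x[n] e^{-jwn} at arbitrary, possibly
    nonuniformly spaced, frequencies w (in radians/sample)."""
    return [sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))
            for w in omegas]

# At uniformly spaced frequencies 2*pi*k/N the NDFT reduces to the ordinary DFT.
X = ndft([1.0, 1.0, 1.0, 1.0], [0.0, math.pi / 2, math.pi])
```

For the all-ones sequence, the sample at w = 0 is the DC sum (4), and the samples at pi/2 and pi vanish, exactly as the uniform DFT would give.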
The mathematical modelling of changing structures in materials is of increasing importance to industry, where applications of the theory are found in subjects as diverse as aerospace and medicine. This book deals with aspects of the nonlinear dynamics of deformable ordered solids (known as
This book constitutes the first single-volume, English-language treatise on electromagnetic wave propagation across the frequency spectrum.
One of the most intriguing questions in image processing is the problem of recovering the desired or perfect image from a degraded version. In many instances one has the feeling that the degradations in the image are such that relevant information is close to being recognizable, if only the image could be sharpened just a little. This monograph discusses the two essential steps by which this can be achieved, namely the topics of image identification and restoration. More specifically, the goal of image identification is to estimate the properties of the imperfect imaging system (blur) from the observed degraded image, together with some (statistical) characteristics of the noise and the original (uncorrupted) image. On the basis of these properties the image restoration process computes an estimate of the original image. Although there are many textbooks addressing the image identification and restoration problem in a general image processing setting, there are hardly any texts which give an in-depth treatment of the state of the art in this field. This monograph discusses iterative procedures for identifying and restoring images which have been degraded by a linear spatially invariant blur and additive white observation noise. As opposed to non-iterative methods, iterative schemes are able to solve the image restoration problem when formulated as a constrained and spatially variant optimization problem. In this way restoration results can be obtained which outperform the results of conventional restoration filters.
The book will appeal to professional engineers as well as students and researchers in the subject. From an introduction to the basic terminology and underlying techniques, the book moves on to demonstrate the core enabling technologies, with a broad and balanced perspective given for each topic. Subsequent chapters focus on the applications and give an insight into the process of integrating a range of speech technologies into commercial solutions to customer needs. The book concludes with a speculative review of options for the future.
The Fourier transform is one of the most important mathematical tools in a wide variety of science and engineering fields. Its application - as Fourier analysis or harmonic analysis - provides useful decompositions of signals into fundamental ('primitive') components, giving shortcuts in the computation of complicated sums and integrals, and often revealing hidden structure in the data. Fourier Transforms: An Introduction for Engineers develops the basic definitions, properties and applications of Fourier analysis, the emphasis being on techniques for its application to linear systems, although other applications are also considered. The book will serve as both a reference text and a teaching text for a one-quarter or one-semester course covering the application of Fourier analysis to a wide variety of signals, including discrete time (or parameter), continuous time (or parameter), finite duration, and infinite duration. It highlights the common aspects in all cases considered, thereby building an intuition from simple examples that will be useful in the more complicated examples where careful proofs are not included. Fourier Transforms: An Introduction for Engineers is written by two scholars who are recognized throughout the world as leaders in this area, and provides a fresh look at one of the most important mathematical and directly applicable concepts in nearly all fields of science and engineering. Audience: Engineers, especially electrical engineers. The careful treatment of the fundamental mathematical ideas makes the book suitable in all areas where Fourier analysis finds applications.
This book provides an up-to-date introduction to the theory of sound propagation in the ocean. The text treats both ray and wave propagation and pays considerable attention to stochastic problems such as the scattering of sound at rough surfaces and random inhomogeneities. An introductory chapter that discusses the basic experimental data complements the following theoretical chapters. New material has been added throughout for this third edition. New topics covered include: inter-thermocline lenses and their effect on sound fields; weakly divergent bundles of rays; ocean acoustic tomography; coupled modes; sound scattering by anisotropic volume inhomogeneities with fractal spectra; and Voronovich's approach to sound scattering from the rough sea surface. In addition, the list of references has been brought up to date and the latest experimental data have been included.
List of figures:
1.1. Steps in the initial auditory processing
2. The Time-Frequency Energy Representation
2.1. Short-time spectrum of a steady-state /i/
2.2. Smoothed short-time spectra
2.3. Short-time spectra of linear chirps
2.4. Short-time spectra of /w/'s
2.5. Wide-band spectrograms of /w/'s
2.6. Spectrograms of rapid formant motion
2.7. Wigner distribution and spectrogram
2.8. Wigner distribution and spectrogram of cos ω0t
2.9. Concentration ellipses for transform kernels
2.10. Concentration ellipses for complementary kernels
2.11. Directional transforms for a linear chirp
2.12. Spectrograms of /wioi/ with different window sizes
2.13. Wigner distribution of /wioi/
2.14. Time-frequency autocorrelation function of /wioi/
2.15. Gaussian transform of /wioi/
2.16. Directional transforms of /wioi/
3. Time-Frequency Filtering
3.1. Recovering the transfer function by filtering
3.2. Estimating 'aliased' transfer function
3.3. T-F autocorrelation function of an impulse train
3.4. T-F autocorrelation function of LTI filter output
3.5. Windowing recovers transfer function
3.6. Shearing the time-frequency autocorrelation function
3.7. T-F autocorrelation function for FM filter
3.8. T-F autocorrelation function of FM filter output
3.9. Windowing recovers transfer function
4. The Schematic Spectrogram
4.1. Problems with pole-fitting approach
This book concerns a new method of image data compression which well may supplant the well-established block-transform methods that have been state of the art for the last 15 years. Subband image coding or SBC was first performed as such in 1985, and as the results became known, at first through conference proceedings and later through journal papers, the research community became excited about both the theoretical and practical aspects of this new approach. This excitement is continuing today, with many major research laboratories and research universities around the world investigating the subband approach to coding of color images, high resolution images, video - including video conferencing and advanced television - and the medical application of picture archiving systems. Much of the fruit of this work is summarized in the eight chapters of this book, which were written by leading practitioners in this field. The subband approach to image coding starts by passing the image through a two- or three-dimensional filter bank. The two-dimensional (2-D) case usually is hierarchical, consisting of two stages of four filters each. Thus the original image is split into 16 subband images, with each one decimated or subsampled by 4x4, resulting in a data conservation. The individual channel data is then quantized for digital transmission. In an attractive variation, an octave-like approach, herein termed subband pyramid, is taken for the decomposition, resulting in a total of just eleven subbands.
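The analysis/synthesis idea described above can be sketched in one dimension with the simplest possible two-band filter bank, a Haar pair with 2:1 decimation; the book's hierarchical 2-D banks extend the same idea to 16 subbands. This is a toy illustration, not code from the book.

```python
def analysis(x):
    """Split x (even length) into decimated low and high bands.
    Total sample count is conserved: len(lo) + len(hi) == len(x)."""
    lo = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return lo, hi

def synthesis(lo, hi):
    """Perfect reconstruction: interleave sums and differences of the bands."""
    out = []
    for l, h in zip(lo, hi):
        out += [l + h, l - h]
    return out

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
lo, hi = analysis(x)
```

Compression comes from the statistics, not the split itself: for natural images most of the energy concentrates in the low band, so the high bands can be quantized much more coarsely.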
Rate-Quality Optimized Video Coding discusses the matter of optimizing (or negotiating) the data rate of compressed digital video and its quality, which has been a relatively neglected topic in both image/video coding and tele-traffic management. Video rate management becomes a technically challenging task since it is required to maintain a certain video quality regardless of the availability of transmission or storage media. This is caused by the broadband nature of digital video and inherent algorithmic features of mainstream video compression schemes, e.g. H.261, H.263 and the MPEG series. In order to maximize media utilization and to enhance video quality, the data rate of compressed video should be regulated within a budget of available media resources while maintaining the video quality as high as possible. In Part I (Chapters 1 to 4) the non-stationarity of digital video is discussed. Since the non-stationary nature is also inherited from algorithmic properties of international video coding standards, which are a combination of statistical coding techniques, the video rate management techniques of these standards are explored. Although there is a series of known video rate control techniques, such as picture rate variation, frame dropping, etc., these techniques do not view the matter as an optimization between rate and quality. From the view of rate-quality optimization, the quantizer is the sole means of controlling rate and quality. Thus, quantizers and quantizer control techniques are analyzed, based on the relationship of rate and quality. In Part II (Chapters 5 and 6), as a coherent approach to non-stationary video, established but still thriving nonlinear techniques are applied to video rate-quality optimization, such as artificial neural networks (including radial basis function networks) and fuzzy logic-based schemes. Conventional linear techniques are also described before the nonlinear techniques are explored.
By using these nonlinear techniques, it is shown how they tackle the rate-quality optimization problem. Finally, in Chapter 7 rate-quality optimization issues are reviewed in emerging video communication applications such as video transcoding and mobile video. This chapter discusses some new issues and prospects of rate and quality control in those technology areas. Rate-Quality Optimized Video Coding is an excellent reference and can be used for advanced courses on the topic.
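The rate-quality trade-off that the quantizer controls can be seen in a few lines. The sketch below is illustrative (not from the book): a uniform quantizer applied to a test signal, with empirical entropy used as a crude stand-in for coded rate and mean squared error as the quality measure.

```python
import math
from collections import Counter

def quantize(x, step):
    """Uniform quantizer: a coarser step means fewer levels but more distortion."""
    return [step * round(v / step) for v in x]

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

def entropy_bits(symbols):
    """Empirical entropy in bits/sample: a crude proxy for the coded rate."""
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

x = [10 * math.sin(0.1 * i) for i in range(200)]   # deterministic test signal
fine, coarse = quantize(x, 0.5), quantize(x, 4.0)
```

Sweeping the step size traces out an operational rate-distortion curve; a rate controller picks the operating point on that curve that fits the channel budget.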
Mobile computing is one of the biggest issues of computer technology, science and industry today. This book looks at the requirements of developing mobile computing systems and the challenges they pose to computer designers. It examines the requirements of mobile computing hardware, infrastructure and communications services. Information security and the data protection aspects of design are considered, together with telecommunications facilities for linking up to the worldwide computer infrastructure. The book also considers the mobility of computer users versus the portability of the equipment. The text also examines current applications of mobile computing in the public sector and future innovative applications.
Speech Recognition has a long history of being one of the difficult problems in Artificial Intelligence and Computer Science. As one goes from problem solving tasks such as puzzles and chess to perceptual tasks such as speech and vision, the problem characteristics change dramatically: knowledge poor to knowledge rich; low data rates to high data rates; slow response time (minutes to hours) to instantaneous response time. These characteristics taken together increase the computational complexity of the problem by several orders of magnitude. Further, speech provides a challenging task domain which embodies many of the requirements of intelligent behavior: operate in real time; exploit vast amounts of knowledge; tolerate errorful, unexpected, and unknown input; use symbols and abstractions; communicate in natural language; and learn from the environment. Voice input to computers offers a number of advantages. It provides a natural, fast, hands-free, eyes-free, location-free input medium. However, there are many as yet unsolved problems that prevent routine use of speech as an input device by non-experts. These include cost, real-time response, speaker independence, robustness to variations such as noise, microphone, speech rate and loudness, and the ability to handle non-grammatical speech. Satisfactory solutions to each of these problems can be expected within the next decade. Recognition of unrestricted spontaneous continuous speech appears unsolvable at present. However, by the addition of simple constraints, such as clarification dialog to resolve ambiguity, we believe it will be possible to develop systems capable of accepting very large vocabulary continuous speech dictation.
In multimedia and communication environments all documents must be protected against attacks. The movie Forrest Gump showed how multimedia documents can be manipulated. The required security can be achieved by a number of different security measures. This book provides an overview of the current research in Multimedia and Communication Security. A broad variety of subjects are addressed including: network security; attacks; cryptographic techniques; healthcare and telemedicine; security infrastructures; payment systems; access control; models and policies; auditing and firewalls. This volume contains the selected proceedings of the joint conference on Communications and Multimedia Security, organized by the International Federation for Information Processing and supported by the Austrian Computer Society, Gesellschaft für Informatik e.V. and TeleTrust Deutschland e.V. The conference took place in Essen, Germany, in September 1996.
Speech coding has been an ongoing area of research for several decades, yet the level of activity and interest in this area has expanded dramatically in the last several years. Important advances in algorithmic techniques for speech coding have recently emerged and excellent progress has been achieved in producing high quality speech at bit rates as low as 4.8 kb/s. Although the complexity of the newer more sophisticated algorithms greatly exceeds that of older methods (such as ADPCM), today's powerful programmable signal processor chips allow rapid technology transfer from research to product development and permit many new cost-effective applications of speech coding. In particular, low bit rate voice technology is converging with the needs of the rapidly evolving digital telecommunication networks. The IEEE Workshop on Speech Coding for Telecommunications was held in Vancouver, British Columbia, Canada, from September 5 to 8, 1989. The objective of the workshop was to provide a forum for discussion of recent developments and future directions in speech coding. The workshop attracted over 130 researchers from several countries and its technical program included 51 papers.
Client/Server applications are of increasing importance in industry, and have been improved by advanced distributed object-oriented techniques, dedicated tool support and both multimedia and mobile computing extensions. Recent responses to this trend are standardized distributed platforms and models including the Distributed Computing Environment (DCE) of the Open Software Foundation (OSF), Open Distributed Processing (ODP), and the Common Object Request Broker Architecture (CORBA) of the Object Management Group (OMG). These proceedings are the compilation of papers from the technical stream of the IFIP/IEEE International Conference on Distributed Platforms, Dresden, Germany. This conference has been sponsored by IFIP TC6.1, by the IEEE Communications Society, and by the German Association of Computer Science (GI - Gesellschaft für Informatik). ICDP'96 was organized jointly by Dresden University of Technology and Aachen University of Technology. It is closely related to the International Workshop on OSF DCE in Karlsruhe, 1993, and to the IFIP International Conference on Open Distributed Processing. ICDP has been designed to bring together researchers and practitioners who are studying and developing new methodologies, tools and technologies for advanced client/server environments, distributed systems, and network applications based on distributed platforms.
Most fluid flows of practical importance are fully three-dimensional, so the non-linear instability properties of three-dimensional flows are of particular interest. In some cases the three-dimensionality may have been caused by a finite amplitude disturbance whilst, more usually, the unperturbed state is three-dimensional. Practical applications where transition is thought to be associated with non-linearity in a three-dimensional flow arise, for example, in aerodynamics (swept wings, engine nacelles, etc.), turbines and aortic blood flow. Here inviscid 'cross-flow' disturbances as well as Tollmien-Schlichting and Görtler vortices can all occur simultaneously and their mutual non-linear behaviour must be understood if transition is to be predicted. The non-linear interactions are so complex that usually fully numerical or combined asymptotic/numerical methods must be used. Moreover, in view of the complexity of the instability processes, there is also a growing need for detailed and accurate experimental information. Carefully conducted tests allow us to identify those elements of a particular problem which are dominant. This assists in both the formulation of a relevant theoretical problem and the subsequent physical validation of predictions. It should be noted that the demands made upon the skills of the experimentalist are high and that the tests can be extremely sophisticated - often making use of the latest developments in flow diagnostic techniques, automated high speed data gathering, data analysis, fast processing and presentation.
What is "digital telephony"? To the authors, the term digital telephony denotes the technology used to provide a completely digital telecommunication system from end to end. This implies the use of digital technology from one end instrument through transmission facilities and switching centers to another end instrument. Digital telephony has become possible only because of the recent and ongoing surge of semiconductor developments, allowing microminiaturization and high reliability along with reduced costs. This book deals with both the future and the present. Thus, the first chapter is entitled "A Network in Transition." As baselines, Chapters 2 and 11 provide the reader with the present status of telephone technology in terms of voice digitization as well as switching principles. The book is an outgrowth of the authors' consulting and teaching experience in the field since the early 1980s. The book has been written to provide both the engineering student and the practicing engineer a working knowledge of the principles of present and future telecommunication systems based upon the use of the public switched network. Problems or discussion questions have been included at the ends of the chapters to facilitate the book's use as a senior-level or first-year graduate-level course text. Numerous clients and associates of the authors as well as hundreds of others have provided useful information and examples for the text, and the authors wish to thank all those who have so contributed either directly or indirectly.
This book provides a comprehensive presentation of the conceptual basis of wavelet analysis, including the construction and analysis of wavelet bases. It motivates the central ideas of wavelet theory by offering a detailed exposition of the Haar series, then shows how a more abstract approach allows readers to generalize and improve upon the Haar series. It then presents a number of variations and extensions of Haar construction.
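The Haar idea motivating the book can be made concrete in a few lines: the scale-j Haar approximation simply replaces a function by its average on each dyadic interval, and the approximation refines as j grows. The sketch below is illustrative (the function and scales are assumed values, not the book's notation).

```python
def haar_approx(f, j, t, m=64):
    """Scale-j Haar approximation of f on [0, 1): the average of f over the
    dyadic interval [k/2^j, (k+1)/2^j) containing t (midpoint Riemann sum)."""
    n = 2 ** j
    k = min(int(t * n), n - 1)
    a, width = k / n, 1 / n
    return sum(f(a + width * (i + 0.5) / m) for i in range(m)) / m

f = lambda t: t
coarse = haar_approx(f, 1, 0.3)   # average over [0, 0.5)   -> 0.25
finer = haar_approx(f, 4, 0.3)    # average over [0.25, 0.3125) -> 0.28125
```

The differences between successive scales are exactly the Haar wavelet coefficients, which is the refinement structure the more abstract multiresolution treatment generalizes.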
Coding and Modulation for Digital Television presents a comprehensive description of all error control coding and digital modulation techniques used in Digital Television (DTV). This book illustrates the relevant elements from the expansive theory of channel coding and how the transmission environment dictates the choice of error control coding and digital modulation schemes. These elements are presented in such a way that both mathematical integrity and engineering understanding are combined in a complete form, supported by a number of practical examples. In addition, the book contains descriptions of the existing standards and provides a valuable source of corresponding references. Coding and Modulation for Digital Television also features a description of the latest techniques, providing the reader with a glimpse of future digital broadcasting. These include the concepts of soft-in-soft-out decoding, turbo-coding and cross-correlated quadrature modulation, all of which will have a prominent future in improving the efficiency of next-generation DTV systems. Coding and Modulation for Digital Television is essential reading for all undergraduate and postgraduate students, broadcasting and communication engineers, researchers, marketing managers, regulatory bodies, governmental organizations and standardization institutions of the digital television industry.
Welcome to the fourth IFIP workshop on protocols for high speed networks in Vancouver. This workshop follows three very successful workshops held in Zürich (1989), Palo Alto (1990) and Stockholm (1993) respectively. We received a large number of papers in response to our call for contributions. This year, forty papers were received, of which sixteen were presented as full papers and four were presented as poster papers. Although we received many excellent papers the program committee decided to keep the number of full presentations low in order to accommodate more discussion in keeping with the format of a workshop. Many people have contributed to the success of this workshop including the members of the program committee who, with the additional reviewers, helped make the selection of the papers. We are thankful to all the authors of the papers that were submitted. We also thank several organizations which have contributed financially to this workshop, especially NSERC, ASI, CICSR, UBC, MPR Teltech and Newbridge Networks.
The need for automatic speech recognition systems to be robust with respect to changes in their acoustical environment has become more widely appreciated in recent years, as more systems are finding their way into practical applications. Although the issue of environmental robustness has received only a small fraction of the attention devoted to speaker independence, even speech recognition systems that are designed to be speaker independent frequently perform very poorly when they are tested using a different type of microphone or acoustical environment from the one with which they were trained. The use of microphones other than a "close talking" headset also tends to severely degrade speech recognition performance. Even in relatively quiet office environments, speech is degraded by additive noise from fans, slamming doors, and other conversations, as well as by the effects of unknown linear filtering arising from reverberation from surface reflections in a room, or spectral shaping by microphones or the vocal tracts of individual speakers. Speech recognition systems designed for long-distance telephone lines, or applications deployed in more adverse acoustical environments such as motor vehicles, factory floors, or outdoors, demand far greater degrees of environmental robustness. There are several different ways of building acoustical robustness into speech recognition systems. Arrays of microphones can be used to develop a directionally-sensitive system that resists interference from competing talkers and other noise sources that are spatially separated from the source of the desired speech signal.
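The microphone-array idea mentioned above can be sketched with the simplest beamformer, delay-and-sum: compensate each channel's known arrival delay and average across microphones, so the steered source adds coherently while off-axis interference tends to cancel. Integer-sample delays and noise-free channels are assumed purely for illustration.

```python
import math

def delay_and_sum(channels, delays):
    """Advance channel i by its arrival delay delays[i] and average across mics."""
    n = len(channels[0])
    out = []
    for t in range(n):
        acc = 0.0
        for ch, d in zip(channels, delays):
            idx = t + d                 # undo the propagation delay
            acc += ch[idx] if 0 <= idx < n else 0.0
        out.append(acc / len(channels))
    return out

s = [math.sin(0.3 * t) for t in range(50)]                      # desired source
delays = [0, 2, 4]                                              # assumed arrival delays
mics = [[s[t - d] if t >= d else 0.0 for t in range(50)] for d in delays]
beam = delay_and_sum(mics, delays)
```

In the interior of the buffer the beamformed output reproduces the source exactly; with an interferer arriving from a different direction, its copies would be misaligned by the same steering delays and average down instead.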
A major advantage of a direct digital synthesizer is that its output frequency, phase and amplitude can be precisely and rapidly manipulated under digital processor control. This book explores possible applications of direct digital synthesizers in radio communication systems.
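A direct digital synthesizer is essentially a phase accumulator driving a sine lookup table: each clock tick adds a tuning word to an N-bit accumulator, so the output frequency is tuning_word / 2^N cycles per clock and can be changed in a single cycle. A minimal sketch follows (accumulator width, table size and tuning word are illustrative values):

```python
import math

def dds_samples(tuning_word, n_bits, count, table_size=256):
    """Phase-accumulator DDS: the accumulator's top bits index a sine table."""
    table = [math.sin(2 * math.pi * i / table_size) for i in range(table_size)]
    shift = n_bits - (table_size.bit_length() - 1)  # keep log2(table_size) top bits
    acc, mask = 0, (1 << n_bits) - 1
    out = []
    for _ in range(count):
        out.append(table[acc >> shift])
        acc = (acc + tuning_word) & mask            # phase wraps modulo 2^n_bits
    return out

# 2**24 / 2**32 = 1/256 cycles per clock: one full sine period in 256 samples.
out = dds_samples(2 ** 24, 32, 256)
```

Phase modulation is just an offset added to the accumulator, and amplitude control is a multiply on the table output, which is why all three parameters are so easy to manipulate digitally.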
Modern airborne and spaceborne imaging radars, known as synthetic aperture radars (SARs), are capable of producing high-quality pictures of the earth's surface while avoiding some of the shortcomings of certain other forms of remote imaging systems. Primarily, radar overcomes the nighttime limitations of optical cameras, and the cloud-cover limitations of both optical and infrared imagers. In addition, because imaging radars use a form of coherent illumination, they can be used in certain special modes such as interferometry, to produce some unique derivative image products that incoherent systems cannot. One such product is a highly accurate digital terrain elevation map (DTEM). The most recent (ca. 1980) version of imaging radar, known as spotlight-mode SAR, can produce imagery with spatial resolution that begins to approach that of remote optical imagers. For all of these reasons, synthetic aperture radar imaging is rapidly becoming a key technology in the world of modern remote sensing. Much of the basic workings of synthetic aperture radars is rooted in the concepts of signal processing. Starting with that premise, this book explores in depth the fundamental principles upon which the spotlight mode of SAR imaging is constructed, using almost exclusively the language, concepts, and major building blocks of signal processing. Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach is intended for a variety of audiences. Engineers and scientists working in the field of remote sensing but who do not have experience with SAR imaging will find an easy entrance into what can seem at times a very complicated subject. Experienced radar engineers will find that the book describes several modern areas of SAR processing that they might not have explored previously, e.g. interferometric SAR for change detection and terrain elevation mapping, or modern non-parametric approaches to SAR autofocus.
Senior undergraduates (primarily in electrical engineering) who have had courses in digital signal and image processing, but who have had no exposure to SAR, could find the book useful in a one-semester course as a reference.