Welcome to Loot.co.za!
Written for students, remote sensing specialists, researchers and SAR system designers, Processing of SAR Data shows how to produce quality SAR images. In particular, this practical reference presents new methods and algorithms for the interferometric processing of SAR data, with emphasis on system and signal theory: how SAR imagery is formed, how interferometric SAR images are created, and a detailed mathematical description of the different focussing algorithms. Starting with the processing basics and progressing to the final geo-coded SAR data product, the book describes the complete processing chain in detail. Algorithms based on the effects of the side-looking geometry are developed to correct foreshortening, shadowing and layover.
In this book, signals or images described by functions of one to five arguments are considered. These arguments can be time, spatial dimensions, or wavelength in a polychromatic signal. The book discusses the basics of mathematical models of signals, their transformations in technical pre-processing systems, and criteria of system quality. The models are used for the solution of practical tasks of system analysis, measurement and optimization, and signal restoration. Several examples are given.
In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore, inference approaches for fast and robust detection are presented. These new approaches combine the ideas of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition, the book shows their use for the detection of human poses in a project for gait analysis. The use of activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model the environment of the vehicle for an efficient and robust interpretation of the scene in real time.
Partial Contents: Reliability Concepts; Device Reliability; Hazard Rates; Monitoring Reliability; Specific Device Information, and more. Appendixes. 60 illustrations.
This book focuses on the use of open source software for geospatial analysis. It demonstrates the effectiveness of the command line interface for handling vector, raster and 3D geospatial data. Appropriate open-source tools for data processing are clearly explained, and the book discusses how they can be used to solve everyday tasks. A series of fully worked case studies are presented, including vector spatial analysis, remote sensing data analysis, landcover classification and LiDAR processing. A hands-on introduction to the application programming interface (API) of GDAL/OGR in Python/C++ is provided for readers who want to extend existing tools and/or develop their own software.
This is the first book about the rapidly evolving field of operational rate distortion (ORD) based video compression. ORD is concerned with the allocation of available bits among the different sources of information in an established coding framework. Today's video compression standards leave great freedom in the selection of key parameters, such as quantizers and motion vectors. The main distinction among different vendors is in the selection of these parameters, and this book presents a mathematical foundation for this selection process. The book contains a review chapter on video compression, a background chapter on optimal bit allocation and the necessary mathematical tools, such as the Lagrangian multiplier method and dynamic programming. These two introductory chapters make the book self-contained and provide a fast way of entering this exciting field. Rate-Distortion Based Video Compression establishes a general theory for the optimal bit allocation among dependent quantizers. The minimum total (average) distortion and the minimum maximum distortion cases are discussed. This theory is then used to design efficient motion estimation schemes, video compression schemes and object boundary encoding schemes. For the motion estimation schemes, the theory is used to optimally trade the reduction of energy in the displaced frame difference (DFD) for the increase in the rate required to encode the displacement vector field (DVF). These optimal motion estimators are then used to formulate video compression schemes which achieve an optimal distribution of the available bit rate among DVF, DFD and segmentation. This optimal bit allocation results in very efficient video coders. In the last part of the book, the proposed theory is applied to the optimal encoding of object boundaries, where the bit rate needed to encode a given boundary is traded for the resulting geometrical distortion. Again, the resulting boundary encoding schemes are very efficient.
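The Lagrangian multiplier method mentioned above turns constrained bit allocation into a per-unit minimization of the cost J = D + λR. A toy sketch with invented rate/distortion tables (the numbers are illustrative, not from the book):

```python
# Lagrangian rate-distortion optimization: for each coding unit, pick the
# (rate, distortion) operating point minimizing J = D + lam * R.
# The operating-point tables below are hypothetical.

def select_modes(options_per_unit, lam):
    """Return per-unit (rate, distortion) choices minimizing D + lam*R."""
    return [min(options, key=lambda rd: rd[1] + lam * rd[0])
            for options in options_per_unit]

# Each unit offers three operating points: (rate in bits, distortion in MSE).
units = [
    [(100, 9.0), (60, 16.0), (20, 40.0)],
    [(120, 4.0), (70, 10.0), (30, 25.0)],
]

low_lam  = select_modes(units, 0.05)   # small lambda: distortion dominates
high_lam = select_modes(units, 1.0)    # large lambda: rate dominates
print(sum(r for r, d in low_lam), sum(r for r, d in high_lam))
```

Sweeping λ traces out the operational rate-distortion curve: larger multipliers push every unit toward cheaper, coarser operating points.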
Rate-Distortion Based Video Compression is ideally suited for anyone interested in this booming field of research and development, especially engineers who are concerned with the implementation and design of efficient video compression schemes. It also represents a foundation for future research, since all the key elements needed are collected and presented uniformly. Therefore, it is ideally suited for graduate students and researchers working in this field.
Land management issues, such as mapping tree species, recognizing invasive plants, and identifying key geologic features, require an understanding of complex technical issues before the best decisions can be made. Hyperspectral remote sensing is one of the technologies that can help with reliable detection and identification. Presenting the fundamentals of remote sensing at an introductory level, Hyperspectral Remote Sensing: Principles and Applications explores all major aspects of hyperspectral image acquisition, exploitation, interpretation, and applications. The book begins with several chapters on the basic concepts and underlying principles of remote sensing images. It introduces spectral radiometry concepts, such as radiance, irradiance, flux, and blackbody radiation; covers imaging spectrometers, examining spectral range, full width half maximum (FWHM), resolution, sampling, signal-to-noise ratio (SNR), and multispectral and hyperspectral sensor systems; and addresses atmospheric interactions. The book then discusses information extraction, with chapters covering the underlying physics principles that lead to the creation of an image and the interpretation of the image's information. The final chapters describe case studies that illustrate the use of hyperspectral remote sensing in agriculture, environmental monitoring, forestry, and geology. After reading this book, you will have a better understanding of how to evaluate different approaches to hyperspectral analyses and to determine which approaches will work for your applications.
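Blackbody radiation, one of the spectral radiometry concepts listed above, is governed by Planck's law. A small sketch (the wavelength grid and 5800 K solar temperature are illustrative choices, not values from the book):

```python
import math

# Planck's law for the spectral radiance of a blackbody, in SI units.
H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
K = 1.380649e-23     # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance (W / sr / m^3) at one wavelength and temperature."""
    num = 2.0 * H * C**2 / wavelength_m**5
    return num / (math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0)

# The peak of a ~5800 K (solar) curve lies near 500 nm, as Wien's
# displacement law predicts (lambda_max = 2.898e-3 / T).
wavelengths = [w * 1e-9 for w in range(200, 2001)]  # 200-2000 nm grid
peak = max(wavelengths, key=lambda w: planck_radiance(w, 5800.0))
print(round(peak * 1e9))  # nanometres
```

Radiance curves like this one are the physical quantity an imaging spectrometer samples band by band across its spectral range.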
Conventional topographic databases, obtained by capture from aerial or satellite images, provide a simplified 3D model of our urban environment, answering the needs of numerous applications (development, risk prevention, mobility management, etc.). However, when we have to represent and analyze more complex sites (monuments, civil engineering works, archeological sites, etc.), these models no longer suffice, and other acquisition and processing means have to be implemented. This book focuses on the study of surveying techniques adapted to notable buildings. The methods tackled in this book cover lasergrammetry and the current techniques of dense image correlation based on conventional photogrammetry.
While most other image processing texts approach this subject from an engineering perspective, The Art of Image Processing with Java places image processing within the realm of both engineering and computer science students by emphasizing software design. Ideal for students studying computer science or software engineering, it clearly teaches them the fundamentals of image processing. Accompanied by rich illustrations that demonstrate the results of performing processing on well-known art pieces, the text builds an accessible mathematical foundation and includes extensive sample Java code. Each chapter provides exercises to help students master the material.
The problem of robotic and virtual interaction with physical objects has been the subject of research for many years in both the robotic manipulation and haptics communities. Both communities have focused much attention on human touch-based perception and manipulation, modelling contact between real or virtual hands and objects, or mechanism design. However, as a whole, these problems have not yet been addressed from a unified perspective. This edited book is the outcome of a well-attended workshop which brought together leading scholars from various branches of the robotics, virtual-reality, and human studies communities during the 2004 IEEE International Conference on Robotics and Automation. It covers some of the most challenging problems on the forefront of today's research on physical interaction with real and virtual objects, with special emphasis on modelling contacts between objects, grasp planning algorithms, haptic perception, and advanced design of hands, devices and interfaces.
Authored by engineers for engineers, this book is designed to be a practical and easy-to-understand solution sourcebook for real-world high-resolution and spotlight SAR image processing. Widely used algorithms for compensating both system errors and propagation phenomena are presented, along with numerous formerly classified image examples. As well as providing the details of digital processor implementation, the text presents the polar format algorithm and two modern algorithms for spotlight image formation processing - the range migration algorithm and the chirp scaling algorithm. Bearing practical needs in mind, the authors have included an entire chapter devoted to SAR system performance, including image quality metrics and image quality assessment. Another chapter contains image formation processor design examples for two operational fine-resolution SAR systems. This is a reference for radar engineers, managers, system developers, and students in high-resolution microwave imaging courses. It includes 662 equations, 265 figures, and 55 tables.
This book constitutes the Proceedings of the 26th Symposium on Acoustical Imaging held in Windsor, Ontario, Canada during September 9-12, 2001. This traditional scientific event is recognized as a premier forum for the presentation of advanced research results in both theoretical and experimental development. The IAIS was conceived at a 1967 Acoustical Holography meeting in the USA. Since then, these traditional symposia have provided an opportunity for specialists who are working in this area to make new acquaintances, renew old friendships and present recent results of their research. Our Symposium has grown significantly in size due to a broad interest in various topics and to the quality of the presentations. For the first time in 40 years, the IAIS was held in the province of Ontario, in Windsor, Canada's Automotive Capital and City of Roses. The 26th IAIS attracted over 100 specialists from 13 countries representing this interdisciplinary field in physical acoustics, image processing, applied mathematics, solid-state physics, biology and medicine, industrial applications and quality control technologies. The 26th IAIS was organized in the traditional way with only one addition - a Special Session, "History of Acoustical Imaging", with the involvement of such well-known scientists as Andrew Briggs, Noriyoshi Chubachi, Robert Green Jr., Joie Jones, Kenneth Erikson, and Bernhard Tittmann. Many of these speakers are well-known scientists in their fields and we would like to thank them for making this session extremely successful.
Document imaging is a new discipline in applied computer science. It is building bridges between computer graphics, the world of prepress and press, and the areas of color vision and color reproduction. The focus of this book is of special relevance to people learning how to utilize and integrate such available technology as digital printing or short run color, how to make use of CIM techniques for print products, and how to evaluate related technologies that will become relevant in the next few years. This book is the first to give a comprehensive overview of document imaging, the areas involved, and how they relate. For readers with a background in computer graphics it gives insight into all problems related to putting information in print, a field only very thinly covered in textbooks on computer graphics.
The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.
Methods of signal analysis represent a broad research topic with applications in many disciplines, including engineering, technology, biomedicine, seismography, econometrics, and many others based upon the processing of observed variables. Even though these applications are widely different, the mathematical background behind them is similar and includes the use of the discrete Fourier transform and z-transform for signal analysis, and both linear and non-linear methods for signal identification, modelling, prediction, segmentation, and classification. These methods are in many cases closely related to optimization problems, statistical methods, and artificial neural networks. This book incorporates a collection of research papers based upon selected contributions presented at the First European Conference on Signal Analysis and Prediction (ECSAP-97) in Prague, Czech Republic, held June 24-27, 1997 at the Strahov Monastery. Even though the Conference was intended as a European Conference, at first initiated by the European Association for Signal Processing (EURASIP), it was very gratifying that it also drew significant support from other important scientific societies, including the IEE, the Signal Processing Society of the IEEE, and the Acoustical Society of America. The organizing committee was pleased that the response from the academic community to participate at this Conference was very large; 128 summaries written by 242 authors from 36 countries were received. In addition, the Conference qualified under the Continuing Professional Development Scheme to provide PD units for participants and contributors.
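The discrete Fourier transform named above can be sketched directly from its definition; this O(N²) version is for illustration only, and any real application would use an FFT:

```python
import math
import cmath

# Direct discrete Fourier transform: X[k] = sum_t x[t] * exp(-2*pi*i*k*t/N).
def dft(x):
    """O(N^2) DFT of a real or complex sequence, for illustration only."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A pure cosine at bin 3 concentrates its energy at bins 3 and N-3,
# each with magnitude N/2.
n = 16
signal = [math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
spectrum = [abs(c) for c in dft(signal)]
peak = max(range(1, n // 2), key=lambda k: spectrum[k])
print(peak)  # → 3
```

The z-transform evaluated on the unit circle reduces to exactly this sum, which is why the two tools appear side by side in the blurb.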
Visualization technology is becoming increasingly important for medical and biomedical data processing and analysis. The interaction between visualization and medicine is one of the fastest expanding fields, both scientifically and commercially. This book discusses some of the latest visualization techniques and systems for effective analysis of such diverse, large, complex, and multi-source data.
Digital Imaging Handbook targets anyone with an interest in digital imaging, professional or private, who uses even quite modest equipment such as a PC, digital camera and scanner, a graphics editor such as PAINT, and an inkjet printer. Uniquely, it is intended to fill the gap between the highly technical texts for academics (with access to expensive equipment) and the superficial introductions for amateurs. The four-part treatment spans theory, technology, programs and practice. Theory covers integer arithmetic, additive and subtractive color, greyscales, computational geometry, and a new presentation of discrete Fourier analysis; Technology considers bitmap file structures, scanners, digital cameras, graphics editors, and inkjet printers; Programs develops several processing tools for use in conjunction with a standard Paint graphics editor; Practice discusses 1-bit, greyscale, 4-bit, 8-bit, and 24-bit images. Relevant QBASIC code is supplied on an accompanying CD, and algorithms are listed in the appendix. Readers can attain the level of understanding and the practical insight needed to obtain optimal use and satisfaction from even the most basic digital-imaging equipment.
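Two of the Theory topics, additive color and greyscales, can be illustrated in a few lines; the luma weights below are the common Rec. 601 choice, an assumption on our part rather than anything stated by the book (whose own code is QBASIC, not Python):

```python
# Greyscale conversion and bit-depth reduction, two staples of the
# theory/practice material described above.

def to_grey(r, g, b):
    """Rec. 601 luma approximation for an 8-bit RGB pixel (0..255)."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def to_4bit(grey8):
    """Requantize an 8-bit greyscale value to the 16 levels of a 4-bit image."""
    return grey8 >> 4

pixel = (200, 120, 40)          # a hypothetical orange pixel
g = to_grey(*pixel)
print(g, to_4bit(g))            # 8-bit luma and its 4-bit counterpart
```

The same two operations connect the 24-bit, 8-bit and 4-bit image classes the Practice part works through.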
Approach your problems from the right end and begin with the answers. Then one day, perhaps you will find the final question. ('The Hermit Clad in Crane Feathers' in R. van Gulik's The Chinese Maze Murders.) It isn't that they can't see the solution. It is that they can't see the problem. (G. K. Chesterton, The Scandal of Father Brown, 'The Point of a Pin'.) Growing specialization and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However, the "tree" of knowledge of mathematics and related fields does not grow only by putting forth new branches. It also happens, quite often in fact, that branches which were thought to be completely disparate are suddenly seen to be related. Further, the kind and level of sophistication of mathematics applied in various sciences has changed drastically in recent years: measure theory is used (non-trivially) in regional and theoretical economics; algebraic geometry interacts with physics; the Minkowski lemma, coding theory and the structure of water meet one another in packing and covering theory; quantum fields, crystal defects and mathematical programming profit from homotopy theory; Lie algebras are relevant to filtering; and prediction and electrical engineering can use Stein spaces. And in addition to this there are such new emerging subdisciplines as "experimental mathematics", "CFD", "completely integrable systems", and "chaos, synergetics and large-scale order", which are almost impossible to fit into the existing classification schemes. They draw upon widely different sections of mathematics.
After a slow and somewhat tentative beginning, machine vision systems are now finding widespread use in industry. So far, there have been four clearly discernible phases in their development, based upon the types of images processed and how that processing is performed: (1) binary (two-level) images, processed in software; (2) grey-scale images, processed in software; (3) binary or grey-scale images processed in fast, special-purpose hardware; (4) coloured/multi-spectral images. Third-generation vision systems are now commonplace, although a large number of binary and software-based grey-scale processing systems are still being sold. At the moment, colour image processing is commercially much less significant than the other three, and this situation may well remain for some time, since many industrial artifacts are nearly monochrome and the use of colour increases the cost of the equipment significantly. A great deal of colour image processing is a straightforward extension of standard grey-scale methods. Industrial applications of machine vision systems can also be subdivided, this time into two main areas, which have largely retained distinct identities: (i) Automated Visual Inspection (AVI) and (ii) Robot Vision (RV). This book is about a fifth generation of industrial vision systems, in which this distinction, based on applications, is blurred and the processing is marked by being much smarter (i.e. more "intelligent") than in the other four generations.
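The first-generation systems above work on binary (two-level) images, which are typically obtained from grey-scale images by thresholding; a minimal sketch with made-up pixel values:

```python
# Thresholding: the step that turns a grey-scale image into the binary
# (two-level) images of first-generation machine vision systems.

def binarize(image, threshold):
    """Return a two-level image: 1 where pixel >= threshold, else 0."""
    return [[1 if p >= threshold else 0 for p in row] for row in image]

# A tiny hypothetical grey-scale image (values 0..255).
grey = [
    [ 12, 200, 180],
    [ 90, 255,  30],
]
print(binarize(grey, 128))  # → [[0, 1, 1], [0, 1, 0]]
```

Choosing the threshold well (e.g. from the image histogram) is what makes this simple step usable under uneven factory lighting.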
The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community growing year after year. Indeed, if at the beginning of Computer Graphics the use of Artificial Intelligence techniques was quite unknown, more and more researchers all over the world are nowadays interested in intelligent techniques allowing substantial improvements of traditional Computer Graphics methods. The other main contribution of intelligent techniques in Computer Graphics is to allow the invention of completely new methods, often based on the automation of many tasks previously performed by the user in an imprecise and (human) time-consuming manner. The history of research in Computer Graphics is very edifying. At the beginning, due to the slowness of computers in the 1960s, the unique research concern was visualisation. The purpose of Computer Graphics researchers was to find new visualisation algorithms, less and less time-consuming, in order to reduce the enormous time required for visualisation. A lot of interesting algorithms were invented during these first years of research in Computer Graphics. The scenes to be displayed were very simple because the computing power of computers was very low. So, scene modelling was not necessary and scenes were designed directly by the user, who had to give the co-ordinates of the vertices of the scene polygons.
Focusing on how visual information is represented, stored and extracted in the human brain, this book uses cognitive neural modeling to show how such information is represented and memorized in the brain. Breaking through traditional visual information processing methods, the author combines our understanding of perception and memory in the human brain with computer vision technology, and provides a new approach for image recognition and classification. Alongside establishing biological visual cognition models and human brain memory models, the book also covers applications such as pest recognition and carrot detection. Given the range of topics covered, this book is a valuable resource for students, researchers and practitioners interested in the rapidly evolving fields of neurocomputing, computer vision and machine learning.
Games are poised for a major evolution, driven by growth in technical sophistication and audience reach. Characters that create powerful social and emotional connections with players throughout the game-play itself (not just in cut scenes) will be essential to next-generation games. However, the principles of sophisticated character design and interaction are not widely understood within the game development community. Further complicating the situation are powerful gender and cultural issues that can influence perception of characters. Katherine Isbister has spent the last 10 years examining what makes interactions with computer characters useful and engaging to different audiences. This work has revealed that the key to good design is leveraging player psychology: understanding what's memorable, exciting, and useful to a person about real-life social interactions, and applying those insights to character design. Game designers who create great characters often make use of these psychological principles without realizing it. Better Game Characters by Design gives game design professionals and other interactive media designers a framework for understanding how social roles and perceptions affect players' reactions to characters, helping produce stronger designs and better results.
Computer vision is becoming increasingly important in several industrial applications such as automated inspection, robotic manipulations and autonomous vehicle guidance. These tasks are performed in a 3-D world and it is imperative to gather reliable information on the 3-D structure of the scene. This book is about passive techniques for depth recovery, where the scene is illuminated only by natural light as opposed to active methods where a special lighting device is used for scene illumination. Passive methods have a wider range of applicability and also correspond to the way humans infer 3-D structure from visual images.
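One classic passive technique of the kind this blurb alludes to is stereo triangulation, where depth follows from the disparity between two views as Z = f·B/d; the camera parameters below are hypothetical:

```python
# Stereo triangulation: a passive depth-recovery method using only two
# views under natural illumination.  Focal length f (pixels), baseline B
# (metres) and disparity d (pixels) give depth Z = f * B / d.
# All numeric values here are invented for illustration.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in metres of a point matched across a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point seen 20 px apart by cameras 0.1 m apart, focal length 800 px:
print(depth_from_disparity(800.0, 0.1, 20.0))  # → 4.0 metres
```

The hard part in practice is not this formula but the correspondence problem: finding which pixel in one image matches which pixel in the other.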
Ambulation Analysis in Wearable ECG, by Subhasis Chaudhuri, Tanmay Pawar and Siddhartha Duttagupta, demonstrates why, thanks to recent developments, the wearable ECG recorder represents a significant innovation in the healthcare field.
"Two of the most important trends in sensor development in recent years have been advances in micromachined sensing elements of all kinds, and the increase in intelligence applied at the sensor level. This book addresses both, and provides a good overview of current technology." -- I&CS |