Computer systems that analyze images are critical to a wide variety of applications such as visual inspection systems for various manufacturing processes, remote sensing of the environment from space-borne imaging platforms, and automatic diagnosis from X-rays and other medical imaging sources. Professor Azriel Rosenfeld, the founder of the field of digital image analysis, made fundamental contributions to a wide variety of problems in image processing, pattern recognition and computer vision. Professor Rosenfeld's former students, postdoctoral scientists, and colleagues illustrate in Foundations of Image Understanding how current research has been influenced by his work as the leading researcher in the area of image analysis for over two decades. Each chapter of Foundations of Image Understanding is written by one of the world's leading experts in his area of specialization, examining digital geometry and topology (early research which laid the foundations for many industrial machine vision systems), edge detection and segmentation (fundamental to systems that analyze complex images of our three-dimensional world), multi-resolution and variable-resolution representations for images and maps, parallel algorithms and systems for image analysis, and the importance of human psychophysical studies of vision to the design of computer vision systems. Professor Rosenfeld's chapter briefly discusses topics not covered in the contributed chapters, providing a personal, historical perspective on the development of the field of image understanding. Foundations of Image Understanding is an excellent source of basic material for both graduate students entering the field and established researchers who require a compact source for many of the foundational topics in image analysis.
Advancements in digital sensor technology, digital image analysis techniques, as well as computer software and hardware have brought together the fields of computer vision and photogrammetry, which are now converging towards sharing, to a great extent, objectives and algorithms. The potential for mutual benefits by the close collaboration and interaction of these two disciplines is great, as photogrammetric know-how can be aided by the most recent image analysis developments in computer vision, while modern quantitative photogrammetric approaches can support computer vision activities. Devising methodologies for automating the extraction of man-made objects (e.g. buildings, roads) from digital aerial or satellite imagery is an application where this cooperation and mutual support is already reaping benefits. The valuable spatial information collected using these interdisciplinary techniques is of improved qualitative and quantitative accuracy. This book offers a comprehensive selection of high-quality and in-depth contributions from world-wide leading research institutions, treating theoretical as well as implementational issues, and representing the state-of-the-art on this subject among the photogrammetric and computer vision communities.
Gaussian scale-space is one of the best understood multi-resolution techniques available to the computer vision and image analysis community. It is the purpose of this book to guide the reader through some of its main aspects. During an intensive weekend in May 1996 a workshop on Gaussian scale-space theory was held in Copenhagen, which was attended by many of the leading experts in the field. The bulk of this book originates from this workshop. Presently there exist only two books on the subject. In contrast to Lindeberg's monograph (Lindeberg, 1994e), this book collects contributions from several scale-space researchers, whereas it complements the book edited by ter Haar Romeny (Haar Romeny, 1994) on non-linear techniques by focusing on linear diffusion. This book is divided into four parts. The reader not so familiar with scale-space will find it instructive to first consider some potential applications described in Part I. Parts II and III both address fundamental aspects of scale-space. Whereas scale is treated as an essentially arbitrary constant in the former, the latter emphasizes the deep structure, i.e. the structure that is revealed by varying scale. Finally, Part IV is devoted to non-linear extensions, notably non-linear diffusion techniques and morphological scale-spaces, and their relation to the linear case. The Danish National Science Research Council is gratefully acknowledged for providing financial support for the workshop under grant no. 9502164.
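The linear (Gaussian) scale-space the blurb refers to is simply the one-parameter family of images obtained by convolving the input with Gaussians of increasing width, i.e. by running linear diffusion for increasing time. A minimal sketch in Python, assuming NumPy and SciPy (the function and variable names here are illustrative, not from the book):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, sigmas):
    """Return a stack of Gaussian-blurred copies of `image`,
    one per scale in `sigmas` (solutions of linear diffusion)."""
    return np.stack([gaussian_filter(image, sigma=s) for s in sigmas])

# A noisy test image: fine-scale structure is progressively removed.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
stack = gaussian_scale_space(img, sigmas=[1.0, 2.0, 4.0])

# Variance (a proxy for fine-scale detail) shrinks as scale grows.
variances = stack.var(axis=(1, 2))
```

The "deep structure" studied in Part III of the book concerns how features (edges, blobs, extrema) appear, move, and merge as one traverses this stack in the scale direction.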
In this groundbreaking new volume, computer researchers discuss the development of technologies and specific systems that can interpret data with respect to domain knowledge. Although the chapters each illuminate different aspects of image interpretation, all utilize a common approach - one that asserts such interpretation must involve perceptual learning in terms of automated knowledge acquisition and application, as well as feedback and consistency checks between encoding, feature extraction, and the known knowledge structures in a given application domain. The text is profusely illustrated with numerous figures and tables to reinforce the concepts discussed.
Advances in sensing, signal processing, and computer technology during the past half century have stimulated numerous attempts to design general-purpose machines that see. These attempts have met with at best modest success and more typically outright failure. The difficulties encountered in building working computer vision systems based on state-of-the-art techniques came as a surprise. Perhaps the most frustrating aspect of the problem is that machine vision systems cannot deal with numerous visual tasks that humans perform rapidly and effortlessly. In reaction to this perceived discrepancy in performance, various researchers (notably Marr, 1982) suggested that the design of machine-vision systems should be based on principles drawn from the study of biological systems. This "neuromorphic" or "anthropomorphic" approach has proven fruitful: the use of pyramid (multiresolution) image representation methods in image compression is one example of a successful application based on principles primarily derived from the study of biological vision systems. It is still the case, however, that the performance of computer vision systems falls far short of that of the natural systems they are intended to mimic, suggesting that it is time to look even more closely at the remaining differences between artificial and biological vision systems.
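The pyramid (multiresolution) representation mentioned above stores an image at successively coarser resolutions. A toy sketch, assuming NumPy: a real Gaussian pyramid low-pass filters before subsampling, but plain 2x2 block averaging (a deliberate simplification) shows the structure of the idea:

```python
import numpy as np

def block_average_pyramid(image, levels):
    """Build a multiresolution pyramid by repeated 2x2 block averaging,
    a simplified stand-in for blur-and-subsample reduction."""
    pyramid = [image]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h2, w2 = prev.shape[0] // 2, prev.shape[1] // 2
        # Crop to even dimensions, then average each 2x2 block.
        coarse = prev[:2 * h2, :2 * w2].reshape(h2, 2, w2, 2).mean(axis=(1, 3))
        pyramid.append(coarse)
    return pyramid

img = np.arange(64.0).reshape(8, 8)
pyr = block_average_pyramid(img, levels=3)
# Each level halves the resolution: (8, 8) -> (4, 4) -> (2, 2)
```

Because each coarse pixel is the mean of a 2x2 block, the global mean intensity is preserved across levels (for even-sized images), which is one reason pyramids are attractive for compression.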
Surveillance systems have become increasingly popular. Relying entirely on human operators has well-known shortcomings, e.g. high labor cost, limited capacity to monitor multiple screens, and inconsistent attention over long durations. Intelligent surveillance systems (ISSs) can supplement or even replace traditional ones. In ISSs, computer vision, pattern recognition, and artificial intelligence technologies are used to identify abnormal behaviours in videos. This book presents the development of real-time, behaviour-based intelligent surveillance systems. It focuses on the detection of individual abnormal behaviour based on learning, and on the analysis of dangerous crowd behaviour based on texture and optical flow. Practical systems include a real-time face classification and counting system, a surveillance robot system that utilizes video and audio information for intelligent interaction, and a robust person counting system for crowded environments.
With a focus on the interplay between mathematics and applications of imaging, the first part covers topics from optimization, inverse problems and shape spaces to computer vision and computational anatomy. The second part is geared towards geometric control and related topics, including Riemannian geometry, celestial mechanics and quantum control.
Contents:
Part I:
- Second-order decomposition model for image processing: numerical experimentation
- Optimizing spatial and tonal data for PDE-based inpainting
- Image registration using phase amplitude separation
- Rotation invariance in exemplar-based image inpainting
- Convective regularization for optical flow
- A variational method for quantitative photoacoustic tomography with piecewise constant coefficients
- On optical flow models for variational motion estimation
- Bilevel approaches for learning of variational imaging models
Part II:
- Non-degenerate forms of the generalized Euler-Lagrange condition for state-constrained optimal control problems
- The Purcell three-link swimmer: some geometric and numerical aspects related to periodic optimal controls
- Controllability of Keplerian motion with low-thrust control systems
- Higher variational equation techniques for the integrability of homogeneous potentials
- Introduction to KAM theory with a view to celestial mechanics
- Invariants of contact sub-pseudo-Riemannian structures and Einstein-Weyl geometry
- Time-optimal control for a perturbed Brockett integrator
- Twist maps and Arnold diffusion for diffeomorphisms
- A Hamiltonian approach to sufficiency in optimal control with minimal regularity conditions: Part I
- Index
Acquiring spatial data for geoinformation systems is still mainly done by human operators who analyze images using classical photogrammetric equipment or digitize maps, possibly assisted by some low-level image processing. Automation of these tasks is difficult due to the complexity of the object, the topography, and the deficiency of current pattern recognition and image analysis tools for achieving a reliable transition from the data to the high-level description of topographic objects. It appears that progress in automation can only be achieved by incorporating domain-specific semantic models into the analysis procedures. This volume collects papers which were presented at the Workshop "SMATI '97." The workshop focused on "Semantic Modeling for the Acquisition of Topographic Information from Images and Maps." This volume offers a comprehensive selection of high-quality and in-depth contributions by experts of the field coming from leading research institutes, treating both theoretical and implementation issues and integrating aspects of photogrammetry, cartography, computer vision, and image understanding.
"3D Surface Reconstruction: Multi-Scale Hierarchical Approaches" presents methods to model 3D objects incrementally, capturing finer details at each step. The configuration of the model parameters, the rationale, and the solutions are described and discussed in detail, giving the reader a strong understanding of the methodology. Modeling starts from data captured by 3D digitizers, which makes the process clear and engaging. Innovative approaches based on two popular machine learning paradigms, namely Radial Basis Functions and Support Vector Machines, are also introduced. These paradigms are extended to a multi-scale incremental structure based on a hierarchical scheme. The resulting approaches achieve high accuracy with limited computational complexity, making them appropriate for online, real-time operation. Applications can be found in any domain in which regression is required. "3D Surface Reconstruction: Multi-Scale Hierarchical Approaches" is designed as a secondary textbook or reference for advanced-level students and researchers in computer science. This book also targets practitioners working in computer vision or machine learning related fields.
This book proposes soft computing techniques for segmenting real-life images in applications such as image processing, image mining, video surveillance, and intelligent transportation systems. The book suggests hybrids deriving from three main approaches: fuzzy systems, primarily used for handling real-life problems that involve uncertainty; artificial neural networks, usually applied for machine cognition, learning, and recognition; and evolutionary computation, mainly used for search, exploration, efficient exploitation of contextual information, and optimization. The contributed chapters discuss both the strengths and the weaknesses of the approaches, and the book will be valuable for researchers and graduate students in the domains of image processing and computational intelligence.
Landmarks are preferred image features for a variety of computer vision tasks such as image mensuration, registration, camera calibration, motion analysis, 3D scene reconstruction, and object recognition. The main advantages of using landmarks are robustness w.r.t. lighting conditions and other radiometric variations, as well as the ability to cope with large displacements in registration or motion analysis tasks. Also, landmark-based approaches are in general computationally efficient, particularly when using point landmarks. Note that the term landmark comprises both artificial and natural landmarks. Examples are corners or other characteristic points in video images, ground control points in aerial images, anatomical landmarks in medical images, prominent facial points used for biometric verification, markers at human joints used for motion capture in virtual reality applications, or in- and outdoor landmarks used for autonomous navigation of robots. This book covers the extraction of landmarks from images as well as the use of these features for elastic image registration. Our emphasis is on model-based approaches, i.e. on the use of explicitly represented knowledge in image analysis. We principally distinguish between geometric models describing the shape of objects (typically their contours) and intensity models, which directly represent the image intensities, i.e., the appearance of objects. Based on these classes of models we develop algorithms and methods for analyzing multimodality images such as traditional 2D video images or 3D medical tomographic images.
COMPUTER VISION is a field of research that encompasses many objectives. A primary goal has been to construct visual sensors that can provide general-purpose robots with the same information about their surroundings as we receive from our own visual senses. This book takes an important step towards this goal by describing a working computer vision system named SCERPO. This system can recognize known three-dimensional objects in ordinary black-and-white images taken from unknown viewpoints, even when parts of the object are undetectable or hidden from view. A second major goal of computer vision research is to provide a computational understanding of human vision. The research presented in this book has many implications for our understanding of human vision, particularly in the areas of perceptual organization and knowledge-based recognition. An attempt has been made to relate each computational result to the relevant areas in the psychology of vision. Since the material is meant to be accessible to a wide range of interdisciplinary readers, the book is written in plain language and attempts to explain most concepts from the starting position of the non-specialist. One of the most important conclusions arising from this research is that visual recognition can commonly be achieved directly from the two-dimensional image without any preliminary reconstruction of depth information or surface orientation from the visual input.
This is an examination of the history and the state of the art of the quest for visualizing scientific knowledge and the dynamics of its development. Through an interdisciplinary perspective this book presents profound visions, pivotal advances, and insightful contributions made by generations of researchers and professionals, portraying a holistic view of the underlying principles and mechanisms of the development of science. This updated and extended second edition highlights the latest advances in mapping scientific frontiers; examines the foundations of strategies, principles, and design patterns; and provides an integrated and holistic account of major developments across disciplinary boundaries. "Anyone who tries to follow the exponential growth of the literature on citation analysis and scientometrics knows how difficult it is to keep pace. Chaomei Chen has identified the significant methods and applications in visual graphics and made them clear to the uninitiated. Derek Price would have loved this book which not only pays homage to him but also to the key players in information science and a wide variety of others in the sociology and history of science." - Eugene Garfield "This is a wide ranging book on information visualization, with a specific focus on science mapping. Science mapping is still in its infancy and many intellectual challenges remain to be investigated, many of which are outlined in the final chapter. In this new edition Chaomei Chen has provided an essential text, useful both as a primer for new entrants and as a comprehensive overview of recent developments for the seasoned practitioner." - Henry Small Chaomei Chen is a Professor in the College of Information Science and Technology at Drexel University, Philadelphia, USA, and a ChangJiang Scholar at Dalian University of Technology, Dalian, China.
He is the Editor-in-Chief of Information Visualization and the author of Turning Points: The Nature of Creativity (Springer, 2012) and Information Visualization: Beyond the Horizon (Springer, 2004, 2006).
Face Image Analysis by Unsupervised Learning explores adaptive approaches to image analysis. It draws upon principles of unsupervised learning and information theory to adapt processing to the immediate task environment. In contrast to more traditional approaches to image analysis in which relevant structure is determined in advance and extracted using hand-engineered techniques, Face Image Analysis by Unsupervised Learning explores methods that have roots in biological vision and/or learn about the image structure directly from the image ensemble. Particular attention is paid to unsupervised learning techniques for encoding the statistical dependencies in the image ensemble. The first part of this volume reviews unsupervised learning, information theory, independent component analysis, and their relation to biological vision. Next, a face image representation using independent component analysis (ICA) is developed, which is an unsupervised learning technique based on optimal information transfer between neurons. The ICA representation is compared to a number of other face representations including eigenfaces and Gabor wavelets on tasks of identity recognition and expression analysis. Finally, methods for learning features that are robust to changes in viewpoint and lighting are presented. These studies provide evidence that encoding input dependencies through unsupervised learning is an effective strategy for face recognition. Face Image Analysis by Unsupervised Learning is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
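The ICA-based representation described above learns a set of statistically independent basis images directly from the image ensemble, rather than hand-engineering features. A minimal sketch using scikit-learn's FastICA, where the synthetic "ensemble" of flattened patches is a stand-in for real face images (all sizes and variable names are illustrative assumptions, not the book's experimental setup):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic image ensemble: 200 flattened 8x8 patches, each a linear
# mixture of a few independent non-Gaussian sources (stand-ins for
# the latent causes that ICA tries to recover from face images).
rng = np.random.default_rng(0)
sources = rng.laplace(size=(200, 4))        # independent, non-Gaussian
mixing = rng.standard_normal((4, 64))       # 4 unknown basis "images"
patches = sources @ mixing                  # observed ensemble

ica = FastICA(n_components=4, random_state=0)
codes = ica.fit_transform(patches)          # independent code per patch
basis = ica.mixing_                         # learned 64-dim basis images
```

In the face-recognition setting, `codes` would serve as the compact representation compared against a gallery (e.g. by nearest neighbour), analogous to how eigenface coefficients are used, but with statistical independence rather than mere decorrelation.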
This highly anticipated new edition provides a comprehensive account of face recognition research and technology, spanning the full range of topics needed for designing operational face recognition systems. After a thorough introductory chapter, each of the following chapters focuses on a specific topic, reviewing background information, up-to-date techniques, and recent results, as well as offering challenges and future directions. Features: fully updated, revised and expanded, covering the entire spectrum of concepts, methods, and algorithms for automated face detection and recognition systems; provides comprehensive coverage of face detection, tracking, alignment, feature extraction, and recognition technologies, and issues in evaluation, systems, security, and applications; contains numerous step-by-step algorithms; describes a broad range of applications; presents contributions from an international selection of experts; integrates numerous supporting graphs, tables, charts, and performance data.
This volume provides universal methodologies, accompanied by Matlab software, for numerous signal and image processing applications, based on discrete and polynomial periodic splines. Various contributions of splines to signal and image processing are presented from a unified perspective. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier Transform. SHA reduces the design of different spline types such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulations, to simple operations. Digital filters, produced by the wavelet design process, give birth to subdivision schemes. Subdivision schemes enable fast explicit computation of spline values at dyadic and triadic rational points. This is used for upsampling signals and images. In addition to the design of a diverse library of splines, SW, SWP and SWF, this book describes their applications to practical problems. The applications include upsampling, image denoising, recovery from blurred images, and hydro-acoustic target detection, to name a few. The SWF are utilized for the restoration of images degraded by noise, blurring and the loss of a significant number of pixels. The book is accompanied by Matlab-based software that demonstrates and implements all the presented algorithms. The book combines extensive theoretical exposure with detailed descriptions of algorithms, applications and software. The Matlab software can be downloaded from http://extras.springer.com
This book on autonomous road-following vehicles brings together twenty years of innovation in the field. It details an approach to real-time machine vision for understanding dynamic scenes viewed from a moving platform: the approach begins with spatio-temporal representations of motion for hypothesized objects, whose parameters are adjusted by well-known prediction-error feedback and recursive estimation techniques.
This book introduces a new theory in Computer Vision yielding elementary techniques to analyze digital images. These techniques are a mathematical formalization of the Gestalt theory. From the mathematical viewpoint the closest field to it is stochastic geometry, involving basic probability and statistics, in the context of image analysis. The book is mathematically self-contained, needing only basic understanding of probability and calculus. The text includes more than 130 illustrations, and numerous examples based on specific images on which the theory is tested. Detailed exercises at the end of each chapter help the reader develop a firm understanding of the concepts imparted.
In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore, inference approaches for fast and robust detection are presented. These new approaches combine the ideas of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition, the book shows the use of these models for the detection of human poses in a project for gait analysis. The use of activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model the environment of the vehicle for an efficient and robust interpretation of the scene in real time.
The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edge Detection, Image Segmentation, Modelling and Simulation, Medical Thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery. Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Databases, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, and Land Map Generation. The book brings together the current state-of-the-art in the various multi-disciplinary solutions for Medical Image Processing and Computational Vision, including research, techniques, applications and new trends contributing to the development of the related areas.
Spatial trajectories have brought unprecedented wealth to a variety of research communities. A spatial trajectory records the path of a moving object, such as a person who logs their travel routes with GPS. Research on moving objects has become extremely active within the last few years, as reflected in all major database and data mining conferences and journals. "Computing with Spatial Trajectories" introduces the algorithms, technologies, and systems used to process, manage and understand existing spatial trajectories for different applications. This book also presents an overview of both the fundamentals and the state-of-the-art research inspired by spatial trajectory data, with a special focus on trajectory pattern mining, spatio-temporal data mining and location-based social networks. Each chapter provides readers with a tutorial-style introduction to one important aspect of location trajectory computing, case studies and many valuable references to other relevant research work. "Computing with Spatial Trajectories" is designed as a reference or secondary textbook for advanced-level students and researchers mainly focused on computer science and geography. Professionals working on spatial trajectory computing will also find this book very useful.
Mobile robots operating in real-world, outdoor scenarios depend on dynamic scene understanding for detecting and avoiding obstacles, recognizing landmarks, acquiring models, and for detecting and tracking moving objects. Motion understanding has been an active research effort for more than a decade, searching for solutions to some of these problems; however, it still remains one of the more difficult and challenging areas of computer vision research. Qualitative Motion Understanding describes a qualitative approach to dynamic scene and motion analysis, called DRIVE (Dynamic Reasoning from Integrated Visual Evidence). The DRIVE system addresses the problems of (a) estimating the robot's egomotion, (b) reconstructing the observed 3-D scene structure, and (c) evaluating the motion of individual objects from a sequence of monocular images. The approach is based on the FOE (focus of expansion) concept, but it takes a somewhat unconventional route. The DRIVE system uses a qualitative scene model and a fuzzy focus of expansion to estimate robot motion from visual cues, to detect and track moving objects, and to construct and maintain a global dynamic reference model.
Machine learning is a novel discipline concerned with the analysis of large, multivariable datasets. It involves computationally intensive methods such as factor analysis, cluster analysis, and discriminant analysis. It is currently mainly the domain of computer scientists, and is already commonly used in the social sciences, marketing research, operational research and the applied sciences. It is virtually unused in clinical research. This is probably due to the traditional belief of clinicians in clinical trials, where multiple variables are equally balanced by the randomization process and are not further taken into account. In contrast, modern computer data files often involve hundreds of variables, such as genes and other laboratory values, for which computationally intensive methods are required. This book was written as an accessible presentation for clinicians, and as a must-read publication for those new to these methods.
In recent years we have seen considerable advances in the development of humanoid robots, that is, robots with an anthropomorphic design. Such robots should be capable of autonomously performing tasks for their human users in changing environments by adapting to these and to the circumstances at hand. To do so, they, as well as any kind of autonomous robot, need to have some way of understanding the world around them. We humans do so by our senses, both our far senses of vision and hearing (smelling too) and our near senses of touch and taste. Vision plays a special role in the way it simultaneously tells us "where" and "what" in a direct way. It is therefore an accepted fact that to develop autonomous robots, humanoid or not, it is essential to include competent systems for visual perception. Such systems should embody techniques from the field of computer vision, in which sophisticated computational methods for extracting information from visual imagery have been developed over a number of decades. However, complete systems incorporating such advanced techniques, while meeting the requirements of real-time processing and adaptivity to the complexity that even our everyday environment displays, are scarce. The present volume takes an important step toward filling this gap by presenting methods and a system for visual perception for a humanoid robot, with specific applications to manipulation tasks and to how the robot can learn by imitating the human.