In the area of digital image processing, the new field of "Time-Varying Image Processing and Moving Object Recognition" is contributing to impressive advances in several fields. Presented in this volume are new digital image processing and recognition methods, implementation techniques and advanced applications such as television, remote sensing, biomedicine, traffic, inspection, and robotics. New approaches (such as digital transforms and neural networks) for solving 2-D and 3-D problems are described. Many papers concentrate on motion estimation and recognition, i.e., the tracking of moving objects. Overall, the book describes the state of the art (theory, implementation, applications) of this developing area, together with future trends. The work will be of interest not only to researchers, professors and students in university departments of engineering, communications, computers and automatic control, but also to engineers and managers of industries concerned with computer vision, manufacturing, automation, robotics and quality control.
In computer graphics, the use of intelligent techniques started more recently than in other research areas. Over the last two decades, however, the use of intelligent computer graphics techniques has grown year after year, and more and more interesting techniques are being presented in this area. The purpose of this volume is to present the current work of the Intelligent Computer Graphics community, a community that is growing year after year. This volume is a kind of continuation of the previously published Springer volumes "Artificial Intelligence Techniques for Computer Graphics" (2008), "Intelligent Computer Graphics 2009" (2009), "Intelligent Computer Graphics 2010" (2010) and "Intelligent Computer Graphics 2011" (2011). Usually, this kind of volume contains selected extended papers from the corresponding year's 3IA Conference. The current volume, however, is made up of directly reviewed and selected papers submitted for publication in "Intelligent Computer Graphics 2012". This year's papers are particularly exciting and concern areas like plant modelling, text-to-scene systems, information visualization, computer-aided geometric design, artificial life, computer games, realistic rendering and many other very important themes.
This book provides detailed practical guidelines on how to develop an efficient pathological brain detection system, reflecting the latest advances in the computer-aided diagnosis of structural magnetic resonance brain images. MATLAB code is provided for most of the functions described. In addition, the book equips readers to develop the pathological brain detection system further on their own and to apply the technologies to other research fields, such as Alzheimer's detection and multiple sclerosis detection.
The design and construction of three-dimensional (3-D) object recognition systems has long occupied the attention of many computer vision researchers. The variety of systems that have been developed for this task is evidence both of its strong appeal to researchers and of its applicability to modern manufacturing, industrial, military, and consumer environments. 3-D object recognition is of interest to scientists and engineers in several different disciplines, due both to a desire to endow computers with robust visual capabilities and to the wide range of applications that would benefit from mature and robust vision systems. However, 3-D object recognition is a very complex problem, and few systems have been developed for actual production use; most existing systems have been developed for experimental use by researchers only. This edited collection of papers summarizes the state of the art in 3-D object recognition using examples of existing 3-D systems developed by leading researchers in the field. While most chapters describe a complete object recognition system, chapters on biological vision, sensing, and early processing are also included. The volume will serve as a valuable reference source for readers who are involved in implementing model-based object recognition systems, stimulating the cross-fertilisation of ideas in the various domains.
A smart camera is an integrated machine vision system which, in addition to image capture circuitry, includes a processor that can extract information from images without need for an external processing unit, and interface devices used to make results available to other devices. This book provides content on smart cameras for an interdisciplinary audience of professionals and students in embedded systems, image processing, and camera technology. It serves as a self-contained, single-source reference for material otherwise found only in sources such as conference proceedings, journal articles, or product data sheets. Coverage includes the 50-year chronology of smart cameras, their technical evolution, the state of the art, and numerous applications, such as surveillance and monitoring, robotics, and transportation.
The area of adaptive systems, which encompasses recursive identification, adaptive control, filtering, and signal processing, has been one of the most active areas of the past decade. Since adaptive controllers are fundamentally nonlinear controllers which are applied to nominally linear, possibly stochastic and time-varying systems, their theoretical analysis is usually very difficult. Nevertheless, over the past decade much fundamental progress has been made on some key questions concerning their stability, convergence, performance, and robustness. Moreover, adaptive controllers have been successfully employed in numerous practical applications, and have even entered the marketplace.
With a preface by Ton Kalker. Informed Watermarking is an essential tool for both academic and professional researchers working in the areas of multimedia security, information embedding, and communication. Theory and practice are linked, particularly in the area of multi-user communication. From the Preface: Watermarking has become a more mature discipline with proper foundations in both signal processing and information theory. We can truly say that we are in the era of "second generation" watermarking. This book is the first to address watermarking problems in terms of second-generation insights. It provides a complete overview of the most important results on capacity and security. The Costa scheme, and in particular a simpler version of it, the Scalar Costa scheme, is studied in great detail. An important result of this book is that it is possible to approach the Shannon limit within a few decibels in a practical system. These results are verified on real-world data, not only the classical category of images, but also on chemical structure sets. Inspired by the work of Moulin and O'Sullivan, this book also addresses security aspects by studying AWGN attacks in terms of game theory. "The authors of Informed Watermarking give a well-written exposé of how watermarking came of age, where we are now, and what to expect in the future. It is my expectation that this book will be a standard reference on second-generation watermarking for the years to come." Ton Kalker, Technische Universiteit Eindhoven
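The Scalar Costa scheme can be summarized compactly. Below is a minimal NumPy sketch of binary SCS embedding and hard-decision decoding, offered as an illustration rather than the book's reference implementation: the function names are hypothetical, the dither here encodes only the message bit (the key-dependent random dither used in the literature is omitted), and the choice of the distortion-compensation factor alpha is left to the caller.

```python
import numpy as np

def scs_embed(x, bits, delta, alpha):
    """Embed one bit per host sample with the Scalar Costa Scheme (sketch).

    x     : host samples (1-D array)
    bits  : array of 0/1 message bits, same length as x
    delta : quantizer step size
    alpha : distortion-compensation factor in (0, 1]
    """
    d = delta * bits / 2.0                      # per-bit dither: 0 or delta/2
    q = delta * np.round((x - d) / delta) + d   # nearest point on the bit's lattice
    return x + alpha * (q - x)                  # move only a fraction alpha toward it

def scs_decode(y, delta):
    """Hard-decision decoding: pick the bit whose lattice point is closest to y."""
    bits = np.zeros(len(y), dtype=int)
    for m in (0, 1):
        d = delta * m / 2.0
        q = delta * np.round((y - d) / delta) + d
        err = np.abs(y - q)
        if m == 0:
            best = err
        else:
            bits[err < best] = 1
    return bits
```

With alpha = 1 this reduces to plain quantization-index modulation; alpha < 1 trades embedding distortion against robustness, which is the compensation idea the scheme is built on.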
Multimedia Signals and Systems is an essential text for professional and academic researchers and students in the field of multimedia.
The book describes a system for visual surveillance using intelligent cameras. The camera uses robust techniques for detecting and tracking moving objects, and the real-time captures of the objects are stored in a database. The tracking data stored in the database is analysed to study the camera view, detect and track objects, and study object behavior. Together, these models provide a robust framework for coordinating the tracking of objects between overlapping and non-overlapping cameras, and for recording the activity of objects detected by the system.
Bayesian Approach to Image Interpretation will interest anyone working in image interpretation. It is complete in itself and includes background material, which makes it useful for the novice as well as the expert. It reviews some of the existing probabilistic methods for image interpretation and presents some new results. Additionally, there is an extensive bibliography covering references in varied areas. For a researcher in this field, the material on the synergistic integration of segmentation and interpretation modules and on the Bayesian approach to image interpretation will be beneficial. For a practicing engineer, the procedure for generating a knowledge base, selecting the initial temperature for the simulated annealing algorithm, and some implementation issues will be valuable. New ideas introduced in the book include: a new approach to image interpretation using synergism between the segmentation and interpretation modules; a new segmentation algorithm based on multiresolution analysis; novel use of Bayesian networks (causal networks) for image interpretation; and an emphasis on making the interpretation approach less dependent on the knowledge base, and hence more reliable, by modeling the knowledge base in a probabilistic framework. Useful in both the academic and industrial research worlds, Bayesian Approach to Image Interpretation may also be used as a textbook for a semester course in computer vision or pattern recognition.
Super-Resolution Imaging serves as an essential reference for both academicians and practicing engineers. It can be used both as a text for advanced courses in imaging and as a desk reference for those working in multimedia, electrical engineering, computer science, and mathematics. The first book to cover the new research area of super-resolution imaging, this text includes work on the following groundbreaking topics: Image zooming based on wavelets and generalized interpolation; Super-resolution from sub-pixel shifts; Use of blur as a cue; Use of warping in super-resolution; Resolution enhancement using multiple apertures; Super-resolution from motion data; Super-resolution from compressed video; Limits in super-resolution imaging. Written by the leading experts in the field, Super-Resolution Imaging presents a comprehensive analysis of current technology, along with new research findings and directions for future work.
The recent emergence of Local Binary Patterns (LBP) has led to significant progress in applying texture methods to various computer vision problems and applications. The focus of this research has broadened from 2D textures to 3D textures and spatiotemporal (dynamic) textures. Also, where texture was once utilized for applications such as remote sensing, industrial inspection and biomedical image analysis, the introduction of LBP-based approaches has provided outstanding results in problems relating to face and activity analysis, with future scope for face and facial expression recognition, biometrics, visual surveillance and video analysis. "Computer Vision Using Local Binary Patterns" provides a detailed description of the LBP methods and their variants in both spatial and spatiotemporal domains. This comprehensive reference also provides an excellent overview of how texture methods can be utilized for solving different kinds of computer vision and image analysis problems. Source code for the basic LBP algorithms, demonstrations, some databases and a comprehensive LBP bibliography can be found on an accompanying website. Topics include: local binary patterns and their variants in spatial and spatiotemporal domains; texture classification and segmentation; description of interest regions; applications in image retrieval and 3D recognition; recognition and segmentation of dynamic textures; background subtraction; recognition of actions; face analysis using still images and image sequences; visual speech recognition; and LBP in various applications. Written by pioneers of LBP, this book is an essential resource for researchers, professional engineers and graduate students in computer vision, image analysis and pattern recognition. The book will also be of interest to all those who work with specific applications of machine vision.
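For readers new to the operator itself, the following is a minimal sketch of the basic 3x3 LBP code, in which each of the eight neighbours contributes one bit according to whether it is at least as bright as the centre pixel. This is an illustrative implementation, not the authors' source code from the companion website; the function name and the NumPy slicing are my own choices.

```python
import numpy as np

def lbp_8neighbors(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2-D array.

    Each neighbour >= centre contributes one bit, giving codes in 0..255.
    """
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                       # centre pixels
    # offsets of the 8 neighbours, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for k, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy : img.shape[0] - 1 + dy,
                 1 + dx : img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << k
    return code
```

A histogram of the resulting codes, e.g. np.bincount(code.ravel(), minlength=256), is the usual texture descriptor built on top of this operator.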
ICMCCA 2012 is the first International Conference on Multimedia Processing, Communication and Computing Applications, and the theme of the conference is 'Multimedia Processing and its Applications'. Multimedia processing has been an active research area contributing to many frontiers of today's science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology. The prime objective of the book is to familiarize readers with the latest scientific developments taking place in various fields of multimedia processing, which is widely used in many disciplines such as medical diagnosis, digital forensics, object recognition, image and video analysis, robotics, military and automotive industries, surveillance and security, and quality inspection. The book will help the research community gain insight into the overlapping work being carried out across the globe at many medical hospitals and institutions, defense labs, forensic labs, academic institutions, IT companies and security and surveillance domains. It also discusses the latest state-of-the-art research problems and techniques, and helps to encourage, motivate and introduce budding researchers to the larger domain of multimedia.
Nature-inspired algorithms such as cuckoo search and the firefly algorithm have become popular and widely used in many applications in recent years. These algorithms are flexible, efficient and easy to implement. New progress has been made in the last few years, and it is timely to summarize the latest developments in cuckoo search and the firefly algorithm and their diverse applications. This book reviews both theoretical studies and applications, with detailed algorithm analysis, implementation and case studies, so that readers can benefit most from it. Application topics are contributed by many leading experts in the field. Topics include cuckoo search, firefly algorithm, algorithm analysis, feature selection, image processing, travelling salesman problem, neural network, GPU optimization, scheduling, queuing, multi-objective manufacturing optimization, semantic web service, shape optimization, and others. This book can serve as an ideal reference for both graduates and researchers in computer science, evolutionary computing, machine learning, computational intelligence, and optimization, as well as engineers in business intelligence, knowledge management and information technology.
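To give a flavour of the algorithms surveyed, here is a minimal sketch of the standard firefly update rule, in which brighter (lower-cost) fireflies attract dimmer ones with an attractiveness that decays with squared distance. The parameter defaults, the box constraints and the function name are illustrative assumptions, not taken from any particular chapter.

```python
import numpy as np

def firefly_minimize(f, dim, n=25, iters=100, beta0=1.0, gamma=1.0,
                     alpha=0.2, lo=-5.0, hi=5.0, seed=0):
    """Minimal firefly algorithm for minimizing f over the box [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    light = np.array([f(xi) for xi in x])       # lower f = brighter firefly
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:         # j is brighter, so i moves toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    light[i] = f(x[i])
    best = np.argmin(light)
    return x[best], light[best]

# e.g. firefly_minimize(lambda v: np.sum(v ** 2), dim=2) approaches the origin
```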
Packed with more than 350 techniques, this book delivers what you need to know, on the spot. Its concise presentation of professional techniques is suited to experienced artists, whether you are:
* Migrating from another visual effects application
* Upgrading to Houdini 9
* Seeking a handy reference to raise your proficiency with Houdini
Houdini On the Spot presents immediate solutions in an accessible format. It clearly illustrates the essential methods that pros use to get the job done efficiently and creatively. Screenshots and step-by-step instructions show you how to:
* Navigate and manipulate the version 9 interface
* Create procedural models that can be modified quickly and efficiently with Surface Operators (SOPs)
* Use Particle Operators (POPs) to build complex simulations with speed and precision
* Minimize the number of operators in your simulations with Dynamics Operators (DOPs)
* Extend Houdini with customized tools that include data or scripts with Houdini Digital Assets (HDAs)
* Master the version 9 rendering options, including Physically Based Rendering (PBR), volume rendering and motion blur
* Quickly modify timing, geometry, space and rotational values of your animations with Channel Operators (CHOPs)
* Create and manipulate elements with Composite Operators (COPs), Houdini's full-blown compositor toolset
* Make your own SOPs, COPs, POPs, CHOPs, and shaders with the Vector Expressions (VEX) shading language
* Configure the Houdini interface with customized environments and hotkeys
* Mine the treasures of the dozens of standalone applications that are bundled with Houdini
'Subdivision' is a way of representing smooth shapes in a computer. A curve or surface (both of which contain an infinite number of points) is described in terms of two objects. One object is a sequence of vertices, which we visualise as a polygon, for curves, or a network of vertices, which we visualise by drawing the edges or faces of the network, for surfaces. The other object is a set of rules for making denser sequences or networks. When applied repeatedly, the denser and denser sequences are claimed to converge to a limit, which is the curve or surface that we want to represent. This book focusses on curves, both because the theory there is complete enough that a book claiming our understanding is complete is exactly what is needed to stimulate research proving that claim wrong, and because there are already a number of good books on subdivision surfaces. The way in which the limit curve relates to the polygon, and a lot of interesting properties of the limit curve, depend on the set of rules, and this book is about how one can deduce those properties from the set of rules, and how one can then use that understanding to construct rules which give the properties that one wants.
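To make the idea concrete, here is one widely known set of rules for curves, Chaikin's corner-cutting scheme, chosen purely as an illustration (the book treats the general theory, not this scheme alone). Each refinement step replaces every edge by two points placed a quarter and three quarters of the way along it, and the polygons converge to a quadratic B-spline curve.

```python
import numpy as np

def chaikin_step(points):
    """One round of Chaikin's corner-cutting on an open polygon.

    Each edge (p, q) is replaced by the two points 3/4 p + 1/4 q and
    1/4 p + 3/4 q; repeated application converges to a quadratic B-spline.
    """
    p, q = points[:-1], points[1:]
    new = np.empty((2 * len(p), points.shape[1]))
    new[0::2] = 0.75 * p + 0.25 * q
    new[1::2] = 0.25 * p + 0.75 * q
    return new

poly = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
for _ in range(4):      # denser and denser polygons approaching the limit curve
    poly = chaikin_step(poly)
```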
In recent years there has been an increasing interest in second generation image and video coding techniques. These techniques introduce new concepts from image analysis that greatly improve the performance of the coding schemes for very high compression. This interest has been further emphasized by the future MPEG-4 standard. Second generation image and video coding techniques are the ensemble of approaches proposing new and more efficient image representations than the conventional canonical form. As a consequence, the human visual system becomes a fundamental part of the encoding/decoding chain. More insight to distinguish between first and second generation can be gained if it is noticed that image and video coding is basically carried out in two steps. First, image data are converted into a sequence of messages and, second, code words are assigned to the messages. Methods of the first generation put the emphasis on the second step, whereas methods of the second generation put it on the first step and use available results for the second step. As a result of including the human visual system, the second generation can also be seen as an approach that regards the image as composed of different entities called objects. This implies that the image or sequence of images must first be analyzed and/or segmented in order to find the entities. It is in this context that this book selects three main approaches as second generation video coding techniques: segmentation-based schemes, model-based schemes, and fractal-based schemes. Video Coding: The Second Generation Approach is an important introduction to the new coding techniques for video. As such, all researchers, students and practitioners working in image processing will find this book of interest.
Bringing together key researchers in disciplines ranging from visualization and image processing to applications in structural mechanics, fluid dynamics, elastography, and numerical mathematics, the workshop that generated this edited volume was the third in the successful Dagstuhl series. Its aim, reflected in the quality and relevance of the papers presented, was to foster collaboration and fresh lines of inquiry in the analysis and visualization of tensor fields, which offer a concise model for numerous physical phenomena. Despite their utility, there remains a dearth of methods for studying all but the simplest ones, a shortage the workshops aim to address. Documenting the latest progress and open research questions in tensor field analysis, the chapters reflect the excitement and inspiration generated by this latest Dagstuhl workshop, held in July 2009. The topics they address range from applications of the analysis of tensor fields to purer research into their mathematical and analytical properties. They show how cooperation and the sharing of ideas and data between those engaged in pure and applied research can open new vistas in the study of tensor fields.
Accurate Visual Metrology from Single and Multiple Uncalibrated Images presents novel techniques for constructing three-dimensional models from bi-dimensional images using virtual reality tools. Antonio Criminisi develops the mathematical theory of computing world measurements from single images, and builds up a hierarchy of novel, flexible techniques to make measurements and reconstruct three-dimensional scenes from uncalibrated images, paying particular attention to the accuracy of the reconstruction.
Essential background reading for engineers and scientists working in such fields as communications, control, signal, and image processing, radar and sonar, radio astronomy, seismology, remote sensing, and instrumentation. The book can be used as a textbook for a single course, as well as a combination of an introductory and an advanced course, or even for two separate courses, one in signal detection, the other in estimation.
Learn how and when to apply the latest phase and phase-difference modulation (PDM) techniques with this valuable guide for systems engineers and researchers. It helps you cut design time and fine-tune system performance.
The problem of structure and motion recovery from image sequences is an important theme in computer vision. Considerable progress has been made in this field during the past two decades, resulting in successful applications in robot navigation, augmented reality, industrial inspection, medical image analysis, and digital entertainment, among other areas. However, many of these methods work only for rigid objects and static scenes. The study of non-rigid structure from motion is not only of academic significance, but also has important practical applications in real-world, non-rigid or dynamic scenarios, such as human facial expressions and moving vehicles. This practical guide/reference provides a comprehensive overview of Euclidean structure and motion recovery, with a specific focus on factorization-based algorithms. The book discusses the latest research in this field, including the extension of the factorization algorithm to recover the structure of non-rigid objects, and presents some new algorithms developed by the authors. Readers require no significant knowledge of computer vision, although some background in projective geometry and matrix computation would be beneficial. Topics and features: presents the first systematic study of structure and motion recovery of both rigid and non-rigid objects from image sequences; discusses in depth the theory, techniques, and applications of rigid and non-rigid factorization methods in three-dimensional computer vision; examines numerous factorization algorithms, covering affine, perspective and quasi-perspective projection models; provides appendices describing the mathematical principles behind projective geometry, matrix decomposition, least squares, and nonlinear estimation techniques; includes chapter-ending review questions, and a glossary of terms used in the book. This unique text offers practical guidance in real applications and implementations of 3D modeling systems for practitioners in computer vision and pattern recognition, as well as serving as an invaluable source of new algorithms and methodologies for structure and motion recovery for graduate students and researchers.
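To give a flavour of the factorization approach for the rigid case, here is a minimal sketch of the rank-3 (Tomasi-Kanade style) affine factorization step. It recovers motion and shape only up to an affine ambiguity; the metric upgrade that resolves this ambiguity, and everything non-rigid, are omitted, and the function name is my own.

```python
import numpy as np

def rigid_factorization(W):
    """Affine factorization of a measurement matrix (sketch).

    W : 2F x P matrix of P feature points tracked over F frames
        (image x- and y-coordinates stacked row-wise).
    Returns motion M (2F x 3) and shape S (3 x P), up to an
    unknown 3x3 affine transform A (M A, A^-1 S fit equally well).
    """
    W = W - W.mean(axis=1, keepdims=True)     # register to per-frame centroids
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # under the affine camera model the registered W has rank 3,
    # so keep only the three largest singular values
    M = U[:, :3] * np.sqrt(s[:3])             # camera motion, up to A
    S = np.sqrt(s[:3])[:, None] * Vt[:3]      # 3-D shape, up to A^-1
    return M, S
```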
This book presents the main concepts in handling digital images of mixed content, traditionally referenced as mixed raster content (MRC), in two main parts. The first includes introductory chapters covering the scientific and technical background aspects, whereas the second presents a set of research and development approaches to tackle key issues in MRC segmentation, compression and transmission. The book starts with a review of color theory and the mechanism of color vision in humans. In turn, the second chapter reviews data coding and compression methods so as to set the background and demonstrate the complexity involved in dealing with MRC. Chapter three addresses the segmentation of images through an extensive literature review, which highlights the various approaches used to tackle MRC segmentation. The second part of the book focuses on the segmentation of color images for optimized compression, including multi-layered decomposition and representation of MRC and the processes that can be employed to optimize the coding rates of those different layers. Rounding out the coverage, the final chapter examines the segmentation of color images for optimized transmission.
Image Technology Design: A Perceptual Approach is an essential reference for both academic and professional researchers in the fields of image technology, image processing and coding, image display, and image quality. It bridges the gap between academic research on visual perception and image quality and applications of such research in the design of imaging systems.
Any task that involves decision-making can benefit from soft computing techniques, which allow premature decisions to be deferred. The processing and analysis of images is no exception to this rule. In the classical image analysis paradigm, the first step is nearly always some sort of segmentation process in which the image is divided into (hopefully, meaningful) parts. It was pointed out nearly 30 years ago by Prewitt [1] that the decisions involved in image segmentation could be postponed by regarding the image parts as fuzzy, rather than crisp, subsets of the image. It was also realized very early that many basic properties of and operations on image subsets could be extended to fuzzy subsets; for example, the classic paper on fuzzy sets by Zadeh [2] discussed the "set algebra" of fuzzy sets (using sup for union and inf for intersection), and extended the definition of convexity to fuzzy sets. These and similar ideas allowed many of the methods of image analysis to be generalized to fuzzy image parts. For a recent review of the geometric description of fuzzy sets see, e.g., [3]. Fuzzy methods are also valuable in image processing and coding, where learning processes can be important in choosing the parameters of filters, quantizers, etc.
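To make the set-algebra remark concrete, here is a tiny NumPy sketch of Zadeh's fuzzy union (sup) and intersection (inf) applied to membership maps; the example arrays are hypothetical.

```python
import numpy as np

# Two fuzzy subsets of an image, given as membership maps in [0, 1]
# (hypothetical example values; any arrays of matching shape work)
mu_a = np.array([[0.2, 0.8], [0.5, 1.0]])
mu_b = np.array([[0.6, 0.3], [0.5, 0.0]])

union        = np.maximum(mu_a, mu_b)   # sup, i.e. pointwise max, as in Zadeh [2]
intersection = np.minimum(mu_a, mu_b)   # inf, i.e. pointwise min
complement   = 1.0 - mu_a               # standard fuzzy complement
```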