This book contains the research on modeling bodies, cloth and character-based adaptation performed during the last three years at MIRALab at the University of Geneva. More than ten researchers have worked together in order to reach a truly 3D Virtual Try-On. By Virtual Try-On we mean the possibility for anyone to enter dimensions for a predefined body and obtain a body shape of her own size, select a 3D garment and see herself animated in real time, walking along a catwalk. Some systems exist today, but they are unable to adapt to body dimensions and have no real-time animation of body and clothes. A true web-based Virtual Try-On system does not exist so far. This book is an attempt to explain how to build a 3D Virtual Try-On system, which is now very much in demand in the clothing industry. To describe this work, the book is divided into five chapters. The first chapter contains a brief historical background of general deformation methods. It ends with a section on the 3D human body scanner systems that are used both for rapid prototyping and statistical analyses of human body size variations.
Signal Recovery Techniques for Image and Video Compression and Transmission establishes a bridge between the fields of signal recovery and image and video compression. Traditionally these fields have developed separately because the problems they examined were regarded as very different, and the techniques used appear unrelated. Recently, though, there is growing consensus among the research community that the two fields are quite closely related. Indeed, in both fields the objective is to reconstruct the best possible signal from limited information. The field of signal recovery, which is relatively mature, has long been associated with a wealth of powerful mathematical techniques such as Bayesian estimation and the theory of projections onto convex sets (to name just two). This book illustrates for the first time in a complete volume how these techniques can be brought to bear on the very important problems of image and video compression and transmission. Signal Recovery Techniques for Image and Video Compression and Transmission, which is written by leading practitioners in both fields, is one of the first references that addresses this approach and serves as an excellent information source for both researchers and practicing engineers.
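Since the blurb singles out projections onto convex sets (POCS), a minimal sketch may help fix the idea: alternately projecting a point onto two convex sets converges to a point in their intersection when one exists. The two sets below (a coordinate box and a hyperplane) are illustrative choices, not examples from the book.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Projection onto the box [lo, hi]^n (a convex set): clip each coordinate.
    return np.clip(x, lo, hi)

def project_hyperplane(x, a, b):
    # Projection onto the hyperplane {x : a.x = b} (also convex).
    return x - (a @ x - b) / (a @ a) * a

def pocs(x0, a, b, iters=200):
    # Alternate the two projections; for convex sets with a nonempty
    # intersection, the iterates converge to a point in the intersection.
    x = x0.astype(float)
    for _ in range(iters):
        x = project_hyperplane(project_box(x), a, b)
    return x

x = pocs(np.array([3.0, -2.0, 0.5]), a=np.ones(3), b=1.0)
```

In signal recovery the convex sets typically encode prior knowledge (band limits, known pixel values, amplitude bounds); the alternating-projection skeleton stays the same.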
The book discusses the impact of machine learning and computational intelligent algorithms on medical image data processing, and introduces the latest trends in machine learning technologies and computational intelligence for intelligent medical image analysis. The topics covered include automated region of interest detection of magnetic resonance images based on center of gravity; brain tumor detection through low-level features detection; automatic MRI image segmentation for brain tumor detection using the multi-level sigmoid activation function; and computer-aided detection of mammographic lesions using convolutional neural networks.
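As a toy illustration of sigmoid-based soft segmentation (the chapter on the multi-level sigmoid activation function points in this direction), pixel intensities can be mapped through a logistic function so that a hard threshold becomes a smooth foreground probability. The `center` and `steepness` parameters here are illustrative assumptions, not values from the book.

```python
import numpy as np

def sigmoid_mask(image, center=0.5, steepness=10.0):
    # Soft segmentation: intensities well above `center` map near 1
    # (foreground), intensities well below map near 0 (background).
    # Both parameters are hypothetical, chosen only for illustration.
    return 1.0 / (1.0 + np.exp(-steepness * (image - center)))

img = np.array([[0.1, 0.2],
                [0.8, 0.9]])
mask = sigmoid_mask(img)
```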
Abstraction is a fundamental mechanism underlying both human and artificial perception, representation of knowledge, reasoning and learning. This mechanism plays a crucial role in many disciplines, notably Computer Programming, Natural and Artificial Vision, Complex Systems, Artificial Intelligence and Machine Learning, Art, and Cognitive Sciences. This book first provides the reader with an overview of the notions of abstraction proposed in various disciplines, comparing their commonalities and differences. After discussing the characterizing properties of abstraction, a formal model, the KRA model, is presented to capture them. This model makes the notion of abstraction easily applicable through the introduction of a set of abstraction operators and abstraction patterns, reusable across different domains and applications. It is the impact of abstraction in Artificial Intelligence, Complex Systems and Machine Learning that forms the core of the book. A general framework, based on the KRA model, is presented, and its pragmatic power is illustrated with three case studies: model-based diagnosis, cartographic generalization, and learning Hierarchical Hidden Markov Models.
The related fields of fractal image encoding and fractal image analysis have blossomed in recent years. This book, originating from a NATO Advanced Study Institute held in 1995, presents work by leading researchers. It develops the subjects at an introductory level, but it also presents recent and exciting results in both fields.
Considerable evidence exists that visual sensory information is analyzed simultaneously along two or more independent pathways. In the past two decades, researchers have extensively used the concept of parallel visual channels as a framework to direct their explorations of human vision. More recently, basic and clinical scientists have found such a dichotomy applicable to the way we organize our knowledge of visual development, higher order perception, and visual disorders, to name just a few. This volume attempts to provide a forum for gathering these different perspectives.
"Advanced RenderMan: Creating CGI for Motion Pictures" is precisely what you and other RenderMan users are dying for. Written by the world's foremost RenderMan experts, it offers thoroughly updated coverage of the standard while moving beyond the scope of the original "RenderMan Companion" to provide in-depth information on dozens of advanced topics. Both a reference and a tutorial, this book will quickly prove indispensable, whether you're a technical director, graphics programmer, modeler, animator, or hobbyist.
Microscope Image Processing, Second Edition, introduces the basic fundamentals of image formation in microscopy including the importance of image digitization and display, which are key to quality visualization. Image processing and analysis are discussed in detail to provide readers with the tools necessary to improve the visual quality of images, and to extract quantitative information. Basic techniques such as image enhancement, filtering, segmentation, object measurement, and pattern recognition cover concepts integral to image processing. In addition, chapters on specific modern microscopy techniques such as fluorescence imaging, multispectral imaging, three-dimensional imaging and time-lapse imaging, introduce these key areas with emphasis on the differences among the various techniques. The new edition discusses recent developments in microscopy such as light sheet microscopy, digital microscopy, whole slide imaging, and the use of deep learning techniques for image segmentation and analysis with big data image informatics and management. Microscope Image Processing, Second Edition, is suitable for engineers, scientists, clinicians, post-graduate fellows and graduate students working in bioengineering, biomedical engineering, biology, medicine, chemistry, pharmacology and related fields, who use microscopes in their work and would like to understand the methodologies and capabilities of the latest digital image processing techniques or desire to develop their own image processing algorithms and software for specific applications.
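Two of the basic techniques named above, enhancement by smoothing and segmentation by thresholding, can be sketched in a few lines. This is a generic illustration, not code from the book.

```python
import numpy as np

def mean_filter(image, k=3):
    # Basic enhancement: replace each pixel by the mean of its k x k
    # neighborhood, using edge-replicated padding at the borders.
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def threshold_segment(image, t):
    # Basic segmentation: a global threshold yields a binary object mask.
    return (image > t).astype(np.uint8)

img = np.zeros((5, 5))
img[2, 2] = 90.0                      # one bright "object" pixel
smooth = mean_filter(img)             # spreads it over a 3 x 3 patch
mask = threshold_segment(smooth, t=5)
```

Real microscope pipelines layer object measurement and pattern recognition on top of masks like `mask`; the point here is only the order of operations: enhance, then segment, then measure.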
"La narración literaria es la evocación de las nostalgias." ("Literary narration is the evocation of nostalgia.") G. G. Marquez, interview in Puerta del Sol, VII, 4, 1996. A Personal Prehistory: In 1972 I started cooperating with members of the Biodynamics Research Unit at the Mayo Clinic in Rochester, Minnesota, which was under the direction of Earl H. Wood. At that time, their ambitious (and eventually realized) dream was to build the Dynamic Spatial Reconstructor (DSR), a device capable of collecting data regarding the attenuation of X-rays through the human body fast enough for stop-action imaging of the full extent of the beating heart inside the thorax. Such a device can be applied to study the dynamic processes of cardiopulmonary physiology, in a manner similar to the application of an ordinary CT (computerized tomography) scanner to observing stationary anatomy. The standard method of displaying the information produced by a CT scanner consists of showing two-dimensional images, corresponding to maps of the X-ray attenuation coefficient in slices through the body. (Since different tissue types attenuate X-rays differently, such maps provide a good visualization of what is in the body in those slices; bone, which attenuates X-rays a lot, appears white; air appears black; tumors typically appear less dark than the surrounding healthy tissue; etc.) However, it seemed to me that this display mode would not be appropriate for the DSR.
Digital signal processing (DSP) covers a wide range of applications such as signal acquisition, analysis, transmission, storage, and synthesis. Special attention is needed for the VLSI (very large scale integration) implementation of high-performance DSP systems, with examples from video and radar applications. This book provides basic architectures for VLSI implementations of DSP tasks, covering architectures for application-specific circuits and programmable DSP circuits. It fills an important gap in the literature by focusing on the transition from algorithm specification to architectures for VLSI implementations. Areas covered include:
Advanced Video-Based Surveillance Systems presents second generation surveillance systems that automatically process large sets of signals for performance monitoring tasks. Included is coverage of different architecture designs, customization of surveillance architecture for end-users, advances in the processing of imaging sequences, security systems, sensors, and remote monitoring projects. Examples are provided of surveillance applications in highway traffic control, subway stations, wireless communications, and other areas. This work will be of interest to researchers in image processing, computer vision, digital signal processing, and telecommunications.
In the past several years, there have been significant technological advances in the field of crisis response. However, many aspects concerning the efficient collection and integration of geo-information, applied semantics and situation awareness for disaster management remain open. Improving crisis response systems and making them intelligent requires extensive collaboration between emergency responders, disaster managers, system designers and researchers alike. To facilitate this process, the Gi4DM (GeoInformation for Disaster Management) conferences have been held regularly since 2005. The events are coordinated by the Joint Board of Geospatial Information Societies (JB GIS) and ICSU GeoUnions. This book presents the outcomes of the Gi4DM 2018 conference, which was organised by the ISPRS-URSI Joint Working Group ICWG III/IVa: Disaster Assessment, Monitoring and Management and held in Istanbul, Turkey on 18-21 March 2018. It includes 12 scientific papers focusing on the intelligent use of geo-information, semantics and situation awareness.
This text explains how advances in wavelet analysis provide new means for multiresolution analysis and describes its wide array of powerful tools. The book covers such topics as: variations of the windowed Fourier transform; constructions of special waveforms suitable for specific tasks; the use of redundant representations in reconstruction and enhancement; applications of efficient numerical compression as a tool for fast numerical analysis; and approximation properties of various waveforms in different contexts.
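The multiresolution idea described above can be made concrete with the simplest waveform of all, the Haar wavelet: pairwise averages give a coarse approximation, and pairwise differences give the detail needed to reconstruct exactly. This one-level sketch is a generic illustration, not an excerpt from the text.

```python
import numpy as np

def haar_step(signal):
    # One level of the Haar wavelet transform: scaled pairwise sums form
    # the coarse (low-pass) approximation, scaled pairwise differences
    # the detail (high-pass) coefficients. Multiresolution analysis
    # repeats this step on the approximation band.
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    # Perfect reconstruction from the two coefficient bands.
    s = np.empty(2 * len(approx))
    s[0::2] = (approx + detail) / np.sqrt(2)
    s[1::2] = (approx - detail) / np.sqrt(2)
    return s

x = np.array([4.0, 2.0, 5.0, 5.0])
a, d = haar_step(x)
```

Compression enters when small detail coefficients are discarded: smooth stretches of the signal produce details near zero, so little information is lost.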
This book provides basic theories and implementations using the SCILAB open-source software for digital images. The book simplifies image processing theory as well as the implementation of image processing algorithms, making it accessible to those with basic knowledge of image processing. It includes more than sixty SCILAB programs, presented at the end of each theory section, which help in understanding the concepts. In the appendix, readers will find a deeper glimpse into research areas in image processing.
Information technology is the enabling foundation for all human activity at the beginning of the 21st century, and advances in this area are crucial to all of us. These advances take place all over the world and can only be followed and understood when researchers from all over the world assemble and exchange their ideas at conferences such as the 26th International Symposium on Computer and Information Sciences, held at the Royal Society in London on 26-28 September 2011, whose proceedings this volume presents. Computer and Information Sciences II contains novel advances in the state of the art covering applied research in electrical and computer engineering and computer science, across the broad area of information technology. It provides access to the main innovative activities in research across the world, and points to the results obtained recently by some of the most active teams in both Europe and Asia.
In contrast with trichromatic image sensors, imaging spectroscopy can capture the properties of the materials in a scene. This implies that scene analysis using imaging spectroscopy has the capacity to robustly encode material signatures, infer object composition and recover photometric parameters. This landmark text/reference presents a detailed analysis of spectral imaging, describing how it can be used in elegant and efficient ways for the purposes of material identification, object recognition and scene understanding. The opportunities and challenges of combining spatial and spectral information are explored in depth, as are a wide range of applications from surveillance and computational photography, to biosecurity and resource exploration. Topics and features: discusses spectral image acquisition by hyperspectral cameras, and the process of spectral image formation; examines models of surface reflectance, the recovery of photometric invariants, and the estimation of the illuminant power spectrum from spectral imagery; describes spectrum representations for the interpolation of reflectance and radiance values, and the classification of spectra; reviews the use of imaging spectroscopy for material identification; explores the recovery of reflection geometry from image reflectance; investigates spectro-polarimetric imagery, and the recovery of object shape and material properties using polarimetric images captured from a single view. An essential resource for researchers and graduate students of computer vision and pattern recognition, this comprehensive introduction to imaging spectroscopy for scene analysis will also be of great use to practitioners interested in shape analysis employing polarimetric imaging, and material recognition and classification using hyperspectral or multispectral data.
This book brings together concepts and approaches from the fields of photogrammetry and computer vision. In particular, it examines techniques relating to quantitative image analysis, such as orientation, camera modelling, system calibration, self-calibration and error handling. The chapters have been contributed by experts in the relevant fields, and there are examples from automated inspection systems and other real-world cases. The book provides study material for students, researchers, developers and practitioners.
This book carries forward recent work on visual patterns and structures in digital images and introduces a near set-based topology of digital images. Visual patterns arise naturally in digital images viewed as sets of non-abstract points endowed with some form of proximity (nearness) relation. Proximity relations make it possible to construct uniform topologies on the sets of points that constitute a digital image. In keeping with an interest in gaining an understanding of digital images themselves as a rich source of patterns, this book introduces the basics of digital images from a computer vision perspective. In parallel, it also introduces the basics of proximity spaces. Not only the traditional view of spatial proximity relations but also the more recent descriptive proximity relations are considered. The beauty of the descriptive proximity approach is that it makes it possible to discover visual set patterns among sets that are non-overlapping and spatially non-adjacent. By combining the spatial proximity and descriptive proximity approaches, the search for salient visual patterns in digital images is enriched, deepened and broadened. A generous provision of Matlab and Mathematica scripts is used in this book to lay bare the fabric and essential features of digital images for those who are interested in finding visual patterns in images. The combination of computer vision techniques and topological methods leads to a deep understanding of images.
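The descriptive proximity idea can be illustrated in a few lines (the book's own scripts are in Matlab and Mathematica; this is a hypothetical Python analogue): two spatially disjoint pixel regions count as descriptively near when their sets of feature descriptions overlap.

```python
def descriptively_near(region_a, region_b, describe=lambda px: px):
    # Two regions are descriptively near when the sets of descriptions of
    # their members intersect, even though the regions share no pixels and
    # need not be spatially adjacent. Here a "description" is simply the
    # grey level; richer feature vectors would work the same way.
    return bool({describe(p) for p in region_a} &
                {describe(p) for p in region_b})

left_patch = [12, 12, 200]   # grey levels in one image region (illustrative)
right_patch = [200, 40]      # a disjoint region elsewhere in the image
dark_patch = [5, 6]
```

The spatial notion of proximity would declare `left_patch` and `right_patch` unrelated; the descriptive notion links them through the shared grey level 200.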
In this book, three main notions are used in the editors' search for improvements in various areas of computer graphics: Artificial Intelligence, Viewpoint Complexity and Human Intelligence. Several Artificial Intelligence techniques are used in the intelligent scene modelers presented, mainly declarative ones. Among them, the most commonly used techniques are expert systems, Constraint Satisfaction Problem resolution and machine learning. The notion of viewpoint complexity, that is, the complexity of a scene seen from a given viewpoint, is used in improvement proposals for many computer graphics problems such as scene understanding, virtual world exploration, image-based modeling and rendering, ray tracing and radiosity. Very often, viewpoint complexity is used in conjunction with Artificial Intelligence techniques like heuristic search and problem resolution. The notions of Artificial Intelligence and Viewpoint Complexity may help to automatically resolve a large number of computer graphics problems. However, there are special situations where it is required to find a particular solution for each situation. In such cases, human intelligence has to replace, or be combined with, artificial intelligence. Such cases, and the solutions proposed for them, are also presented in this book.
Image technology is a continually evolving field with various applications such as image processing and analysis, biometrics, pattern recognition, object tracking, remote sensing, medical diagnosis and multimedia. Significant progress has been made in the level of interest in image morphology, neural networks, full-color image processing, image data compression, image recognition, and knowledge-based image analysis systems.
Mathematical Methods for Signal and Image Analysis and Representation presents the mathematical methodology for generic image analysis tasks. In the context of this book an image may be any m-dimensional empirical signal living on an n-dimensional smooth manifold (typically, but not necessarily, a subset of spacetime). The existing literature on image methodology is rather scattered and often limited to either a deterministic or a statistical point of view. In contrast, this book brings together these seemingly different points of view in order to stress their conceptual relations and formal analogies. Furthermore, it does not focus on specific applications, although some are detailed for the sake of illustration, but on the methodological frameworks on which such applications are built, making it an ideal companion for those seeking a rigorous methodological basis for specific algorithms as well as for those interested in the fundamental methodology per se. Covering many topics at the forefront of current research, including anisotropic diffusion filtering of tensor fields, this book will be of particular interest to graduate and postgraduate students and researchers in the fields of computer vision, medical imaging and visual perception.
This book publishes a collection of original scientific research articles that address the state-of-art in using partial differential equations for image and signal processing. Coverage includes: level set methods for image segmentation and construction, denoising techniques, digital image inpainting, image dejittering, image registration, and fast numerical algorithms for solving these problems.
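As a minimal example of the PDE viewpoint, the classical denoising model is linear diffusion, i.e. running the heat equation on the image; the level set, inpainting and registration models in such a volume build on the same numerical machinery. The explicit scheme below, with periodic boundaries via `np.roll`, is a generic sketch rather than an algorithm from any particular chapter.

```python
import numpy as np

def heat_step(u, dt=0.2):
    # One explicit Euler step of the heat equation u_t = u_xx + u_yy,
    # the simplest PDE denoising model (isotropic linear diffusion).
    # The 5-point Laplacian uses periodic boundaries; dt <= 0.25 keeps
    # the explicit scheme stable.
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return u + dt * lap

u = np.zeros((8, 8))
u[4, 4] = 1.0            # a single noisy spike
for _ in range(10):
    u = heat_step(u)     # the spike diffuses into a smooth blob
```

Diffusion conserves total intensity while flattening sharp features, which is exactly why plain heat flow blurs edges and why the more refined models in the book (anisotropic and level set based) modulate the diffusion instead.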
Advancements in digital sensor technology, digital image analysis techniques, as well as computer software and hardware have brought together the fields of computer vision and photogrammetry, which are now converging towards sharing, to a great extent, objectives and algorithms. The potential for mutual benefits by the close collaboration and interaction of these two disciplines is great, as photogrammetric know-how can be aided by the most recent image analysis developments in computer vision, while modern quantitative photogrammetric approaches can support computer vision activities. Devising methodologies for automating the extraction of man-made objects (e.g. buildings, roads) from digital aerial or satellite imagery is an application where this cooperation and mutual support is already reaping benefits. The valuable spatial information collected using these interdisciplinary techniques is of improved qualitative and quantitative accuracy. This book offers a comprehensive selection of high-quality and in-depth contributions from world-wide leading research institutions, treating theoretical as well as implementational issues, and representing the state-of-the-art on this subject among the photogrammetric and computer vision communities.
Due to the rapid increase in readily available computing power, a corresponding increase in the complexity of problems being tackled has occurred in the field of systems as a whole. A plethora of new methods which can be used on the problems has also arisen with a constant desire to deal with more and more difficult applications. Unfortunately, by increasing the accuracy in models employed along with the use of appropriate algorithms with related features, the resultant necessary computations can often be of very high dimension. This brings with it a whole new breed of problem which has come to be known as "The Curse of Dimensionality". The expression "Curse of Dimensionality" can in fact be traced back to Richard Bellman in the 1960's. However, it is only in the last few years that it has taken on a widespread practical significance, although the term dimensionality does not have a unique precise meaning and is being used in a slightly different way in the context of algorithmic and stochastic complexity theory or in everyday engineering. In principle the dimensionality of a problem depends on three factors: on the engineering system (subject), on the concrete task to be solved and on the available resources. A system is of high dimension if it contains a lot of elements/variables and/or the relationship/connection between the elements/variables is complicated.
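The combinatorial blow-up behind Bellman's phrase is easy to exhibit: sampling a function on a regular grid with a fixed resolution per axis requires a number of points that is exponential in the dimension. The figures below are an illustration, not taken from the book.

```python
def grid_points(per_axis, dims):
    # Points needed for a regular grid with `per_axis` samples along each
    # of `dims` axes. The cost grows exponentially in the dimension,
    # which is the "Curse of Dimensionality" in its simplest form.
    return per_axis ** dims

# 10 samples per axis: 10 points in 1-D, but 10 billion in 10-D.
costs = {d: grid_points(10, d) for d in (1, 2, 3, 6, 10)}
```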
Principles of Visual Information Retrieval introduces the basic concepts and techniques in VIR and develops a foundation that can be used for further research and study. Divided into two parts, the first part describes the fundamental principles. A chapter is devoted to each of the main features of VIR, such as colour, texture and shape-based search. There is coverage of search techniques for time-based image sequences or videos, and an overview of how to combine all the basic features described and integrate context into the search process. The second part looks at advanced topics such as multimedia query specification, visual learning and semantics, and offers state-of-the-art coverage that is not available in any other book on the market. This book will be essential reading for researchers in VIR, and for final year undergraduate and postgraduate students on courses such as Multimedia Information Retrieval, Multimedia Databases, Computer Vision and Pattern Recognition.