Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition from understanding single images to analyzing image sequences, or video understanding. Video understanding deals with understanding of video sequences, e.g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic-looking images and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting are derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis.
The last few years have seen a great increase in the amount of data available to scientists, yet many of the techniques used to analyse this data cannot cope with such large datasets. Therefore, strategies need to be employed as a pre-processing step to reduce the number of objects or measurements whilst retaining important information. Spectral dimensionality reduction is one such tool for the data processing pipeline. Numerous algorithms and improvements have been proposed for the purpose of performing spectral dimensionality reduction, yet there is still no gold standard technique. This book provides a survey and reference aimed at advanced undergraduate and postgraduate students as well as researchers, scientists, and engineers in a wide range of disciplines. Dimensionality reduction has proven useful in a wide range of problem domains and so this book will be applicable to anyone with a solid grounding in statistics and computer science seeking to apply spectral dimensionality reduction to their work.
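As a hedged illustration of what such a technique looks like in practice, the sketch below computes a Laplacian-eigenmaps-style spectral embedding with NumPy; the function name, the Gaussian affinity, and the sigma parameter are illustrative assumptions, not the book's prescribed algorithm.

```python
import numpy as np

def spectral_embedding(X, n_components=2, sigma=1.0):
    """Reduce rows of X to n_components dimensions via graph Laplacian eigenvectors."""
    # Pairwise squared distances, turned into a Gaussian affinity matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    W = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)
    # Unnormalized graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W
    # Eigenvectors with the smallest nonzero eigenvalues give the embedding.
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_components + 1]
```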
Computer and Information Sciences is a unique and comprehensive review of advanced technology and research in the field of Information Technology. It provides an up-to-date snapshot of research in Europe and the Far East (Hong Kong, Japan and China) in the most active areas of information technology, including Computer Vision, Data Engineering, Web Engineering, Internet Technologies, Bio-Informatics and System Performance Evaluation Methodologies.
Software Visualization: From Theory to Practice was initially selected as a special volume for "The Annals of Software Engineering (ANSE) Journal," which has been discontinued. This special edited volume is the first to discuss software visualization from the perspective of software engineering. It is a collection of 14 chapters on software visualization, covering topics from theory to practical systems. The chapters are divided into four parts: Visual Formalisms, Human Factors, Architectural Visualization, and Visualization in Practice. Together they cover a comprehensive range of software visualization topics.
Software Visualization: From Theory to Practice is designed to meet the needs of both an academic and a professional audience composed of researchers and software developers. This book is also suitable for senior undergraduate and graduate students in software engineering and computer science, as a secondary text or a reference.
Brain imaging brings together the technology, methodology, research questions and approaches of a wide range of scientific fields including physics, statistics, computer science, neuroscience, biology, and engineering. Thus, methodological and technological advances that enable us to obtain measurements, examine relationships across observations, and link these data to neuroscientific hypotheses happen in a highly interdisciplinary environment. The dynamic field of machine learning with its modern approach to data mining provides many relevant approaches for neuroscience and enables the exploration of open questions. This state-of-the-art survey offers a collection of papers from the Workshop on Machine Learning and Interpretation in Neuroimaging, MLINI 2011, held at the 25th Annual Conference on Neural Information Processing Systems, NIPS 2011, in the Sierra Nevada, Spain, in December 2011. Additionally, invited speakers agreed to contribute reviews on various aspects of the field, adding breadth and perspective to the volume. The 32 revised papers were carefully selected from 48 submissions. At the interface between machine learning and neuroimaging the papers aim at shedding some light on the state of the art in this interdisciplinary field. They are organized in topical sections on coding and decoding, neuroscience, dynamics, connectivity, and probabilistic models and machine learning.
This uniquely authoritative and comprehensive handbook is the first to cover the vast field of formal languages, as well as its traditional and most recent applications to such diverse areas as linguistics, developmental biology, computer graphics, cryptology, molecular genetics, and programming languages. No other work comes even close to the scope of this one. The editors are extremely well-known theoretical computer scientists, and each individual topic is presented by the leading authorities in the particular field. The maturity of the field makes it possible to include a historical perspective in many presentations. The work is divided into three volumes, which may be purchased as a set.
Consumer electronics (CE) devices, providing multimedia entertainment and enabling communication, have become ubiquitous in daily life. However, consumer interaction with such equipment currently requires the use of devices such as remote controls and keyboards, which are often inconvenient, ambiguous and non-interactive. An important challenge for the modern CE industry is the design of user interfaces for CE products that enable interactions which are natural, intuitive and fun. As many CE products are supplied with microphones and cameras, the exploitation of both audio and visual information for interactive multimedia is a growing field of research. Collecting together contributions from an international selection of experts, including leading researchers in industry, this unique text presents the latest advances in applications of multimedia interaction and user interfaces for consumer electronics. Covering issues of both multimedia content analysis and human-machine interaction, the book examines a wide range of techniques from computer vision, machine learning, audio and speech processing, communications, artificial intelligence and media technology. Topics and features: introduces novel computationally efficient algorithms to extract semantically meaningful audio-visual events; investigates modality allocation in intelligent multimodal presentation systems, taking into account the cognitive impacts of modality on human information processing; provides an overview on gesture control technologies for CE; presents systems for natural human-computer interaction, virtual content insertion, and human action retrieval; examines techniques for 3D face pose estimation, physical activity recognition, and video summary quality evaluation; discusses the features that characterize the new generation of CE and examines how web services can be integrated with CE products for improved user experience. This book is an essential resource for researchers and practitioners from both academia and industry working in areas of multimedia analysis, human-computer interaction and interactive user interfaces. Graduate students studying computer vision, pattern recognition and multimedia will also find this a useful reference.
This volume constitutes the refereed proceedings of the Second International Conference on Multimedia and Signal Processing, CMSP 2012, held in Shanghai, China, in December 2012. The 79 full papers included in the volume were selected from 328 submissions from 10 different countries and regions. The papers are organized in topical sections on computer and machine vision, feature extraction, image enhancement and noise filtering, image retrieval, image segmentation, imaging techniques & 3D imaging, pattern recognition, multimedia systems, architecture, and applications, visualization, signal modeling, identification & prediction, speech & language processing, time-frequency signal analysis.
With a preface by Ton Kalker. Informed Watermarking is an essential tool for both academic and professional researchers working in the areas of multimedia security, information embedding, and communication. Theory and practice are linked, particularly in the area of multi-user communication. From the Preface: Watermarking has become a more mature discipline with proper foundation in both signal processing and information theory. We can truly say that we are in the era of second-generation watermarking. This book is the first to address watermarking problems in terms of second-generation insights. It provides a complete overview of the most important results on capacity and security. The Costa scheme, and in particular a simpler version of it, the Scalar Costa scheme, is studied in great detail. An important result of this book is that it is possible to approach the Shannon limit within a few decibels in a practical system. These results are verified on real-world data, not only the classical category of images, but also on chemical structure sets. Inspired by the work of Moulin and O'Sullivan, this book also addresses security aspects by studying AWGN attacks in terms of game theory. The authors of Informed Watermarking give a well-written exposé of how watermarking came of age, where we are now, and what to expect in the future. It is my expectation that this book will be a standard reference on second-generation watermarking for the years to come. Ton Kalker, Technische Universiteit Eindhoven
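As a rough, non-authoritative sketch of the Scalar Costa scheme the book studies: each bit shifts a uniform quantization lattice by half a step, and the host sample is moved only a fraction alpha of the way toward the nearest lattice point. The values of delta and alpha below are arbitrary illustrations; the book's analysis concerns how such parameters should be chosen against attack noise.

```python
import numpy as np

def scs_embed(x, bits, delta=8.0, alpha=0.6):
    """Embed one bit per host sample with the Scalar Costa scheme (illustrative params)."""
    d = bits * (delta / 2.0)                    # bit-dependent lattice shift
    q = delta * np.round((x - d) / delta) + d   # nearest point on the shifted lattice
    return x + alpha * (q - x)                  # move only partway: "informed" embedding

def scs_decode(y, delta=8.0):
    """Hard decision: which shifted lattice is closer to each received sample?"""
    d0 = np.abs(y - delta * np.round(y / delta))
    d1 = np.abs(y - (delta * np.round((y - delta / 2.0) / delta) + delta / 2.0))
    return (d1 < d0).astype(int)
```

In this sketch, with no attack noise, scs_decode(scs_embed(x, bits)) recovers the bits exactly whenever alpha exceeds 1/2, since the embedding error stays within a quarter of the lattice step.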
Exploration of Visual Data presents the latest research efforts in the area of content-based exploration of image and video data. The main objective is to bridge the semantic gap between high-level concepts in the human mind and low-level features extractable by machines. The two key issues emphasized are "content-awareness" and "user-in-the-loop". The authors provide a comprehensive review of algorithms for visual feature extraction based on color, texture, shape, and structure, and techniques for incorporating such information to aid browsing, exploration, search, and streaming of image and video data. They also discuss issues related to the mixed use of textual and low-level visual features to facilitate more effective access to multimedia data. Exploration of Visual Data provides state-of-the-art material on the topics of content-based description of visual data, content-based low-bitrate video streaming, and the latest asymmetric and nonlinear relevance feedback algorithms, which to date are unpublished.
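For flavor, here is a minimal sketch of one classic low-level color feature and a similarity ranking over it; the joint-histogram binning and histogram-intersection score are common textbook choices, not necessarily the authors' algorithms.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Joint RGB histogram of an (H, W, 3) uint8 image, normalized to sum to 1."""
    q = (img // (256 // bins)).reshape(-1, 3).astype(int)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins**3).astype(float)
    return h / h.sum()

def rank_by_similarity(query_hist, database_hists):
    """Rank database images by histogram intersection with the query (best first)."""
    scores = [np.minimum(query_hist, h).sum() for h in database_hists]
    return np.argsort(scores)[::-1]
```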
Restricted-orientation convexity is the study of geometric objects whose intersections with lines from some fixed set are connected. This notion generalizes standard convexity and several types of nontraditional convexity. The authors explore the properties of this generalized convexity in multidimensional Euclidean space, describe restricted-orientation analogs of lines, hyperplanes, flats, and halfspaces, and identify major properties of standard convex sets that also hold for restricted-orientation convexity. They then introduce the notion of strong restricted-orientation convexity, an alternative generalization of convexity, and show that its properties are also similar to those of standard convexity.
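A hedged formalization of the definition sketched above, in our own notation rather than the authors':

```latex
% O denotes the fixed set of permitted line orientations.
\textbf{Definition.} A set $S \subseteq \mathbb{R}^d$ is \emph{$\mathcal{O}$-convex} if, for every
line $\ell$ whose orientation belongs to $\mathcal{O}$, the intersection $\ell \cap S$ is connected
(possibly empty). Taking $\mathcal{O}$ to be the set of all orientations recovers standard convexity.
```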
Data visualization is currently a very active and vital area of research, teaching and development. The term unites the established field of scientific visualization and the more recent field of information visualization. The success of data visualization is due to the soundness of the basic idea behind it: the use of computer-generated images to gain insight and knowledge from data and its inherent patterns and relationships. A second premise is the utilization of the broad bandwidth of the human sensory system in steering and interpreting complex processes and simulations involving data sets from diverse scientific disciplines and large collections of abstract data from many sources. Data Visualization: The State of the Art presents the state of the art in scientific and information visualization techniques by experts in this field. It can serve as an overview for the inquiring scientist, and as a basic foundation for developers. This edited volume contains chapters dedicated to surveys of specific topics, and a great deal of original work not previously published, illustrated by examples from a wealth of applications. The book will also provide basic material for teaching state-of-the-art techniques in data visualization. Data Visualization: The State of the Art is designed to meet the needs of practitioners and researchers in scientific and information visualization. This book is also suitable as a secondary text for graduate-level students in computer science and engineering.
Topics in Knot Theory is a state-of-the-art volume which presents surveys of the field by the most famous knot theorists in the world. It also includes the most recent research work by graduate and postgraduate students. The new ideas presented cover racks, imitations, welded braids, wild braids, surgery, computer calculations and plotting, presentations of knot groups, and representations of knot and link groups in permutation groups, the complex plane, and/or groups of motions. For mathematicians, graduate students and scientists interested in knot theory.
Multi-Frame Motion-Compensated Prediction for Video Transmission presents a comprehensive description of a new technique in video coding and transmission. The work presented in the book has had a very strong impact on video coding standards and will be of interest to practicing engineers and researchers as well as academics. The multi-frame technique and the Lagrangian coder control have been adopted by the ITU-T as an integral part of the well-known H.263 standard and were adopted in the ongoing H.26L project of the ITU-T Video Coding Experts Group. This work will interest researchers and students in the field of video coding and transmission. Moreover, engineers in the field will also be interested, since an integral part of the well-known H.263 standard is based on the presented material.
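To make the core idea concrete, here is a hedged sketch of multi-frame block matching: a block searches a motion window in each of several reference frames for its best sum-of-absolute-differences predictor. All names and parameters are our own illustration; the book's Lagrangian coder control, which trades rate against distortion, is not reproduced here.

```python
import numpy as np

def best_match(block, refs, bi, bj, search=4):
    """Return (SAD, frame, di, dj) of the best predictor for `block` across reference frames."""
    B = block.shape[0]
    best = (float("inf"), 0, 0, 0)
    for t, ref in enumerate(refs):                # several past frames, not just one
        for di in range(-search, search + 1):
            for dj in range(-search, search + 1):
                i, j = bi + di, bj + dj
                if 0 <= i and 0 <= j and i + B <= ref.shape[0] and j + B <= ref.shape[1]:
                    # Cast to int so uint8 pixel differences cannot wrap around.
                    sad = np.abs(block.astype(int) - ref[i:i + B, j:j + B].astype(int)).sum()
                    if sad < best[0]:
                        best = (sad, t, di, dj)
    return best
```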
Numbering with colors is tutorial in nature, with many practical examples given throughout the presentation. It is heavily illustrated with gray-scale images, but also included is an 8-page signature of 4-color illustrations to support the presentation. While the organization is somewhat similar to that found in "The Data Handbook," there is little overlap with the content material in that publication. The first section in the book discusses Color Physics, Physiology and Psychology, talking about the details of the eye, the visual pathway, and how the brain converts colors into perceptions of hues. This is followed by the second section, in which Color Technologies are explained, i.e. how we describe colors using the CIE diagram, and how colors can be reproduced using various technologies such as offset printing and video screens. The third section of the book, Using Colors, relates how scientists and engineers can use color to help gain insight into their data sets through true color, false color, and pseudocolor imaging.
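As a tiny illustration of the pseudocolor imaging discussed in the third section, the sketch below pushes a scalar field through a simple blue-to-red ramp; the particular ramp is our arbitrary choice, not one the book prescribes.

```python
import numpy as np

def pseudocolor(data):
    """Map a 2-D scalar array to uint8 RGB with a blue-to-red ramp (illustrative)."""
    lo, hi = float(data.min()), float(data.max())
    t = (data - lo) / (hi - lo + 1e-12)          # normalize values to [0, 1]
    rgb = np.empty(data.shape + (3,))
    rgb[..., 0] = t                              # red grows with the value
    rgb[..., 1] = 1.0 - np.abs(2.0 * t - 1.0)    # green peaks in the mid-range
    rgb[..., 2] = 1.0 - t                        # blue fades as the value grows
    return (rgb * 255).astype(np.uint8)
```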
L systems are language-theoretic models for developmental biology. They were introduced in 1968 by Aristid Lindenmayer (1925-1989) and have proved to be among the most beautiful examples of interdisciplinary science, where work in one area induces fruitful ideas and results in other areas. L systems are based on relational and set-theoretic concepts, which are more suitable for the discrete and combinatorial structures of biology than mathematical models based on calculus or statistics. L systems have stimulated new work not only in the realistic simulation of developing organisms but also in the theory of automata and formal languages, formal power series, computer graphics, and combinatorics of words. This book contains research papers by almost all leading authorities and by many of the most promising young researchers in the field. The 28 contributions are organized in sections on basic L systems, computer graphics, graph grammars and map L systems, biological aspects and models, and variations and generalizations of L systems. The introductory paper by Lindenmayer and Jürgensen was written for a wide audience and is accessible to the non-specialist reader. The volume documents the state of the art in the theory of L systems and their applications. It will interest researchers and advanced students in theoretical computer science and developmental biology as well as professionals in computer graphics.
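The parallel-rewriting core of an L system fits in a few lines. The sketch below runs Lindenmayer's original algae model (A -> AB, B -> A); the function name is our own.

```python
def lsystem(axiom, rules, steps):
    """Rewrite every symbol in parallel, `steps` times (symbols with no rule are kept)."""
    for _ in range(steps):
        axiom = "".join(rules.get(c, c) for c in axiom)
    return axiom

# Lindenmayer's algae model: A -> AB, B -> A.
print(lsystem("A", {"A": "AB", "B": "A"}, 4))   # prints ABAABABA
```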
Depth recovery is important in machine vision applications when a 3-dimensional structure must be derived from 2-dimensional images. This is an active area of research with applications ranging from industrial robotics to military imaging. This book provides the comprehensive details of the methodology, along with the complete mathematics and algorithms involved. Many new models, both deterministic and statistical, are introduced.
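The blurb names no single method, but one hedged example of such a 2-D-to-3-D relation is classic two-view stereo, where depth is focal length times baseline divided by disparity:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo relation Z = f * B / d, valid where disparity is positive."""
    return focal_px * baseline_m / disparity_px
```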
Mathematical morphology (MM) is a theory for the analysis of spatial structures. It is called morphology since it aims at analysing the shape and form of objects, and it is mathematical in the sense that the analysis is based on set theory, topology, lattice algebra, random functions, etc. MM is not only a theory, but also a powerful image analysis technique. The purpose of the present book is to provide the image analysis community with a snapshot of current theoretical and applied developments of MM. The book consists of forty-five contributions classified by subject. It demonstrates a wide range of topics suited to the morphological approach.
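For a concrete taste of the operators on which MM is built, here is a naive sketch of binary erosion and dilation over a structuring element; it is written as plain loops for clarity and is not an implementation from the book.

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: a pixel turns on if the structuring element hits any foreground."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2,), (w // 2,)))
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.any(pad[i:i + h, j:j + w] & se)
    return out

def erode(img, se):
    """Binary erosion: a pixel stays on only if the element fits entirely inside."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2,), (w // 2,)))
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.all(pad[i:i + h, j:j + w][se.astype(bool)])
    return out
```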
'Subdivision' is a way of representing smooth shapes in a computer. A curve or surface (both of which contain an infinite number of points) is described in terms of two objects. One object is a sequence of vertices, which we visualise as a polygon, for curves, or a network of vertices, which we visualise by drawing the edges or faces of the network, for surfaces. The other object is a set of rules for making denser sequences or networks. When applied repeatedly, the denser and denser sequences are claimed to converge to a limit, which is the curve or surface that we want to represent. This book focusses on curves, because the theory for that is complete enough that a book claiming that our understanding is complete is exactly what is needed to stimulate research proving that claim wrong. Also because there are already a number of good books on subdivision surfaces. The way in which the limit curve relates to the polygon, and a lot of interesting properties of the limit curve, depend on the set of rules, and this book is about how one can deduce those properties from the set of rules, and how one can then use that understanding to construct rules which give the properties that one wants.
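One well-known instance of such a rule set is Chaikin's corner-cutting scheme, sketched below for an open polygon; its limit curve is a quadratic B-spline. The code is our illustration, not taken from the book.

```python
def chaikin(points, steps=3):
    """Chaikin's rules: replace each edge (P, Q) by points 1/4 and 3/4 along it."""
    for _ in range(steps):
        refined = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points

# Each step roughly doubles the vertex count; the polygons converge to a smooth curve.
curve = chaikin([(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)], steps=4)
```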
This book constitutes the refereed proceedings of the 5th International Workshop on Motion in Games, held in Rennes, France, in November 2012. The 23 revised full papers presented together with 9 posters and 5 extended abstracts were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on planning, interaction, physics, perception, behavior, virtual humans, locomotion, and motion capture.
The problem of analyzing sequences of images to extract three-dimensional motion and structure has been at the heart of research in computer vision for many years. It is very important since its success or failure will determine whether or not vision can be used as a sensory process in reactive systems. The considerable research interest in this field has been motivated at least by the following two points: 1. The redundancy of information contained in time-varying images can overcome several difficulties encountered in interpreting a single image. 2. There are a lot of important applications including automatic vehicle driving, traffic control, aerial surveillance, medical inspection and global model construction. However, there are many new problems which should be solved: how to efficiently process the abundant information contained in time-varying images, how to model the change between images, how to model the uncertainty inherently associated with the imaging system, and how to solve inverse problems which are generally ill-posed. There are of course many possibilities for attacking these problems and many more remain to be explored. We discuss a few of them in this book based on work carried out during the last five years in the Computer Vision and Robotics Group at INRIA (Institut National de Recherche en Informatique et en Automatique).
Physics-Based Deformable Models presents a systematic physics-based framework for modeling rigid, articulated, and deformable objects, their interactions with the physical world, and the estimation of their shape and motion from visual data. This book presents a large variety of methods and associated experiments in computer vision, graphics and medical imaging that help the reader better understand the presented material. In addition, special emphasis has been given to the development of techniques with interactive or close to real-time performance. Physics-Based Deformable Models is suitable as a secondary text for graduate-level courses in Computer Graphics, Computational Physics, Computer Vision, Medical Imaging, and Biomedical Engineering. In addition, this book is appropriate as a reference for researchers and practitioners in the above-mentioned fields.
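As a hedged, minimal example of the simulation style such a framework rests on, the sketch below advances a 2-D mass-spring deformable model by one explicit-Euler step; the constants and the damping heuristic are our illustrative choices, not the authors' formulation.

```python
import numpy as np

def step(pos, vel, springs, rest, k=50.0, mass=1.0, dt=0.01, damping=0.98):
    """Advance particle positions/velocities one explicit-Euler step under spring forces."""
    force = np.zeros_like(pos)
    for (a, b), L in zip(springs, rest):
        d = pos[b] - pos[a]
        length = np.linalg.norm(d) + 1e-12
        f = k * (length - L) * (d / length)   # Hooke's law along the spring direction
        force[a] += f
        force[b] -= f
    vel = damping * (vel + dt * force / mass)
    return pos + dt * vel, vel
```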
Acoustic Signal Processing for Ocean Exploration has two major goals: (i) to present signal processing algorithms that take into account the models of acoustic propagation in the ocean; and (ii) to give a perspective of the broad set of techniques, problems, and applications arising in ocean exploration. The book discusses related issues and problems focused on model-based acoustic signal processing methods. Besides addressing the problem of the propagation of acoustics in the ocean, it presents relevant acoustic signal processing methods like matched field processing, array processing, and localization and detection techniques. These more traditional contexts are herein enlarged to include imaging and mapping, and new signal representation models like time/frequency and wavelet transforms. Several applied aspects of these topics, such as the application of acoustics to fisheries, sea floor swath mapping by swath bathymetry and side scan sonar, autonomous underwater vehicles, and underwater communications, are also considered.
In the early 1990s, the establishment of the Internet brought forth a revolutionary viewpoint of information storage, distribution, and processing: the World Wide Web is becoming an enormous and expanding distributed digital library. Along with the development of the Web, image indexing and retrieval have grown into research areas sharing a vision of intelligent agents. Far beyond Web searching, image indexing and retrieval can potentially be applied to many other areas, including biomedicine, space science, biometric identification, digital libraries, the military, education, commerce, culture and entertainment. Machine Learning and Statistical Modeling Approaches to Image Retrieval describes several approaches to integrating machine learning and statistical modeling into an image retrieval and indexing system, with promising results demonstrated. The topics of this book reflect the authors' experiences with machine learning and statistical modeling based image indexing and retrieval. This book also contains detailed references for further reading and research in this field.