There are many good AI books, but they usually devote at most one or two chapters to the processing of imprecise knowledge. To our knowledge, this is among the few books entirely dedicated to the treatment of knowledge imperfection when building intelligent systems. We consider that an entire book should be focused on this important aspect of knowledge processing. The expected audience for this book includes undergraduate students in computer science, IT&C, mathematics, business, medicine, etc., as well as graduates, specialists and researchers in these fields. The subjects treated in the book include expert systems, knowledge representation, and reasoning under knowledge imperfection (probability theory, possibility theory, belief theory, and approximate reasoning). Most of the examples discussed in detail throughout the book are from the medical domain. Each chapter ends with a set of carefully and pedagogically chosen exercises, with complete solutions provided. Working through them will cement the theoretical notions, concepts and results. Chapter 1 is dedicated to a review of expert systems; it briefly discusses production rules, the structure of an ES, reasoning in an ES, and conflict resolution. Chapter 2 treats knowledge representation, including the differences between data, information and knowledge, logical systems with a focus on predicate calculus, inference rules in classical logic, semantic nets, and frames.
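To give a flavour of the probabilistic reasoning such a book covers, here is a minimal sketch (hypothetical numbers, not taken from the book) of Bayes' rule applied to a medical diagnostic test:

```python
# Hypothetical rates for illustration only.
prevalence = 0.01        # P(disease)
sensitivity = 0.95       # P(positive | disease)
specificity = 0.90       # P(negative | no disease)

# Bayes' rule: P(disease | positive) =
#   P(positive | disease) * P(disease) / P(positive)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.088
```

Even a highly sensitive test yields a low posterior when the disease is rare, which is exactly the kind of counter-intuitive result that motivates a careful treatment of uncertain knowledge.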
How does one determine how similar two maps are? This book presents the theory of spatial similarity relations and its application in automated map generalization, including the definitions, classification and features of spatial similarity relations. Included also are calculation models of spatial similarity relations between arbitrary individual objects and between arbitrary object groups, and the application of the theory in automating the algorithms and procedures of map generalization.
This indispensable text introduces the foundations of three-dimensional computer vision and describes recent contributions to the field. Fully revised and updated, this much-anticipated new edition reviews a range of triangulation-based methods, including linear and bundle adjustment based approaches to scene reconstruction and camera calibration, stereo vision, point cloud segmentation, and pose estimation of rigid, articulated, and flexible objects. Also covered are intensity-based techniques that evaluate the pixel grey values in the image to infer three-dimensional scene structure, and point spread function based approaches that exploit the effect of the optical system. The text shows how methods which integrate these concepts are able to increase reconstruction accuracy and robustness, describing applications in industrial quality inspection and metrology, human-robot interaction, and remote sensing.
Enterprise Interoperability is the ability of an enterprise or organisation to work with other enterprises or organisations without special effort. It is now recognised that interoperability of systems, and thus sharing of information, is not sufficient to ensure common understanding between enterprises. Knowledge of what information means and an understanding of how it is to be used must also be shared if decision makers distributed across those enterprises in the network are to act consistently and efficiently. Industry's need for Enterprise Interoperability has been one of the significant drivers for research into the Internet of the Future. EI research will embrace and extend contributions from the Internet of Things and the Internet of Services, and will go on to drive the future needs for Internets of People, Processes, and Knowledge.
Computer games have become a major cultural and economic force, and a subject of extensive academic interest. Up until now, however, computer games have received relatively little attention from philosophy. Seeking to remedy this, the present collection of newly written papers by philosophers and media researchers addresses a range of philosophical questions related to three issues of crucial importance for understanding the phenomenon of computer games: the nature of gameplay and player experience, the moral evaluability of player and avatar actions, and the reality status of the gaming environment. By doing so, the book aims to establish the philosophy of computer games as an important strand of computer games research, and as a separate field of philosophical inquiry. The book is required reading for anyone with an academic or professional interest in computer games, and will also be of value to readers curious about the philosophical issues raised by contemporary digital culture.
Advancements in digital sensor technology, digital image analysis techniques, as well as computer software and hardware have brought together the fields of computer vision and photogrammetry, which are now converging towards sharing, to a great extent, objectives and algorithms. The potential for mutual benefits by the close collaboration and interaction of these two disciplines is great, as photogrammetric know-how can be aided by the most recent image analysis developments in computer vision, while modern quantitative photogrammetric approaches can support computer vision activities. Devising methodologies for automating the extraction of man-made objects (e.g. buildings, roads) from digital aerial or satellite imagery is an application where this cooperation and mutual support is already reaping benefits. The valuable spatial information collected using these interdisciplinary techniques is of improved qualitative and quantitative accuracy. This book offers a comprehensive selection of high-quality and in-depth contributions from world-wide leading research institutions, treating theoretical as well as implementational issues, and representing the state-of-the-art on this subject among the photogrammetric and computer vision communities.
Robotics and autonomous systems can aid disabled individuals in daily living or make a workplace more productive, but these tools are only as effective as the technology behind them. Robotic systems must be able to accurately identify and act upon elements in their environment to be effective in performing their duties. Innovative Research in Attention Modeling and Computer Vision Applications explores the latest research in image processing and pattern recognition for use in robotic real-time cryptography and surveillance applications. This book provides researchers, students, academicians, software designers, and application developers with next-generation insight into the use of computer vision technologies in a variety of industries and endeavors. This premier reference work includes chapters on topics ranging from biometric and facial recognition technologies, to digital image and video watermarking, among many others.
Measurement of Image Velocity presents a computational framework for computing motion information from sequences of images. Its specific goal is the measurement of image velocity (or optical flow), the projection of 3-D object motion onto the 2-D image plane. The formulation of the problem emphasizes the geometric and photometric properties of image formation, and the occurrence of multiple image velocities caused, for example, by specular reflections, shadows, or transparency. The method proposed for measuring image velocity is based on the phase behavior in the output of velocity-tuned filters. Extensive experimental work is used to show that phase can be a reliable source of image velocity in cases of pure image translation, small geometric deformation, smooth contrast variations, and multiple local velocities. Extensive theoretical analysis is used to explain the robustness of phase with respect to deviations from image translation, and to detect situations in which phase becomes unstable. The results indicate that optical flow may be extracted reliably enough for computing egomotion and structure from motion. The monograph also contains a review of other techniques and of frequency analysis applied to image sequences, and it discusses the closely related topics of zero-crossing tracking, gradient-based methods, and the measurement of binocular disparity. The work is relevant to those studying machine vision and visual perception.
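The core phase-based idea can be illustrated in one dimension (a minimal sketch under simplifying assumptions, not the book's full framework): for a translating signal, velocity is recovered from the ratio of temporal to spatial phase derivatives, v ≈ -φ_t / φ_x.

```python
import numpy as np
from scipy.signal import hilbert

# A 1-D signal translating at v_true pixels per frame.
v_true, n = 2.0, 256
x = np.arange(n)
frame = lambda t: np.sin(2 * np.pi * (x - v_true * t) / 32.0)

# Local phase from the analytic signal (Hilbert transform).
phase = [np.unwrap(np.angle(hilbert(frame(t)))) for t in (0, 1)]

phi_t = phase[1] - phase[0]      # temporal phase derivative
phi_x = np.gradient(phase[0])    # spatial phase derivative
valid = np.abs(phi_x) > 1e-3     # discard points with unstable phase
v_est = np.median(-phi_t[valid] / phi_x[valid])
print(f"true v = {v_true}, estimated v = {v_est:.2f}")
```

The book's velocity-tuned filters and stability analysis address precisely the points this toy example glosses over: multiple motions, deformation, and phase singularities.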
Image segmentation is generally the first task in any automated image understanding application, such as autonomous vehicle navigation, object recognition, photointerpretation, etc. All subsequent tasks, such as feature extraction, object detection, and object recognition, rely heavily on the quality of segmentation. One of the fundamental weaknesses of current image segmentation algorithms is their inability to adapt the segmentation process as real-world changes are reflected in the image. Only after numerous modifications to an algorithm's control parameters can any current image segmentation technique be used to handle the diversity of images encountered in real-world applications. Genetic Learning for Adaptive Image Segmentation presents the first closed-loop image segmentation system that incorporates genetic and other algorithms to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions, such as time of day, time of year, weather, etc. Image segmentation performance is evaluated using multiple measures of segmentation quality. These quality measures include global characteristics of the entire image as well as local features of individual object regions in the image. This adaptive image segmentation system provides continuous adaptation to normal environmental variations, exhibits learning capabilities, and provides robust performance when interacting with a dynamic environment. This research is directed towards adapting the performance of a well-known existing segmentation algorithm (Phoenix) across a wide variety of environmental conditions which cause changes in the image characteristics. The book presents a large number of experimental results and compares performance with standard techniques used in computer vision for both consistency and quality of segmentation results. These results demonstrate (a) the ability to adapt the segmentation performance in both indoor and outdoor color imagery, and (b) that learning from experience can be used to improve the segmentation performance over time.
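As a toy illustration of the closed-loop idea (illustrative only; the book adapts many parameters of the Phoenix algorithm using richer quality measures), a genetic search can tune a single segmentation threshold against a quality score:

```python
import random

def quality(threshold, pixels):
    """Hypothetical quality measure: separation between region means."""
    lo = [p for p in pixels if p < threshold]
    hi = [p for p in pixels if p >= threshold]
    if not lo or not hi:
        return 0.0
    return abs(sum(hi) / len(hi) - sum(lo) / len(lo))

def evolve(pixels, pop_size=20, generations=30):
    pop = [random.uniform(0, 255) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: quality(t, pixels), reverse=True)
        parents = pop[: pop_size // 2]                # truncation selection
        children = [(random.choice(parents) + random.choice(parents)) / 2
                    + random.gauss(0, 5)              # crossover + mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda t: quality(t, pixels))

# Toy bimodal "image": two pixel populations around 60 and 180.
pixels = [random.gauss(60, 10) for _ in range(500)] + \
         [random.gauss(180, 10) for _ in range(500)]
print(f"adapted threshold ~ {evolve(pixels):.1f}")    # roughly midway
```

In the full system the fitness function is replaced by the book's global and local segmentation-quality measures, and the loop re-runs whenever image characteristics drift.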
The computational modelling of deformations has been actively studied for the last thirty years. This is mainly due to its large range of applications, which include computer animation, medical imaging, shape estimation, deformation of the face and other parts of the human body, and object tracking. In addition, these advances have been supported by the evolution of computer processing capabilities, enabling ever more sophisticated realism. This book encompasses relevant work by expert researchers in the field of deformation models and their applications. The book is divided into two main parts. The first part presents recent object deformation techniques from the point of view of computer graphics and computer animation. The second part presents six works that study deformations from a computer vision point of view, with a common characteristic: the deformations are applied in real-world applications. The primary audience for this work is researchers from different multidisciplinary fields, such as those related to Computer Graphics, Computer Vision, Computer Imaging, Biomedicine, Bioengineering, Mathematics, Physics, Medical Imaging and Medicine.
Exploration of Visual Data presents the latest research efforts in the area of content-based exploration of image and video data. The main objective is to bridge the semantic gap between high-level concepts in the human mind and low-level features extractable by the machines. The two key issues emphasized are "content-awareness" and "user-in-the-loop." The authors provide a comprehensive review of algorithms for visual feature extraction based on color, texture, shape, and structure, and techniques for incorporating such information to aid browsing, exploration, search, and streaming of image and video data. They also discuss issues related to the mixed use of textual and low-level visual features to facilitate more effective access to multimedia data. To bridge the semantic gap, significant recent research effort has also been devoted to learning during user interactions, also known as "relevance feedback." The difficulty and challenge come from the personalized information need of each user and the small amount of feedback the machine can obtain through real-time user interaction. The authors present and discuss several recently proposed classification and learning techniques that are specifically designed for this problem, with kernel- and boosting-based approaches for nonlinear extensions. Exploration of Visual Data provides state-of-the-art material on the topics of content-based description of visual data, content-based low-bitrate video streaming, and the latest asymmetric and nonlinear relevance feedback algorithms, which to date are unpublished. Exploration of Visual Data will be of interest to researchers, practitioners, and graduate-level students in the areas of multimedia information systems, multimedia databases, computer vision, and machine learning.
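One classic relevance-feedback rule is compact enough to sketch (Rocchio's query refinement, given here as a generic illustration; the book's kernel- and boosting-based methods are considerably more sophisticated):

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio update: move the query vector toward features the user
    marked relevant and away from those marked non-relevant."""
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return q

# Toy 4-bin color-histogram features (hypothetical descriptors).
query       = np.array([0.2, 0.3, 0.3, 0.2])
relevant    = np.array([[0.1, 0.6, 0.2, 0.1], [0.0, 0.7, 0.2, 0.1]])
nonrelevant = np.array([[0.5, 0.1, 0.1, 0.3]])
print(rocchio(query, relevant, nonrelevant))  # shifted toward bin 2
```

The small-sample problem the authors highlight is visible even here: with one or two feedback examples the updated query is dominated by noise, which is what motivates their specially designed classifiers.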
This book covers dynamic simulation of deformable objects, which is one of the most challenging tasks in computer graphics and visualization. It focuses on the simulation of deformable models with anisotropic materials, one of the less common approaches in the existing research. Both physically-based and geometrically-based approaches are examined. The authors start with transversely isotropic materials for the simulation of deformable objects with fibrous structures. Next, they introduce a fiber-field incorporated corotational finite element model (CLFEM) that works directly with a constitutive model of transversely isotropic material. A smooth fiber-field is used to establish the local frames for each element. To introduce deformation simulation for orthotropic materials, an orthotropic deformation controlling frame-field is conceptualized and a frame construction tool is developed for users to define the desired material properties. The orthotropic frame-field is coupled with the CLFEM model to complete an orthotropic deformable model. Finally, the authors present an integrated real-time system for animation of skeletal characters with anisotropic tissues. To solve the problems of volume distortion and high computational costs, a strain-based PBD framework for skeletal animation is explained; natural secondary motion of soft tissues is another benefit. The book is written for those researchers who would like to develop their own algorithms. The key mathematical and computational concepts are presented together with illustrations and working examples. It can also be used as a reference book for graduate students and senior undergraduates in the areas of computer graphics, computer animation, and virtual reality. Academics, researchers, and professionals will find this to be an exceptional resource.
This book provides comprehensive, state-of-the-art coverage of photorefractive organic compounds, a class of materials with the ability to change their index of refraction upon illumination. The change is both dynamic and reversible: dynamic because no external processing is required for the index modulation to be revealed, and reversible because the index change can be modified or suppressed by altering the illumination pattern. These properties make photorefractive materials very attractive candidates for many applications such as image restoration, correlation, beam conjugation, non-destructive testing, data storage, imaging through scattering media, and holographic imaging and display. The field of photorefractive organic materials is also closely related to those of organic photovoltaics and organic light-emitting diodes (OLEDs), which makes new discoveries in one field applicable to the others.
The recent emergence of Local Binary Patterns (LBP) has led to significant progress in applying texture methods to various computer vision problems and applications. The focus of this research has broadened from 2D textures to 3D textures and spatiotemporal (dynamic) textures. Also, where texture was once utilized for applications such as remote sensing, industrial inspection and biomedical image analysis, the introduction of LBP-based approaches has provided outstanding results in problems relating to face and activity analysis, with future scope for face and facial expression recognition, biometrics, visual surveillance and video analysis. "Computer Vision Using Local Binary Patterns" provides a detailed description of the LBP methods and their variants in both spatial and spatiotemporal domains. This comprehensive reference also provides an excellent overview of how texture methods can be utilized for solving different kinds of computer vision and image analysis problems. Source code for the basic LBP algorithms, demonstrations, some databases and a comprehensive LBP bibliography can be found on an accompanying website. Topics include: local binary patterns and their variants in spatial and spatiotemporal domains; texture classification and segmentation; description of interest regions; applications in image retrieval and 3D recognition; recognition and segmentation of dynamic textures; background subtraction; recognition of actions; face analysis using still images and image sequences; visual speech recognition; and LBP in various other applications. Written by pioneers of LBP, this book is an essential resource for researchers, professional engineers and graduate students in computer vision, image analysis and pattern recognition. The book will also be of interest to all those who work with specific applications of machine vision.
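The basic operator itself is compact enough to sketch (a minimal version of the standard 8-neighbour LBP; the book covers rotation-invariant, uniform, multiscale and spatiotemporal variants):

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour LBP: threshold each neighbour against the
    centre pixel and pack the eight results into one byte."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise ring
    h, w = image.shape
    centre = image[1 : h - 1, 1 : w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        codes |= ((neighbour >= centre) << bit).astype(np.uint8)
    return codes

texture = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
histogram = np.bincount(lbp_8(texture).ravel(), minlength=256)
print(histogram[:8])
```

The histogram of these codes over a region serves as the texture descriptor; classification then reduces to comparing histograms.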
This edited volume addresses a subject which has been discussed intensively in the computer vision community for several years. Performance characterization and evaluation of computer vision algorithms are of key importance, particularly with respect to the configuration of reliable and robust computer vision systems as well as the dissemination of reconfigurable systems in novel application domains. Although a plethora of literature on this subject is available for certain areas of computer vision, the research community still faces a lack of well-grounded, generally accepted, and, eventually, standardized methods. The range of fundamental problems encompasses the value of synthetic images in experimental computer vision, the selection of a representative set of real images related to specific domains and tasks, the definition of ground truth given different tasks and applications, the design of experimental test beds, the analysis of algorithms with respect to general characteristics such as complexity, resource consumption, convergence, stability, or range of admissible input data, the definition and analysis of performance measures for classes of algorithms, the role of statistics-based performance measures, the generation of data sheets with performance measures of algorithms supporting the system engineer in his configuration problem, and the validity of model assumptions for specific applications of computer vision.
Bayesian Approach to Image Interpretation will interest anyone working in image interpretation. It is complete in itself and includes background material, which makes it useful for a novice as well as for an expert. It reviews some of the existing probabilistic methods for image interpretation and presents some new results. Additionally, there is an extensive bibliography covering references in varied areas. For a researcher in this field, the material on the synergistic integration of segmentation and interpretation modules and on the Bayesian approach to image interpretation will be beneficial. For a practicing engineer, the procedure for generating the knowledge base, selecting the initial temperature for the simulated annealing algorithm, and some implementation issues will be valuable. New ideas introduced in the book include: a new approach to image interpretation using synergism between the segmentation and interpretation modules; a new segmentation algorithm based on multiresolution analysis; novel use of Bayesian networks (causal networks) for image interpretation; and an emphasis on making the interpretation approach less dependent on the knowledge base, and hence more reliable, by modeling the knowledge base in a probabilistic framework. Useful in both the academic and industrial research worlds, Bayesian Approach to Image Interpretation may also be used as a textbook for a semester course in computer vision or pattern recognition.
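For readers unfamiliar with the optimization machinery, a generic simulated-annealing skeleton looks as follows (a sketch of the standard algorithm, not the book's specific interpretation procedure or temperature-selection rule):

```python
import math, random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995, steps=5000):
    """Accept a worse candidate with probability exp(-delta / T),
    where the temperature T decays geometrically."""
    x, t = x0, t0
    for _ in range(steps):
        candidate = neighbour(x)
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        t *= cooling
    return x

# Toy problem: minimise a multimodal 1-D cost.
cost = lambda x: math.sin(5 * x) + 0.1 * x * x
step = lambda x: x + random.gauss(0, 0.5)
print(f"found minimum near x = {simulated_annealing(cost, step, 5.0):.2f}")
```

The book's contribution lies in how the initial temperature t0 is chosen and in how interpretation labels define the cost; the skeleton above leaves both as toy stand-ins.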
This timely book presents applications in recommender systems, which make recommendations using machine learning algorithms trained on examples of content the user likes or dislikes. Recommender systems built on the assumption that both positive and negative examples are available do not perform well when negative examples are rare. It is exactly this problem that the authors address in the monograph at hand. Specifically, the book's approach is based on one-class classification methodologies that have been appearing in recent machine learning research. The blending of recommender systems and one-class classification provides a new, very fertile field for research, innovation and development, with potential applications to "big data" as well as "sparse data" problems. The book will be useful to researchers, practitioners and graduate students dealing with problems of extensive and complex data. It is intended both for the expert/researcher in the fields of Pattern Recognition, Machine Learning and Recommender Systems, and for the general reader in the fields of Applied and Computer Science who wishes to learn more about the emerging discipline of Recommender Systems and their applications. Finally, the book provides an extended list of bibliographic references covering the relevant literature.
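A minimal sketch of the one-class setting (using scikit-learn's OneClassSVM purely for illustration; the monograph develops its own one-class methodologies):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train on positive examples only: feature vectors of items the
# user liked (hypothetical 2-D descriptors for illustration).
rng = np.random.default_rng(0)
liked = rng.normal(loc=[0.7, 0.3], scale=0.05, size=(50, 2))

model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(liked)

candidates = np.array([[0.72, 0.28],   # close to the user's taste
                       [0.10, 0.90]])  # far from it
print(model.predict(candidates))       # +1 = recommend, -1 = reject
```

No negative examples are needed at training time, which is precisely the regime the authors target.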
This seminal book is a primer on geometry-driven, nonlinear diffusion as a promising new paradigm for vision, with a strong tutorial emphasis. It gives a thorough overview of current linear and nonlinear scale-space theory, presenting many viewpoints such as the variational approach, curve evolution, and nonlinear diffusion equations. The book is meant for computer vision scientists and students with a computer science, mathematics or physics background. Appendices explain the terminology. Many illustrated applications are given, e.g. in medical imaging, vector-valued (or coupled) diffusion, general image enhancement (e.g. edge-preserving noise suppression) and modeling of the human front-end visual system. Some examples are given of implementing the methods in modern computer-algebra systems. From the Preface by Jan J. Koenderink: 'I have read through the manuscript of this book in fascination. Most of the approaches that have been explored to tweak scale-space into practical tools are represented here. It is easy to appreciate how both the purist and the engineer find problems of great interest in this area. The book is certainly unique in its scope and has appeared at a time where this field is booming and newcomers can still potentially leave their imprint on the core corpus of scale-related methods that still slowly emerge. As such the book is a very timely one. It is quite evident that it would be out of the question to compile anything like a textbook at this stage: this book is a snapshot of the field that manages to capture its current state very well and in a most lively fashion. I can heartily recommend its reading to anyone interested in the issues of image structure, scale and resolution.'
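As a concrete taste of nonlinear diffusion (a minimal sketch of the classic Perona-Malik scheme, one member of the family of equations the book surveys): an edge-stopping conductance suppresses smoothing where the gradient is large.

```python
import numpy as np

def perona_malik(img, iterations=50, kappa=20.0, dt=0.2):
    """Minimal Perona-Malik diffusion with conductance
    g(d) = exp(-(d / kappa)^2) on the four-neighbour differences.
    Borders are handled periodically via np.roll (fine for a sketch)."""
    u = img.astype(float)
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(iterations):
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.zeros((64, 64)); noisy[:, 32:] = 100.0
noisy += np.random.normal(0, 10, noisy.shape)
smoothed = perona_malik(noisy)   # noise is smoothed, the step edge survives
```

Here dt respects the usual stability bound for this explicit four-neighbour scheme; the variational and curve-evolution viewpoints in the book arrive at such equations from quite different directions.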
This book presents contributions in the field of computational intelligence for the purpose of image analysis. The chapters discuss how problems such as image segmentation, edge detection, face recognition, feature extraction, and image contrast enhancement can be solved using techniques such as genetic algorithms and particle swarm optimization. The contributions provide a multidimensional approach, and the book will be useful for researchers in computer science, electrical engineering, and information technology.
Bringing together key researchers in disciplines ranging from visualization and image processing to applications in structural mechanics, fluid dynamics, elastography, and numerical mathematics, the workshop that generated this edited volume was the third in the successful Dagstuhl series. Its aim, reflected in the quality and relevance of the papers presented, was to foster collaboration and fresh lines of inquiry in the analysis and visualization of tensor fields, which offer a concise model for numerous physical phenomena. Despite their utility, there remains a dearth of methods for studying all but the simplest ones, a shortage the workshops aim to address. Documenting the latest progress and open research questions in tensor field analysis, the chapters reflect the excitement and inspiration generated by this latest Dagstuhl workshop, held in July 2009. The topics they address range from applications of the analysis of tensor fields to purer research into their mathematical and analytical properties. They show how cooperation and the sharing of ideas and data between those engaged in pure and applied research can open new vistas in the study of tensor fields.
This open access book focuses on processing, modeling, and visualization of anisotropy information, which are often addressed by employing sophisticated mathematical constructs such as tensors and other higher-order descriptors. It also discusses adaptations of such constructs to problems encountered in seemingly dissimilar areas of medical imaging, physical sciences, and engineering. Featuring original research contributions as well as insightful reviews for scientists interested in handling anisotropy information, it covers topics such as pertinent geometric and algebraic properties of tensors and tensor fields, challenges faced in processing and visualizing different types of data, statistical techniques for data processing, and specific applications like mapping white-matter fiber tracts in the brain. The book helps readers grasp the current challenges in the field and provides information on the techniques devised to address them. Further, it facilitates the transfer of knowledge between different disciplines in order to advance the research frontiers in these areas. This multidisciplinary book presents, in part, the outcomes of the seventh in a series of Dagstuhl seminars devoted to visualization and processing of tensor fields and higher-order descriptors, which was held in Dagstuhl, Germany, on October 28-November 2, 2018.
A key element of any modern video codec is the efficient exploitation of temporal redundancy via motion-compensated prediction. In this book, a novel paradigm of representing and employing motion information in a video compression system is described that has several advantages over existing approaches. Traditionally, motion is estimated, modelled, and coded as a vector field at the target frame it predicts. While this "prediction-centric" approach is convenient, the fact that the motion is "attached" to a specific target frame implies that it cannot easily be re-purposed to predict or synthesize other frames, which severely hampers temporal scalability. In light of this, the present book explores the possibility of anchoring motion at reference frames instead. Key to the success of the proposed "reference-based" anchoring schemes is high-quality motion inference, which is enabled by the use of a more "physical" motion representation than the traditionally employed "block" motion fields. The resulting compression system can support computationally efficient, high-quality temporal motion inference, which requires half as many coded motion fields as conventional codecs. Furthermore, "features" beyond compressibility - including high scalability, accessibility, and "intrinsic" framerate upsampling - can be seamlessly supported. These features are becoming ever more relevant as the way video is consumed continues to shift from the traditional broadcast scenario to interactive browsing of video content over heterogeneous networks. This book is of interest to researchers and professionals working in multimedia signal processing, in particular those who are interested in next-generation video compression. Two comprehensive background chapters on scalable video compression and temporal frame interpolation make the book accessible for students and newcomers to the field.
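For newcomers, the conventional "prediction-centric" baseline is easy to sketch: classic block matching anchors a motion vector at each target-frame block, pointing to the best-matching reference block (a minimal illustration, not the book's proposed representation):

```python
import numpy as np
from collections import Counter

def block_match(ref, target, block=8, search=4):
    """Exhaustive block matching: for each target block, find the
    reference offset minimising the sum of absolute differences."""
    h, w = target.shape
    motion = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tgt = target[y : y + block, x : x + block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        sad = np.abs(ref[yy : yy + block, xx : xx + block] - tgt).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            motion[(y, x)] = best_mv
    return motion

ref = np.random.randint(0, 256, (32, 32)).astype(float)
target = np.roll(ref, (2, 1), axis=(0, 1))  # content shifts down-right
print(Counter(block_match(ref, target).values()).most_common(1))
# most blocks report (-2, -1): the matching reference block lies up-left
```

Because these vectors are attached to the target frame, they cannot be reused to predict other frames; anchoring motion at reference frames, as the book proposes, removes exactly this limitation.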
By discussing topics such as shape representations, relaxation theory and optimal transport, trends and synergies of mathematical tools required for optimization of geometry and topology of shapes are explored. Furthermore, applications in science and engineering, including economics, social sciences, biology, physics and image processing are covered. Contents:
Part I: Geometric issues in PDE problems related to the infinity Laplace operator; Solution of free boundary problems in the presence of geometric uncertainties; Distributed and boundary control problems for the semidiscrete Cahn-Hilliard/Navier-Stokes system with nonsmooth Ginzburg-Landau energies; High-order topological expansions for Helmholtz problems in 2D; On a new phase field model for the approximation of interfacial energies of multiphase systems; Optimization of eigenvalues and eigenmodes by using the adjoint method; Discrete varifolds and surface approximation.
Part II: Weak Monge-Ampere solutions of the semi-discrete optimal transportation problem; Optimal transportation theory with repulsive costs; Wardrop equilibria: long-term variant, degenerate anisotropic PDEs and numerical approximations; On the Lagrangian branched transport model and the equivalence with its Eulerian formulation; On some nonlinear evolution systems which are perturbations of Wasserstein gradient flows; Pressureless Euler equations with maximal density constraint: a time-splitting scheme; Convergence of a fully discrete variational scheme for a thin-film equation; Interpretation of finite volume discretization schemes for the Fokker-Planck equation as gradient flows for the discrete Wasserstein distance.
Accurate Visual Metrology from Single and Multiple Uncalibrated Images presents novel techniques for constructing three-dimensional models from bi-dimensional images using virtual reality tools. Antonio Criminisi develops the mathematical theory of computing world measurements from single images, and builds up a hierarchy of novel, flexible techniques to make measurements and reconstruct three-dimensional scenes from uncalibrated images, paying particular attention to the accuracy of the reconstruction.
Flow of ions through voltage-gated channels can be represented theoretically using stochastic differential equations where the gating mechanism is represented by a Markov model. The flow through a channel can be manipulated using various drugs, and the effect of a given drug can be reflected by changing the Markov model. These lecture notes provide an accessible introduction to the mathematical methods needed to deal with these models. They emphasize the use of numerical methods and provide sufficient details for the reader to implement the models and thereby study the effect of various drugs. Examples in the text include stochastic calcium release from internal storage systems in cells, as well as stochastic models of the transmembrane potential. Well-known Markov models are studied and a systematic approach to including the effect of mutations is presented. Lastly, the book shows how to derive the optimal properties of a theoretical model of a drug for a given mutation defined in terms of a Markov model.
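As a taste of the numerical side (a minimal sketch with illustrative rates, not taken from the notes), a two-state open/closed channel can be simulated as a discrete-time Markov chain; a drug or mutation would be modelled by changing the rates:

```python
import numpy as np

k_oc, k_co = 0.1, 0.9      # opening / closing rates per ms (illustrative)
dt, steps = 0.01, 200_000  # time step in ms, number of steps

rng = np.random.default_rng(1)
state, open_steps = 0, 0   # state: 0 = closed, 1 = open
for _ in range(steps):
    if state == 0 and rng.random() < k_oc * dt:
        state = 1
    elif state == 1 and rng.random() < k_co * dt:
        state = 0
    open_steps += state

# Stationary open probability is k_oc / (k_oc + k_co) = 0.1.
print(f"simulated open probability = {open_steps / steps:.3f}")
```

Scaling k_co up, as a channel-blocking drug would, lowers the open probability; the lecture notes develop this idea systematically for realistic multi-state models and derive optimal theoretical drug properties.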
You may like...
Advanced Methods and Deep Learning in… by E.R. Davies, Matthew Turk (Paperback, R2,664)
Functional Brain Mapping: Methods and… by Vassiliy Tsytsarev, Vicky Yamamoto, … (Hardcover, R2,935)
Computer-Aided Oral and Maxillofacial… by Jan Egger, Xiaojun Chen (Paperback, R4,617)
Machine Learning Techniques for Pattern… by Mohit Dua, Ankit Kumar Jain (Hardcover, R8,843)
Handbook of Pediatric Brain Imaging… by Hao Huang, Timothy Roberts (Paperback, R3,658)
Handbook of Medical Image Computing and… by S. Kevin Zhou, Daniel Rueckert, … (Hardcover, R4,746)