This book constitutes the thoroughly refereed post-conference proceedings of the 5th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2012, held in Vilamoura, Portugal, in February 2012. The 26 revised full papers presented together with one invited lecture were carefully reviewed and selected from a total of 522 submissions. The papers cover a wide range of topics and are organized in four general topical sections on biomedical electronics and devices; bioinformatics models, methods and algorithms; bio-inspired systems and signal processing; health informatics.
This book constitutes the refereed proceedings of the Chinese Conference on Image and Graphics Technologies and Applications, IGTA 2013, held in Beijing, China, in April 2013. The 40 papers and posters presented were carefully reviewed and selected from 89 submissions. The papers address issues such as the generation of new ideas, new approaches, new techniques, new applications and new evaluation in the field of image processing and graphics.
Deformable avatars are virtual humans that deform themselves during motion. This implies facial deformations, body deformations at joints, and global deformations. Simulating deformable avatars ensures a more realistic simulation of virtual humans. The research requires models for the capture of geometric and kinematic data, the synthesis of realistic human shape and motion, parametrisation and motion retargeting, and several appropriate deformation models. Once a deformable avatar has been created and animated, the researcher must model high-level behavior and introduce agent technology. The book can be divided into 5 subtopics: 1. Motion capture and 3D reconstruction; 2. Parametric motion and retargeting; 3. Muscles and deformation models; 4. Facial animation and communication; 5. High-level behaviors and autonomous agents. Most of the papers were presented during the IFIP workshop "DEFORM '2000", held at the University of Geneva in December 2000, followed by "AVATARS 2000", held at EPFL, Lausanne. The two workshops were sponsored by the "Troisième Cycle Romand d'Informatique" and allowed participants to discuss the state of research in these important areas. We would like to thank IFIP for its support and Yana Lambert from Kluwer Academic Publishers for her advice. Finally, we are very grateful to Zerrin Celebi, who has prepared the edited version of this book, and to Dr. Laurent Moccozet for his collaboration.
A color time-varying image can be described as a three-dimensional vector (representing the colors in an appropriate color space) defined on a three-dimensional spatiotemporal space. In conventional analog television a one-dimensional signal suitable for transmission over a communication channel is obtained by sampling the scene in the vertical and temporal directions and by frequency-multiplexing the luminance and chrominance information. In digital processing and transmission systems, sampling is applied in the horizontal direction, too, on a signal which has already been scanned in the vertical and temporal directions or directly in three dimensions when using some solid-state sensor. As a consequence, in recent years it has been considered quite natural to assess the potential advantages arising from an entire multidimensional approach to the processing of video signals. As a simple but significant example, a composite color video signal, such as the conventional PAL or NTSC signal, possesses a three-dimensional spectrum which, by using suitable three-dimensional filters, permits horizontal sampling at a rate which is less than that required for correctly sampling the equivalent one-dimensional signal. More recently it has been widely recognized that the improvement of the picture quality in current and advanced television systems requires well-chosen signal processing algorithms which are multidimensional in nature, within the demanding constraints of a real-time implementation.
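The claim that suitable filtering permits sampling below the rate otherwise required rests on the classical prefilter-then-subsample argument. The following is a much-simplified one-dimensional analogue of that idea (the frequencies, cutoff, and decimation factor are illustrative assumptions, not values from the book): naive subsampling lets an out-of-band component alias into the band of interest, while low-pass prefiltering preserves the in-band component.

```python
import numpy as np

# Toy 1-D illustration: prefiltering before subsampling avoids aliasing.
fs = 1000                                  # original sampling rate (Hz), assumed
t = np.arange(0, 1, 1 / fs)
wanted = np.sin(2 * np.pi * 50 * t)        # in-band component (50 Hz)
alias = np.sin(2 * np.pi * 330 * t)        # out-of-band component (330 Hz)
signal = wanted + alias

def lowpass(x, cutoff, fs, taps=101):
    """Windowed-sinc FIR low-pass filter (Hamming window)."""
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * cutoff / fs * n) * np.hamming(taps)
    h /= h.sum()
    return np.convolve(x, h, mode="same")

decim = 4                                  # subsample by 4 -> new Nyquist 125 Hz
naive = signal[::decim]                    # 330 Hz folds to 80 Hz: aliased
filtered = lowpass(signal, 100, fs)[::decim]

# RMS error against the wanted component alone.
err_naive = float(np.sqrt(np.mean((naive - wanted[::decim]) ** 2)))
err_filt = float(np.sqrt(np.mean((filtered - wanted[::decim]) ** 2)))
```

After decimation the unfiltered version carries the folded 330 Hz component at full strength, while the prefiltered version differs from the wanted signal only by filter edge effects.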
The Language of Mathematics was awarded the E.W. Beth Dissertation Prize for outstanding dissertations in the fields of logic, language, and information. It innovatively combines techniques from linguistics, philosophy of mathematics, and computation to give the first wide-ranging analysis of mathematical language. It focuses particularly on a method for determining the complete meaning of mathematical texts and on resolving technical deficiencies in all standard accounts of the foundations of mathematics. "The thesis does far more than is required for a PhD: it is more like a lifetime's work packed into three years, and is a truly exceptional achievement." Timothy Gowers
Video segmentation is the most fundamental process for appropriate indexing and retrieval of video intervals. In general, video streams are composed of shots delimited by physical shot boundaries. Substantial work has been done on how to detect such shot boundaries automatically (Arman et al., 1993) (Zhang et al., 1993) (Zhang et al., 1995) (Kobla et al., 1997). Through the integration of technologies such as image processing, speech/character recognition and natural language understanding, keywords can be extracted and associated with these shots for indexing (Wactlar et al., 1996). A single shot, however, rarely carries enough information to be meaningful by itself. Usually, it is a semantically meaningful interval that most users are interested in retrieving. Generally, such meaningful intervals span several consecutive shots. There hardly exists any efficient and reliable technique, either automatic or manual, to identify all semantically meaningful intervals within a video stream. Works by (Smith and Davenport, 1992) (Oomoto and Tanaka, 1993) (Weiss et al., 1995) (Hjelsvold et al., 1996) suggest manually defining all such intervals in the database in advance. However, even an hour-long video may have an indefinite number of meaningful intervals. Moreover, video data is multi-interpretative. Therefore, given a query, what is a meaningful interval to an annotator may not be meaningful to the user who issues the query. In practice, manual indexing of meaningful intervals is labour intensive and inadequate.
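The automatic shot-boundary detection cited above is commonly based on frame-to-frame histogram differences. The sketch below is a minimal baseline of that kind, not the method of any of the cited papers; the bin count and threshold are illustrative assumptions.

```python
import numpy as np

def detect_shot_boundaries(frames, bins=16, threshold=0.5):
    """Return indices where a new shot is assumed to start.

    frames: iterable of 2-D grayscale arrays with values in [0, 255].
    A boundary is declared when the L1 distance between successive
    normalized grey-level histograms exceeds `threshold`.
    """
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
        hist = hist / hist.sum()
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            boundaries.append(i)
        prev_hist = hist
    return boundaries

# Synthetic clip: 10 dark frames followed by 10 bright frames -> one cut.
rng = np.random.default_rng(0)
dark = [rng.uniform(0, 60, (32, 32)) for _ in range(10)]
bright = [rng.uniform(180, 255, (32, 32)) for _ in range(10)]
cuts = detect_shot_boundaries(dark + bright)
```

On the synthetic clip the detector reports a single boundary at the first bright frame; real detectors add temporal smoothing and adaptive thresholds to handle gradual transitions.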
This book presents novel graph-theoretic methods for complex computer vision and pattern recognition tasks. It presents the application of graph theory to low-level processing of digital images, presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, and provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks.
Optical character recognition (OCR) is the most prominent and successful example of pattern recognition to date. There are thousands of research papers and dozens of OCR products. Optical Character Recognition: An Illustrated Guide to the Frontier offers a perspective on the performance of current OCR systems by illustrating and explaining actual OCR errors. The pictures and analysis provide insight into the strengths and weaknesses of current OCR systems, and a road map to future progress. Optical Character Recognition: An Illustrated Guide to the Frontier will pique the interest of users and developers of OCR products and desktop scanners, as well as teachers and students of pattern recognition, artificial intelligence, and information retrieval. The first chapter compares the character recognition abilities of humans and computers. The next four chapters present 280 illustrated examples of recognition errors, in a taxonomy consisting of Imaging Defects, Similar Symbols, Punctuation, and Typography. These examples were drawn from large-scale tests conducted by the authors. The final chapter discusses possible approaches for improving the accuracy of today's systems, and is followed by an annotated bibliography. Optical Character Recognition: An Illustrated Guide to the Frontier is suitable as a secondary text for a graduate level course on pattern recognition, artificial intelligence, and information retrieval, and as a reference for researchers and practitioners in industry.
This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions are infeasible. Evolutionary algorithms represent a powerful and easily understood means of approximating the optimum value in a variety of settings. The proposed text seeks to guide readers through the crucial issues of optimization problems in statistical settings and the implementation of tailored methods (including both stand-alone evolutionary algorithms and hybrid crosses of these procedures with standard statistical algorithms like Metropolis-Hastings) in a variety of applications. This book would serve as an excellent reference work for statistical researchers at an advanced graduate level or beyond, particularly those with a strong background in computer science.
Cross disciplinary biometric systems help boost the performance of the conventional systems. Not only is the recognition accuracy significantly improved, but also the robustness of the systems is greatly enhanced in the challenging environments, such as varying illumination conditions. By leveraging the cross disciplinary technologies, face recognition systems, fingerprint recognition systems, iris recognition systems, as well as image search systems all benefit in terms of recognition performance. Take face recognition for an example, which is not only the most natural way human beings recognize the identity of each other, but also the least privacy-intrusive means because people show their face publicly every day. Face recognition systems display superb performance when they capitalize on the innovative ideas across color science, mathematics, and computer science (e.g., pattern recognition, machine learning, and image processing). The novel ideas lead to the development of new color models and effective color features in color science; innovative features from wavelets and statistics, and new kernel methods and novel kernel models in mathematics; new discriminant analysis frameworks, novel similarity measures, and new image analysis methods, such as fusing multiple image features from frequency domain, spatial domain, and color domain in computer science; as well as system design, new strategies for system integration, and different fusion strategies, such as the feature level fusion, decision level fusion, and new fusion strategies with novel similarity measures.
Ricci Flow for Shape Analysis and Surface Registration introduces the beautiful and profound Ricci flow theory in a discrete setting. By using basic tools in linear algebra and multivariate calculus, readers can deduce all the major theorems in surface Ricci flow by themselves. The authors adapt the Ricci flow theory to practical computational algorithms, apply Ricci flow for shape analysis and surface registration, and demonstrate the power of Ricci flow in many applications in medical imaging, computer graphics, computer vision and wireless sensor network. Due to minimal pre-requisites, this book is accessible to engineers and medical experts, including educators, researchers, students and industry engineers who have an interest in solving real problems related to shape analysis and surface registration.
Yulia Levakhina gives an introduction to the major challenges of image reconstruction in Digital Tomosynthesis (DT), particularly to the connection of the reconstruction problem with the incompleteness of the DT dataset. The author discusses the factors which cause the formation of limited angle artifacts and proposes how to account for them in order to improve image quality and axial resolution of modern DT. The addressed methods include a weighted non-linear back projection scheme for algebraic reconstruction and a novel dual-axis acquisition geometry. All discussed algorithms and methods are supplemented by detailed illustrations, hints for practical implementation, pseudo-code, simulation results and real patient case examples.
This book constitutes the proceedings of the 14th Pacific-Rim Conference on Multimedia, PCM 2013, held in Nanjing, China, in December 2013. The 30 revised full papers and 27 poster papers presented were carefully reviewed and selected from 153 submissions. The papers cover a wide range of topics in the area of multimedia content analysis, multimedia signal processing and communications and multimedia applications and services.
Image segmentation is generally the first task in any automated image understanding application, such as autonomous vehicle navigation, object recognition, photointerpretation, etc. All subsequent tasks, such as feature extraction, object detection, and object recognition, rely heavily on the quality of segmentation. One of the fundamental weaknesses of current image segmentation algorithms is their inability to adapt the segmentation process as real-world changes are reflected in the image. Only after numerous modifications to an algorithm's control parameters can any current image segmentation technique be used to handle the diversity of images encountered in real-world applications. Genetic Learning for Adaptive Image Segmentation presents the first closed-loop image segmentation system that incorporates genetic and other algorithms to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions, such as time of day, time of year, weather, etc. Image segmentation performance is evaluated using multiple measures of segmentation quality. These quality measures include global characteristics of the entire image as well as local features of individual object regions in the image. This adaptive image segmentation system provides continuous adaptation to normal environmental variations, exhibits learning capabilities, and provides robust performance when interacting with a dynamic environment. This research is directed towards adapting the performance of a well known existing segmentation algorithm (Phoenix) across a wide variety of environmental conditions which cause changes in the image characteristics. The book presents a large number of experimental results and compares performance with standard techniques used in computer vision for both consistency and quality of segmentation results. 
These results demonstrate, (a) the ability to adapt the segmentation performance in both indoor and outdoor color imagery, and (b) that learning from experience can be used to improve the segmentation performance over time.
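The closed-loop idea described above can be sketched in miniature: a genetic algorithm searches the space of segmentation control parameters, scored by a segmentation-quality measure. In this toy version the single parameter is a grey-level threshold and the fitness is Otsu's between-class variance; both are illustrative stand-ins, not Phoenix's parameters or the quality measures used in the book.

```python
import random

random.seed(1)

# Synthetic bimodal "image": dark object pixels around 60, bright around 180.
image = [random.gauss(60, 10) for _ in range(200)] + \
        [random.gauss(180, 10) for _ in range(200)]

def fitness(threshold):
    """Between-class variance (Otsu's criterion) as a quality proxy."""
    low = [p for p in image if p < threshold]
    high = [p for p in image if p >= threshold]
    if not low or not high:
        return 0.0
    w0, w1 = len(low) / len(image), len(high) / len(image)
    m0 = sum(low) / len(low)
    m1 = sum(high) / len(high)
    return w0 * w1 * (m0 - m1) ** 2

# Minimal generational GA: keep the top half, mutate it to refill the pool.
population = [random.uniform(0, 255) for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [min(255, max(0, p + random.gauss(0, 10))) for p in parents]
    population = parents + children

best = max(population, key=fitness)
```

The evolved threshold lands between the two grey-level modes; the book's system applies the same evaluate-and-adapt loop to a full parameter vector and to multiple global and local quality measures.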
In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through a series of successful events. The goal in 2014 is again to present current research results and to deepen the dialogue between scientists, industry and users. The contributions in this volume - some of them in English - cover all areas of medical image processing, in particular imaging and acquisition, molecular imaging, visualization and animation, image segmentation and fusion, anatomical atlases, time-series analysis, biomechanical modelling, clinical application of computer-assisted systems, validation and quality assurance, and much more.
This book contains extended versions of papers presented at the international conference VIPIMAGE 2009 - ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing, which was held at Faculdade de Engenharia da Universidade do Porto, Portugal, from 14th to 16th of October 2009. This conference was the second ECCOMAS thematic conference on computational vision and medical image processing. It covered topics related to image processing and analysis, medical imaging and computational modelling and simulation, considering their multidisciplinary nature. The book collects the state-of-the-art research, methods and new trends on the subject of computational vision and medical image processing, contributing to the development of these knowledge areas.
At the beginning of the 1990s research started in how to combine soft computing with reconfigurable hardware in a quite unique way. One of the methods that was developed has been called evolvable hardware. Thanks to evolutionary algorithms researchers have started to evolve electronic circuits routinely. A number of interesting circuits - with features unreachable by means of conventional techniques - have been developed. Evolvable hardware is quite popular right now; more than fifty research groups are spread out over the world. Evolvable hardware has become a part of the curriculum at some universities. Evolvable hardware is being commercialized and there are specialized conferences devoted to evolvable hardware. On the other hand, surprisingly, we can feel the lack of a theoretical background and consistent design methodology in the area. Furthermore, it is quite difficult to implement really innovative and practically successful evolvable systems using contemporary digital reconfigurable technology.
Biological visual systems employ massively parallel processing to perform real-world visual tasks in real time. A key to this remarkable performance seems to be that biological systems construct representations of their visual image data at multiple scales. A Pyramid Framework for Early Vision describes a multiscale, or 'pyramid', approach to vision, including its theoretical foundations, a set of pyramid-based modules for image processing, object detection, texture discrimination, contour detection and processing, feature detection and description, and motion detection and tracking. It also shows how these modules can be implemented very efficiently on hypercube-connected processor networks. A Pyramid Framework for Early Vision is intended for both students of vision and vision system designers; it provides a general approach to vision systems design as well as a set of robust, efficient vision modules.
Mathematical Nonlinear Image Processing deals with a fast growing research area. The development of the subject springs from two factors: (1) the great expansion of nonlinear methods applied to problems in imaging and vision, and (2) the degree to which nonlinear approaches are both using and fostering new developments in diverse areas of mathematics. Mathematical Nonlinear Image Processing will be of interest to people working in the areas of applied mathematics as well as researchers in computer vision. Mathematical Nonlinear Image Processing is an edited volume of original research. It has also been published as a special issue of the Journal of Mathematical Imaging and Vision. (Volume 2, Issue 2/3).
With the ubiquity of new information technology and media, more effective and friendly methods for human computer interaction (HCI) are being developed which do not rely on traditional devices such as keyboards, mice and displays. The first step for any intelligent HCI system is face detection, and one of the most friendly HCI modalities is hand gesture. Face Detection and Gesture Recognition for Human-Computer Interaction introduces the frontiers of vision-based interfaces for intelligent human computer interaction with focus on two main issues: face detection and gesture recognition. The first part of the book reviews and discusses existing face detection methods, followed by a discussion on future research. Performance evaluation issues on the face detection methods are also addressed. The second part discusses an interesting hand gesture recognition method based on a generic motion segmentation algorithm. The system has been tested with gestures from American Sign Language with promising results. We conclude this book with comments on future work in face detection and hand gesture recognition. Face Detection and Gesture Recognition for Human-Computer Interaction will interest those working in vision-based interfaces for intelligent human computer interaction. It also contains a comprehensive survey on existing face detection methods, which will serve as the entry point for new researchers embarking on such topics. Furthermore, this book also covers in-depth discussion on motion segmentation algorithms and applications, which will benefit more seasoned graduate students or researchers interested in motion pattern recognition.
Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene, by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition away from understanding single images to analyzing image sequences, or video understanding. Video understanding deals with understanding of video sequences, e.g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic looking images and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting is derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis.
This book constitutes refereed proceedings of the COST 2102 International Training School on Cognitive Behavioural Systems held in Dresden, Germany, in February 2011. The 39 revised full papers presented were carefully reviewed and selected from various submissions. The volume presents new and original research results in the field of human-machine interaction inspired by cognitive behavioural human-human interaction features. The themes covered are on cognitive and computational social information processing, emotional and social believable Human-Computer Interaction (HCI) systems, behavioural and contextual analysis of interaction, embodiment, perception, linguistics, semantics and sentiment analysis in dialogues and interactions, algorithmic and computational issues for the automatic recognition and synthesis of emotional states.
Shape Analysis and Retrieval of Multimedia Objects provides a comprehensive survey of the most advanced and powerful shape retrieval techniques used in practice today. In addition, this monograph addresses key methodological issues for evaluation of the shape retrieval methods. Shape Analysis and Retrieval of Multimedia Objects is designed to meet the needs of practitioners and researchers in industry, and graduate-level students in Computer Science.
Declarative query interfaces to Sensor Networks (SN) have become a commodity. These interfaces allow access to SN deployed for collecting data using relational queries. However, SN are not confined to data collection, but may track object movement, e.g., in wildlife observation or traffic monitoring. While relational approaches are well suited for data collection, research on Moving Object Databases (MOD) has shown that relational operators are unsuitable for expressing information needs on object movement, i.e., spatio-temporal queries. "Querying Moving Objects Detected by Sensor Networks" studies declarative access to SN that track moving objects. The properties of SN prevent a straightforward application of MOD, e.g., node failures, and limited detection ranges and accuracy which vary over time. Furthermore, the point sets used to model MOD entities like regions assume the availability of very accurate knowledge regarding the spatial extent of these entities; assuming such knowledge is unrealistic for most SN. This book is the first that defines a complete set of spatio-temporal operators for SN while taking into account their properties. Based on these operators, we systematically investigate how to derive query results from object detections by SN. Finally, it is shown how to process spatio-temporal queries in SN efficiently, i.e., reducing the communication between nodes. The evaluation shows that the measures reduce communication by 45%-89%.
Data Management and Internet Computing for Image/Pattern Analysis focuses on the data management issues and Internet computing aspect of image processing and pattern recognition research. The book presents a comprehensive overview of the state of the art, providing detailed case studies that emphasize how image and pattern (IAP) data are distributed and exchanged on sequential and parallel machines, how the data communication patterns in low- and higher-level IAP computing differ from general numerical computation, what problems they cause and what opportunities they provide. The studies also describe how the images and matrices should be stored, accessed and distributed on different types of machines connected to the Internet, and how Internet resource sharing and data transmission change traditional IAP computing. Data Management and Internet Computing for Image/Pattern Analysis is divided into three parts: the first part describes several software approaches to IAP computing, citing several representative data communication patterns and related algorithms; the second part introduces hardware and Internet resource sharing, in which a wide range of computer architectures are described and memory management issues are discussed; and the third part presents applications ranging from image coding and restoration to progressive transmission. Data Management and Internet Computing for Image/Pattern Analysis is an excellent reference for researchers and may be used as a text for advanced courses in image processing and pattern recognition.