This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions are infeasible. Evolutionary algorithms represent a powerful and easily understood means of approximating the optimum value in a variety of settings. The proposed text seeks to guide readers through the crucial issues of optimization problems in statistical settings and the implementation of tailored methods (including both stand-alone evolutionary algorithms and hybrid crosses of these procedures with standard statistical algorithms like Metropolis-Hastings) in a variety of applications. This book would serve as an excellent reference work for statistical researchers at an advanced graduate level or beyond, particularly those with a strong background in computer science.
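For readers unfamiliar with the statistical side of such hybrids, the sketch below shows a bare-bones random-walk Metropolis-Hastings sampler of the kind the description alludes to. It is an illustrative assumption of this listing, not material from the book; the target density, proposal width and step count are arbitrary.

```python
# Minimal random-walk Metropolis-Hastings sketch (illustrative only, not from the book).
# The target density, proposal width and step count are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)

def metropolis_hastings(log_target, x0, n_steps=5000, step=0.5):
    """Propose x' ~ N(x, step); accept with probability min(1, p(x')/p(x))."""
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.normal(0.0, step)
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

# Example: sample from a standard normal target (log density up to a constant).
samples = metropolis_hastings(lambda v: -0.5 * v**2, x0=0.0)
print("sample mean ~", samples.mean())
```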
In the areas of image processing and computer vision, there is a particular need for software that can, given an unfocused or motion-blurred image, infer the three-dimensional shape of a scene. This book describes the analytical processes that go into designing such software, delineates the options open to programmers, and presents original algorithms. Written for readers with interests in image processing and computer vision and with backgrounds in engineering, science or mathematics, this highly practical text/reference is accessible to advanced students or those with a degree that includes basic linear algebra and calculus courses.
Deformable avatars are virtual humans that deform themselves during motion. This implies facial deformations, body deformations at joints, and global deformations. Simulating deformable avatars ensures a more realistic simulation of virtual humans. The research requires models for the capture of geometric and kinematic data, the synthesis of realistic human shape and motion, parametrisation and motion retargeting, and several appropriate deformation models. Once a deformable avatar has been created and animated, the researcher must model high-level behavior and introduce agent technology. The book can be divided into five subtopics: 1. Motion capture and 3D reconstruction; 2. Parametric motion and retargeting; 3. Muscles and deformation models; 4. Facial animation and communication; 5. High-level behaviors and autonomous agents. Most of the papers were presented during the IFIP workshop "DEFORM '2000" that was held at the University of Geneva in December 2000, followed by "AVATARS 2000" held at EPFL, Lausanne. The two workshops were sponsored by the "Troisième Cycle Romand d'Informatique" and allowed participants to discuss the state of research in these important areas. We would like to thank IFIP for its support and Yana Lambert from Kluwer Academic Publishers for her advice. Finally, we are very grateful to Zerrin Celebi, who has prepared the edited version of this book, and Dr. Laurent Moccozet for his collaboration.
The two-volume set LNAI 8467 and LNAI 8468 constitutes the refereed proceedings of the 13th International Conference on Artificial Intelligence and Soft Computing, ICAISC 2014, held in Zakopane, Poland, in June 2014. The 139 revised full papers presented in the volumes were carefully reviewed and selected from 331 submissions. The 69 papers included in the first volume are focused on the following topical sections: Neural Networks and Their Applications, Fuzzy Systems and Their Applications, Evolutionary Algorithms and Their Applications, Classification and Estimation, Computer Vision, Image and Speech Analysis and Special Session 3: Intelligent Methods in Databases. The 71 papers in the second volume are organized in the following subjects: Data Mining, Bioinformatics, Biometrics and Medical Applications, Agent Systems, Robotics and Control, Artificial Intelligence in Modeling and Simulation, Various Problems of Artificial Intelligence, Special Session 2: Machine Learning for Visual Information Analysis and Security, Special Session 1: Applications and Properties of Fuzzy Reasoning and Calculus and Clustering.
A color time-varying image can be described as a three-dimensional vector (representing the colors in an appropriate color space) defined on a three-dimensional spatiotemporal space. In conventional analog television a one-dimensional signal suitable for transmission over a communication channel is obtained by sampling the scene in the vertical and temporal directions and by frequency-multiplexing the luminance and chrominance information. In digital processing and transmission systems, sampling is applied in the horizontal direction, too, on a signal which has already been scanned in the vertical and temporal directions, or directly in three dimensions when using some solid-state sensor. As a consequence, in recent years it has been considered quite natural to assess the potential advantages arising from an entirely multidimensional approach to the processing of video signals. As a simple but significant example, a composite color video signal, such as the conventional PAL or NTSC signal, possesses a three-dimensional spectrum which, by using suitable three-dimensional filters, permits horizontal sampling at a rate which is less than that required for correctly sampling the equivalent one-dimensional signal. More recently it has been widely recognized that the improvement of the picture quality in current and advanced television systems requires well-chosen signal processing algorithms which are multidimensional in nature within the demanding constraints of a real-time implementation.
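As a hedged illustration of the three-dimensional view described above, the following NumPy sketch builds a small synthetic (y, x, t) video volume and inspects its 3D spectrum. The signal, dimensions and frequencies are invented for this listing and are not taken from the book.

```python
# Minimal sketch: inspect the 3D (y, x, t) spectrum of a synthetic video volume.
# NumPy only; the signal and sampling grid are illustrative assumptions.
import numpy as np

H, W, T = 64, 64, 32                      # vertical, horizontal, temporal samples
y, x, t = np.meshgrid(np.arange(H), np.arange(W), np.arange(T), indexing="ij")

# A drifting sinusoidal luminance pattern with spatial frequencies (fy, fx)
# and temporal frequency ft (all in cycles per sample).
fy, fx, ft = 4 / H, 6 / W, 2 / T
video = np.cos(2 * np.pi * (fy * y + fx * x + ft * t))

# The energy concentrates in a small part of 3D frequency space, which is what
# makes sub-Nyquist horizontal sampling with 3D filtering plausible.
spectrum = np.fft.fftshift(np.abs(np.fft.fftn(video)))
peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print("spectral peak at (fy, fx, ft) bin:", peak)
```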
The Language of Mathematics was awarded the E.W. Beth Dissertation Prize for outstanding dissertations in the fields of logic, language, and information. It innovatively combines techniques from linguistics, philosophy of mathematics, and computation to give the first wide-ranging analysis of mathematical language. It focuses particularly on a method for determining the complete meaning of mathematical texts and on resolving technical deficiencies in all standard accounts of the foundations of mathematics. "The thesis does far more than is required for a PhD: it is more like a lifetime's work packed into three years, and is a truly exceptional achievement." Timothy Gowers
Video segmentation is the most fundamental process for appropriate indexing and retrieval of video intervals. In general, video streams are composed of shots delimited by physical shot boundaries. Substantial work has been done on how to detect such shot boundaries automatically (Arman et al., 1993) (Zhang et al., 1993) (Zhang et al., 1995) (Kobla et al., 1997). Through the integration of technologies such as image processing, speech/character recognition and natural language understanding, keywords can be extracted and associated with these shots for indexing (Wactlar et al., 1996). A single shot, however, rarely carries enough information to be meaningful by itself. Usually, it is a semantically meaningful interval that most users are interested in retrieving. Generally, such meaningful intervals span several consecutive shots. There hardly exists any efficient and reliable technique, either automatic or manual, to identify all semantically meaningful intervals within a video stream. Works by (Smith and Davenport, 1992) (Oomoto and Tanaka, 1993) (Weiss et al., 1995) (Hjelsvold et al., 1996) suggest manually defining all such intervals in the database in advance. However, even an hour-long video may have an indefinite number of meaningful intervals. Moreover, video data is multi-interpretative. Therefore, given a query, what is a meaningful interval to an annotator may not be meaningful to the user who issues the query. In practice, manual indexing of meaningful intervals is labour intensive and inadequate.
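To make the shot-boundary step concrete, here is a minimal histogram-difference detector, a common baseline rather than any of the specific methods cited above. It assumes the opencv-python package, and the threshold value is an arbitrary illustration.

```python
# Minimal histogram-difference shot-boundary detector (a common baseline,
# not one of the cited methods). Requires opencv-python; threshold is illustrative.
import cv2

def detect_shot_boundaries(path, threshold=0.4):
    """Return frame indices where the colour histogram changes abruptly."""
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Bhattacharyya distance: near 0 for similar frames, near 1 at cuts.
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```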
This book presents novel graph-theoretic methods for complex computer vision and pattern recognition tasks. It presents the application of graph theory to low-level processing of digital images, presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, and provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks.
This book constitutes the refereed proceedings of the International Conference, VISIGRAPP 2011, the Joint Conference on Computer Vision, Theory and Applications (VISAPP), on Imaging Theory and Applications (IMAGAPP), on Computer Graphics Theory and Applications (GRAPP), and on Information Visualization Theory and Applications (IVAPP), held in Vilamoura, Portugal, in March 2011. The 15 revised full papers presented together with one invited paper were carefully reviewed and selected. The papers are organized in topical sections on computer graphics theory and applications; imaging theory and applications; information visualization theory and applications; and computer vision theory and applications.
Ricci Flow for Shape Analysis and Surface Registration introduces the beautiful and profound Ricci flow theory in a discrete setting. By using basic tools in linear algebra and multivariate calculus, readers can deduce all the major theorems in surface Ricci flow by themselves. The authors adapt the Ricci flow theory to practical computational algorithms, apply Ricci flow for shape analysis and surface registration, and demonstrate the power of Ricci flow in many applications in medical imaging, computer graphics, computer vision and wireless sensor network. Due to minimal pre-requisites, this book is accessible to engineers and medical experts, including educators, researchers, students and industry engineers who have an interest in solving real problems related to shape analysis and surface registration.
Image segmentation is generally the first task in any automated image understanding application, such as autonomous vehicle navigation, object recognition, photointerpretation, etc. All subsequent tasks, such as feature extraction, object detection, and object recognition, rely heavily on the quality of segmentation. One of the fundamental weaknesses of current image segmentation algorithms is their inability to adapt the segmentation process as real-world changes are reflected in the image. Only after numerous modifications to an algorithm's control parameters can any current image segmentation technique be used to handle the diversity of images encountered in real-world applications. Genetic Learning for Adaptive Image Segmentation presents the first closed-loop image segmentation system that incorporates genetic and other algorithms to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions, such as time of day, time of year, weather, etc. Image segmentation performance is evaluated using multiple measures of segmentation quality. These quality measures include global characteristics of the entire image as well as local features of individual object regions in the image. This adaptive image segmentation system provides continuous adaptation to normal environmental variations, exhibits learning capabilities, and provides robust performance when interacting with a dynamic environment. This research is directed towards adapting the performance of a well-known existing segmentation algorithm (Phoenix) across a wide variety of environmental conditions which cause changes in the image characteristics. The book presents a large number of experimental results and compares performance with standard techniques used in computer vision for both consistency and quality of segmentation results. These results demonstrate (a) the ability to adapt the segmentation performance in both indoor and outdoor color imagery, and (b) that learning from experience can be used to improve the segmentation performance over time.
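The sketch below is a toy illustration of the general idea of evolving segmentation parameters with a genetic-style loop. It is not the Phoenix-based system described in the book; the fitness measure, population size and mutation scale are invented for illustration.

```python
# Toy genetic-style tuning of a single segmentation parameter (a threshold).
# Not the Phoenix-based system above; fitness and GA settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fitness(threshold, image):
    """Reward thresholds that split the image into two well-contrasted regions."""
    fg, bg = image[image >= threshold], image[image < threshold]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    return abs(fg.mean() - bg.mean()) / (fg.std() + bg.std() + 1e-6)

def evolve_threshold(image, pop_size=20, generations=40):
    pop = rng.uniform(image.min(), image.max(), pop_size)
    for _ in range(generations):
        scores = np.array([fitness(t, image) for t in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # selection
        children = rng.choice(parents, pop_size - parents.size)  # cloning
        children = children + rng.normal(0, 2.0, children.size)  # mutation
        pop = np.concatenate([parents, children])
    return pop[np.argmax([fitness(t, image) for t in pop])]

image = rng.normal(60, 10, (64, 64))
image[20:44, 20:44] += 80                 # bright object on a darker background
print("evolved threshold:", evolve_threshold(image))
```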
Yulia Levakhina gives an introduction to the major challenges of image reconstruction in Digital Tomosynthesis (DT), particularly to the connection of the reconstruction problem with the incompleteness of the DT dataset. The author discusses the factors which cause the formation of limited angle artifacts and proposes how to account for them in order to improve image quality and axial resolution of modern DT. The addressed methods include a weighted non-linear back projection scheme for algebraic reconstruction and novel dual-axis acquisition geometry. All discussed algorithms and methods are supplemented by detailed illustrations, hints for practical implementation, pseudo-code, simulation results and real patient case examples.
Optical character recognition (OCR) is the most prominent and successful example of pattern recognition to date. There are thousands of research papers and dozens of OCR products. Optical Character Recognition: An Illustrated Guide to the Frontier offers a perspective on the performance of current OCR systems by illustrating and explaining actual OCR errors. The pictures and analysis provide insight into the strengths and weaknesses of current OCR systems, and a road map to future progress. Optical Character Recognition: An Illustrated Guide to the Frontier will pique the interest of users and developers of OCR products and desktop scanners, as well as teachers and students of pattern recognition, artificial intelligence, and information retrieval. The first chapter compares the character recognition abilities of humans and computers. The next four chapters present 280 illustrated examples of recognition errors, in a taxonomy consisting of Imaging Defects, Similar Symbols, Punctuation, and Typography. These examples were drawn from large-scale tests conducted by the authors. The final chapter discusses possible approaches for improving the accuracy of today's systems, and is followed by an annotated bibliography. Optical Character Recognition: An Illustrated Guide to the Frontier is suitable as a secondary text for a graduate level course on pattern recognition, artificial intelligence, and information retrieval, and as a reference for researchers and practitioners in industry.
At the beginning of the 1990s research started in how to combine soft computing with reconfigurable hardware in a quite unique way. One of the methods that was developed has been called evolvable hardware. Thanks to evolutionary algorithms researchers have started to evolve electronic circuits routinely. A number of interesting circuits - with features unreachable by means of conventional techniques - have been developed. Evolvable hardware is quite popular right now; more than fifty research groups are spread out over the world. Evolvable hardware has become a part of the curriculum at some universities. Evolvable hardware is being commercialized and there are specialized conferences devoted to evolvable hardware. On the other hand, surprisingly, we can feel the lack of a theoretical background and consistent design methodology in the area. Furthermore, it is quite difficult to implement really innovative and practically successful evolvable systems using contemporary digital reconfigurable technology.
This book contains extended versions of papers presented at the International Conference VIPIMAGE 2009 - ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing, which was held at Faculdade de Engenharia da Universidade do Porto, Portugal, from 14th to 16th of October 2009. This conference was the second ECCOMAS thematic conference on computational vision and medical image processing. It covered topics related to image processing and analysis, medical imaging and computational modelling and simulation, considering their multidisciplinary nature. The book collects the state-of-the-art research, methods and new trends on the subject of computational vision and medical image processing, contributing to the development of these knowledge areas.
Mathematical Nonlinear Image Processing deals with a fast growing research area. The development of the subject springs from two factors: (1) the great expansion of nonlinear methods applied to problems in imaging and vision, and (2) the degree to which nonlinear approaches are both using and fostering new developments in diverse areas of mathematics. Mathematical Nonlinear Image Processing will be of interest to people working in the areas of applied mathematics as well as researchers in computer vision. Mathematical Nonlinear Image Processing is an edited volume of original research. It has also been published as a special issue of the Journal of Mathematical Imaging and Vision. (Volume 2, Issue 2/3).
With the ubiquity of new information technology and media, more effective and friendly methods for human-computer interaction (HCI) are being developed which do not rely on traditional devices such as keyboards, mice and displays. The first step for any intelligent HCI system is face detection, and one of the friendliest HCI modalities is hand gesture. Face Detection and Gesture Recognition for Human-Computer Interaction introduces the frontiers of vision-based interfaces for intelligent human-computer interaction with a focus on two main issues: face detection and gesture recognition. The first part of the book reviews and discusses existing face detection methods, followed by a discussion on future research. Performance evaluation issues on the face detection methods are also addressed. The second part discusses an interesting hand gesture recognition method based on a generic motion segmentation algorithm. The system has been tested with gestures from American Sign Language with promising results. We conclude this book with comments on future work in face detection and hand gesture recognition. Face Detection and Gesture Recognition for Human-Computer Interaction will interest those working in vision-based interfaces for intelligent human-computer interaction. It also contains a comprehensive survey of existing face detection methods, which will serve as the entry point for new researchers embarking on such topics. Furthermore, this book also covers in-depth discussion of motion segmentation algorithms and applications, which will benefit more seasoned graduate students or researchers interested in motion pattern recognition.
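As a point of reference for the face detection step, the snippet below runs the classical Haar-cascade detector bundled with OpenCV. This is a widely used baseline, not one of the methods surveyed in the book.

```python
# Minimal face-detection sketch using the Haar cascade shipped with OpenCV
# (a classical baseline, not a method from the book). Requires opencv-python.
import cv2

def detect_faces(image_path):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # Returns an array of (x, y, w, h) bounding boxes, one per detected face.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```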
Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene, by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition away from understanding single images to analyzing image sequences, or video understanding. Video understanding deals with understanding of video sequences, e.g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic-looking images and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting is derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis.
Declarative query interfaces to Sensor Networks (SN) have become a commodity. These interfaces allow access to SN deployed for collecting data using relational queries. However, SN are not confined to data collection, but may track object movement, e.g., wildlife observation or traffic monitoring. While relational approaches are well suited for data collection, research on Moving Object Databases (MOD) has shown that relational operators are unsuitable to express information needs on object movement, i.e., spatio-temporal queries. "Querying Moving Objects Detected by Sensor Networks" studies declarative access to SN that track moving objects. The properties of SN prevent a straightforward application of MOD, e.g., node failures, limited detection ranges and accuracy which vary over time, etc. Furthermore, the point sets used to model MOD entities like regions assume the availability of very accurate knowledge regarding the spatial extent of these entities, and such knowledge is unrealistic for most SN. This book is the first that defines a complete set of spatio-temporal operators for SN while taking into account their properties. Based on these operators, we systematically investigate how to derive query results from object detections by SN. Finally, we show how to process spatio-temporal queries efficiently in SN, i.e., by reducing the communication between nodes. The evaluation shows that these measures reduce communication by 45%-89%.
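To illustrate what a spatio-temporal operator over sensor detections might look like in code, here is a toy "inside a region during an interval" predicate. The data model and names (Detection, inside_during) are hypothetical and much simpler than the operator set defined in the book.

```python
# Toy spatio-temporal predicate over timestamped detections reported by sensor nodes.
# Names and data model are hypothetical; the book's operators are richer.
from dataclasses import dataclass

@dataclass
class Detection:
    node_id: int
    object_id: int
    x: float
    y: float
    t: float          # detection timestamp

def inside_during(detections, region, t_start, t_end):
    """Object IDs detected inside a rectangular region during [t_start, t_end]."""
    xmin, ymin, xmax, ymax = region
    return {
        d.object_id
        for d in detections
        if t_start <= d.t <= t_end and xmin <= d.x <= xmax and ymin <= d.y <= ymax
    }
```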
This book constitutes the refereed proceedings of the COST 2102 International Training School on Cognitive Behavioural Systems, held in Dresden, Germany, in February 2011. The 39 revised full papers presented were carefully reviewed and selected from various submissions. The volume presents new and original research results in the field of human-machine interaction inspired by cognitive behavioural human-human interaction features. The themes covered are cognitive and computational social information processing, emotional and social believable Human-Computer Interaction (HCI) systems, behavioural and contextual analysis of interaction, embodiment, perception, linguistics, semantics and sentiment analysis in dialogues and interactions, and algorithmic and computational issues for the automatic recognition and synthesis of emotional states.
Data mining deals with finding patterns in data that are, by user definition, interesting and valid. It is an interdisciplinary area involving databases, machine learning, pattern recognition, statistics, visualization and others. Independently, data mining and decision support are well-developed research areas, but until now there has been no systematic attempt to integrate them. Data Mining and Decision Support: Integration and Collaboration, written by leading researchers in the field, presents a conceptual framework, plus the methods and tools for integrating the two disciplines and for applying this technology to business problems in a collaborative setting.
Computer and Information Sciences is a unique and comprehensive review of advanced technology and research in the field of Information Technology. It provides an up-to-date snapshot of research in Europe and the Far East (Hong Kong, Japan and China) in the most active areas of information technology, including Computer Vision, Data Engineering, Web Engineering, Internet Technologies, Bio-Informatics and System Performance Evaluation Methodologies.
Brain imaging brings together the technology, methodology, research questions and approaches of a wide range of scientific fields including physics, statistics, computer science, neuroscience, biology, and engineering. Thus, methodological and technological advances that enable us to obtain measurements, examine relationships across observations, and link these data to neuroscientific hypotheses happen in a highly interdisciplinary environment. The dynamic field of machine learning with its modern approach to data mining provides many relevant approaches for neuroscience and enables the exploration of open questions. This state-of-the-art survey offers a collection of papers from the Workshop on Machine Learning and Interpretation in Neuroimaging, MLINI 2011, held at the 25th Annual Conference on Neural Information Processing Systems, NIPS 2011, in the Sierra Nevada, Spain, in December 2011. Additionally, invited speakers agreed to contribute reviews on various aspects of the field, adding breadth and perspective to the volume. The 32 revised papers were carefully selected from 48 submissions. At the interface between machine learning and neuroimaging the papers aim at shedding some light on the state of the art in this interdisciplinary field. They are organized in topical sections on coding and decoding, neuroscience, dynamics, connectivity, and probabilistic models and machine learning.
This volume constitutes the refereed proceedings of the Second International Conference on Multimedia and Signal Processing, CMSP 2012, held in Shanghai, China, in December 2012. The 79 full papers included in the volume were selected from 328 submissions from 10 different countries and regions. The papers are organized in topical sections on computer and machine vision, feature extraction, image enhancement and noise filtering, image retrieval, image segmentation, imaging techniques & 3D imaging, pattern recognition, multimedia systems, architecture, and applications, visualization, signal modeling, identification & prediction, speech & language processing, time-frequency signal analysis.
Exploration of Visual Data presents the latest research efforts in the area of content-based exploration of image and video data. The main objective is to bridge the semantic gap between high-level concepts in the human mind and low-level features extractable by the machines. The two key issues emphasized are "content-awareness" and "user-in-the-loop". The authors provide a comprehensive review of algorithms for visual feature extraction based on color, texture, shape, and structure, and techniques for incorporating such information to aid browsing, exploration, search, and streaming of image and video data. They also discuss issues related to the mixed use of textual and low-level visual features to facilitate more effective access of multimedia data. Exploration of Visual Data provides state-of-the-art materials on the topics of content-based description of visual data, content-based low-bitrate video streaming, and the latest asymmetric and nonlinear relevance feedback algorithms, which to date are unpublished.
You may like...
Advanced Methods and Deep Learning in… by E.R. Davies, Matthew Turk (Paperback, R2,578 / Discovery Miles 25 780)
Handbook of Pediatric Brain Imaging… by Hao Huang, Timothy Roberts (Paperback, R3,531 / Discovery Miles 35 310)
Machine Learning Techniques for Pattern… by Mohit Dua, Ankit Kumar Jain (Hardcover, R7,962 / Discovery Miles 79 620)
Advanced Machine Vision Paradigms for… by Tapan K. Gandhi, Siddhartha Bhattacharyya, … (Paperback, R3,019 / Discovery Miles 30 190)
Deep Learning Models for Medical Imaging by K. C. Santosh, Nibaran Das, … (Paperback, R2,049 / Discovery Miles 20 490)
Infrastructure Computer Vision by Ioannis Brilakis, Carl Thomas Michael Haas (Paperback, R3,039 / Discovery Miles 30 390)