This book highlights new advances in biometrics using deep learning toward a deeper and wider background, deeming the field "Deep Biometrics". The book aims to highlight recent developments in biometrics using semi-supervised and unsupervised methods such as Deep Neural Networks, Deep Stacked Autoencoders, Convolutional Neural Networks, Generative Adversarial Networks, and so on. The contributors demonstrate the power of deep learning techniques in emerging areas such as privacy and security issues, cancellable biometrics, soft biometrics, smart cities, big biometric data, biometric banking, medical biometrics, healthcare biometrics, and biometric genetics. The goal of this volume is to summarize the recent advances in using deep learning in the area of biometric security and privacy toward deeper and wider applications. Highlights the impact of deep learning across a wide area of biometrics; explores the deeper and wider background of biometrics, such as privacy versus security, biometric big data, biometric genetics, and biometric diagnosis; introduces new biometric applications such as biometric banking, the Internet of Things, cloud computing, and medical biometrics.
The book presents selected methods for accelerating image retrieval and classification in large collections of images using what are referred to as 'hand-crafted features.' It introduces readers to novel rapid image description methods based on local and global features, as well as several techniques for comparing images. Developing content-based image comparison, retrieval and classification methods that simulate human visual perception is an arduous and complex process. The book's main focus is on the application of these methods in a relational database context. The methods presented are suitable for both general-type and medical images. Offering a valuable textbook for upper-level undergraduate or graduate-level courses in computer science or engineering, as well as a guide for computer vision researchers, the book focuses on techniques that work under real-world large-dataset conditions.
A collection of original contributions by researchers who work at the forefront of a new field, lying at the intersection of computer vision and computer graphics. Several original approaches are presented to the integration of computer vision and graphics techniques to aid in the realistic modelling of objects and scenes, interactive computer graphics, augmented reality, and virtual studios. Numerous applications are also discussed, including urban and archaeological site modelling, modelling dressed humans, medical visualisation, figure and facial animation, real-time 3D teleimmersion telecollaboration, augmented reality as a new user interface concept, and augmented reality in the understanding of underwater scenes.
Presenting the latest technological developments in arts and culture, this volume demonstrates the advantages of a union between art and science. Electronic Visualisation in Arts and Culture is presented in five parts: Imaging and Culture; New Art Practice; Seeing Motion; Interaction and Interfaces; and Visualising Heritage. Electronic Visualisation in Arts and Culture explores a variety of new theory and technologies, including devices and techniques for motion capture for music and performance, advanced photographic techniques, computer generated images derived from different sources, game engine software, airflow to capture the motions of bird flight and low-altitude imagery from airborne devices. The international authors of this book are practising experts from universities, art practices and organisations, research centres and independent research. They describe electronic visualisation used for such diverse aspects of culture as airborne imagery, computer generated art based on the autoimmune system, motion capture for music and for sign language, the visualisation of time and the long term preservation of these materials. Selected from the EVA London conferences from 2009 to 2012, held in association with the Computer Arts Society of the British Computer Society, the authors have reviewed, extended and fully updated their work for this state-of-the-art volume.
Remote Sensing Digital Image Analysis provides a comprehensive treatment of the methods used for the processing and interpretation of remotely sensed image data. Over the past decade there have been continuing and significant developments in the algorithms used for the analysis of remote sensing imagery, even though many of the fundamentals have substantially remained the same. As with its predecessors, this new edition again presents material that has retained value but also includes newer techniques, covered from the perspective of operational remote sensing. The book is designed as a teaching text for the senior undergraduate and postgraduate student, and as a fundamental treatment for those engaged in research using digital image analysis in remote sensing. The presentation level is for the mathematical non-specialist. Since the great majority of operational users of remote sensing come from the earth sciences communities, the text is pitched at a level commensurate with their background. The chapters progress logically through means for the acquisition of remote sensing images, techniques by which they can be corrected, and methods for their interpretation. The prime focus is on applications of the methods, so worked examples are included and a set of problems concludes each chapter.
This book discusses human emotion recognition from face images using different modalities, highlighting key topics in facial expression recognition, such as grid formation, distance signature, shape signature, texture signature, feature selection, classifier design, and the combination of signatures to improve emotion recognition. The book explains how six basic human emotions can be recognized in various face images of the same person, as well as those available from benchmark face image databases like CK+, JAFFE, MMI, and MUG. The authors present the concept of signatures for different characteristics such as distance, shape and texture, and describe the use of associated stability indices as features, supplementing the feature set with statistical parameters such as range, skewness, kurtosis, and entropy. In addition, they demonstrate that experiments with such feature choices offer impressive results, and that performance can be further improved by combining the signatures rather than using them individually. There is an increasing demand for emotion recognition in diverse fields, including psychotherapy, biomedicine, and security in government, public and private agencies. This book offers a valuable resource for researchers working in these areas.
One of the most natural representations for modelling spatial objects in computers is discrete representations in the form of a 2D square raster and a 3D cubic grid, since these are naturally obtained by segmenting sensor images. However, the main difficulty is that discrete representations are only approximations of the original objects, and can only be as accurate as the cell size allows. If digitisation is done by real sensor devices, then there is the additional difficulty of sensor distortion. To overcome this, digital shape features must be used that abstract from the inaccuracies of digital representation. In order to ensure the correspondence of continuous and digital features, it is necessary to relate shape features of the underlying continuous objects and to determine the necessary resolution of the digital representation. This volume gives an overview and a classification of the actual approaches to describe the relation between continuous and discrete shape features that are based on digital geometric concepts of discrete structures. Audience: This book will be of interest to researchers and graduate students whose work involves computer vision, image processing, knowledge representation or representation of spatial objects.
Signal Processing in Medicine and Biology: Innovations in Big Data Processing provides an interdisciplinary look at state-of-the-art innovations in biomedical signal processing, especially as it applies to large data sets and machine learning. Chapters are presented with detailed mathematics and complete implementation specifics so that readers can completely master these techniques. The book presents tutorials and examples of successful applications and will appeal to a wide range of professionals, researchers, and students interested in applications of signal processing, medicine, and biology at the intersection between healthcare, engineering, and computer science.
This book presents theories and techniques for perception of textures by computer. Texture is a homogeneous visual pattern that we perceive in surfaces of objects such as textiles, tree barks or stones. Texture analysis is one of the first important steps in computer vision since texture provides important cues to recognize real-world objects. A major part of the book is devoted to two-dimensional analysis of texture patterns by extracting statistical and structural features. It also deals with the shape-from-texture problem, which addresses recovery of the three-dimensional surface shapes based on the geometry of projection of the surface texture to the image plane. Perception is still largely mysterious. Realizing a computer vision system that can work in the real world requires more research and experiment. Capability of textural perception is a key component. We hope this book will contribute to the advancement of computer vision toward robust, useful systems. We would like to express our appreciation to Professor Takeo Kanade at Carnegie Mellon University for his encouragement and help in writing this book; to the members of the Computer Vision Section at Electrotechnical Laboratory for providing an excellent research environment; and to Carl W. Harris at Kluwer Academic Publishers for his help in preparing the manuscript.
This book aims to improve algorithms through novel theories and complex data analysis in different areas, including object detection, remote sensing, data transmission, data fusion, gesture recognition, and medical image processing and analysis. The book is directed to Ph.D. students, professors, researchers, and software developers working in the areas of digital video processing and computer vision technologies.
Volumetric, or three-dimensional, digital imaging now plays a vital role in many areas of research such as medicine and geology. Medical images acquired by tomographic scanners, for instance, are often given as a stack of cross-sectional image slices. Such images are called 'volumetric' because they depict objects in their entire three-dimensional extent rather than just as a projection onto a two-dimensional image plane. Since huge amounts of volumetric data are continually being produced in many places around the world, techniques for their automatic analysis become ever more important. Written by a computer vision specialist, this clear, detailed account of volumetric image analysis techniques provides a practical approach to the field.
Technological advances have helped to enhance disaster resilience through better risk reduction, response, mitigation, rehabilitation and reconstruction. In former times, it was local and traditional knowledge that was mainly relied upon for disaster risk reduction. Much of this local knowledge is still valid in today's world, even though possibly in different forms and contexts, and local knowledge remains a shared part of life within the communities. In contrast, with the advent of science and technology, scientists and engineers have become owners of advanced technologies, which have contributed significantly to reducing disaster risks across the globe. This book analyses emerging technologies and their effects in enhancing disaster resilience. It also evaluates the gaps, challenges, capacities required and the way forward for future disaster management. A wide variety of technologies are addressed, focusing specifically on new technologies such as cyber physical systems, geotechnology, drone, and virtual reality (VR)/ augmented reality (AR). Other sets of emerging advanced technologies including an early warning system and a decision support system are also reported on. Moreover, the book provides a variety of discussions regarding information management, communication, and community resilience at the time of a disaster. This book's coverage of different aspects of new technologies makes it a valuable resource for students, researchers, academics, policymakers, and development practitioners.
This book provides a comprehensive review of all aspects relating to visual quality assessment for stereoscopic images, including statistical mathematics, stereo vision and deep learning. It covers the fundamentals of stereoscopic image quality assessment (SIQA), the relevant engineering problems and research significance, and also offers an overview of the significant advances in visual quality assessment for stereoscopic images, discussing and analyzing the current state-of-the-art in SIQA algorithms, the latest challenges and research directions as well as novel models and paradigms. In addition, a large number of vivid figures and formulas help readers gain a deeper understanding of the foundation and new applications of objective stereoscopic image quality assessment technologies. Reviewing the latest advances, challenges and trends in stereoscopic image quality assessment, this book is a valuable resource for researchers, engineers and graduate students working in related fields, including imaging, displaying and image processing, especially those interested in SIQA research.
This textbook is designed for postgraduate studies in the field of 3D Computer Vision. It also provides a useful reference for industrial practitioners; for example, in the areas of 3D data capture, computer-aided geometric modelling and industrial quality assurance. This second edition is a significant upgrade of existing topics with novel findings. Additionally, it has new material covering consumer-grade RGB-D cameras, 3D morphable models, deep learning on 3D datasets, as well as new applications in the 3D digitization of cultural heritage and the 3D phenotyping of crops. Overall, the book covers three main areas: 3D imaging, including passive 3D imaging, active triangulation 3D imaging, active time-of-flight 3D imaging, consumer RGB-D cameras, and 3D data representation and visualisation; 3D shape analysis, including local descriptors, registration, matching, 3D morphable models, and deep learning on 3D datasets; and 3D applications, including 3D face recognition, cultural heritage and 3D phenotyping of plants. 3D computer vision is a rapidly advancing area in computer science. There are many real-world applications that demand high-performance 3D imaging and analysis and, as a result, many new techniques and commercial products have been developed. However, many challenges remain on how to analyse the captured data in a way that is sufficiently fast, robust and accurate for the application. Such challenges include metrology, semantic segmentation, classification and recognition. Thus, 3D imaging, analysis and their applications remain a highly-active research field that will continue to attract intensive attention from the research community with the ultimate goal of fully automating the 3D data capture, analysis and inference pipeline.
Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene, by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition away from understanding single images to analyzing image sequences, or video understanding. Video understanding deals with understanding of video sequences, e.g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic looking images and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting is derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis.
Hybrid Computational Intelligence: Challenges and Utilities is a comprehensive resource that begins with the basics and main components of computational intelligence. It brings together many different aspects of the current research on HCI technologies, such as neural networks, support vector machines, fuzzy logic and evolutionary computation, while also covering a wide range of applications and implementation issues, from pattern recognition and system modeling, to intelligent control problems and biomedical applications. The book also explores the most widely used applications of hybrid computation as well as the history of their development. Each individual methodology provides hybrid systems with complementary reasoning and searching methods which allow the use of domain knowledge and empirical data to solve complex problems.
Face recognition has been actively studied over the past decade and continues to be a big research challenge. Just recently, researchers have begun to investigate face recognition under unconstrained conditions. Unconstrained Face Recognition provides a comprehensive review of this biometric, especially face recognition from video, assembling a collection of novel approaches that are able to recognize human faces under various unconstrained situations. The underlying basis of these approaches is that, unlike conventional face recognition algorithms, they exploit the inherent characteristics of the unconstrained situation and thus improve the recognition performance when compared with conventional algorithms. Unconstrained Face Recognition is structured to meet the needs of a professional audience of researchers and practitioners in industry. This volume is also suitable for advanced-level students in computer science.
Over the past 15 years, there has been a growing need in the medical image computing community for principled methods to process nonlinear geometric data. Riemannian geometry has emerged as one of the most powerful mathematical and computational frameworks for analyzing such data. Riemannian Geometric Statistics in Medical Image Analysis is a complete reference on statistics on Riemannian manifolds and more general nonlinear spaces with applications in medical image analysis. It provides an introduction to the core methodology followed by a presentation of state-of-the-art methods. Beyond medical image computing, the methods described in this book may also apply to other domains such as signal processing, computer vision, geometric deep learning, and other domains where statistics on geometric features appear. As such, the presented core methodology takes its place in the field of geometric statistics, the statistical analysis of data being elements of nonlinear geometric spaces. The foundational material and the advanced techniques presented in the later parts of the book can be useful in domains outside medical imaging and present important applications of geometric statistics methodology. Content includes: the foundations of Riemannian geometric methods for statistics on manifolds, with emphasis on concepts rather than on proofs; applications of statistics on manifolds and shape spaces in medical image computing; and diffeomorphic deformations and their applications. As the methods described apply to domains such as signal processing (radar signal processing and brain computer interaction), computer vision (object and face recognition), and other domains where statistics of geometric features appear, this book is suitable for researchers and graduate students in medical imaging, engineering and computer science.
Presents a strategic perspective and design methodology that guide the process of developing digital products and services that provide 'real experience' to users. Only when the experienced material runs its course to fulfilment is it regarded as 'real experience': experience that is distinctively senseful, evaluated as valuable, and harmoniously related to others. Based on the theoretical background of human experience, the book focuses on three questions: How can we understand the current dominant designs of digital products and services? What are the user experience factors that are critical to providing real experience? What are the important HCI design elements that can effectively support the various UX factors critical to real experience? Design for Experience is intended for people who are interested in the experiences behind the way we use our products and services, for example designers and students interested in interaction, visual graphics and information design, or practitioners and entrepreneurs in pursuit of new products or service-based start-ups.
This edited book explores the use of technology to enable us to visualise the life sciences in a more meaningful and engaging way. It will enable those interested in visualisation techniques to gain a better understanding of the applications that can be used in visualisation, imaging and analysis, education, engagement and training. The reader will also be able to learn about the use of visualisation techniques and technologies in historical and forensic settings, and to explore the utilisation of technologies from a number of fields to enable an engaging and meaningful visual representation of the biomedical sciences. The chapters presented in this volume cover a diverse range of topics, with something for everyone. We present here chapters on technology enhanced learning in neuroanatomy; 3D printing and surgical planning; changes in higher education utilising technology; decolonising the curriculum; and visual representations of the human body in education. We also showcase a pandemic-inspired look at how not to use personal protective equipment; anatomical and historical visualisation of obstetrics and gynaecology; 3D modelling of carpal bones; and augmented reality for arachnid phobias for public engagement. In addition, we present face modelling for surgical education in a multidisciplinary setting, 3D digitising of historical pathology specimens at a military medical museum, and finally computational fluid dynamics.
To enhance the overall viewing experience (for cinema, TV, games, AR/VR) the media industry is continuously striving to improve image quality. Currently the emphasis is on High Dynamic Range (HDR) and Wide Colour Gamut (WCG) technologies, which yield images with greater contrast and more vivid colours. The uptake of these technologies, however, has been hampered by the significant challenge of understanding the science behind visual perception. Vision Models for High Dynamic Range and Wide Colour Gamut Imaging provides university researchers and graduate students in computer science, computer engineering, vision science, as well as industry R&D engineers, an insight into the science and methods for HDR and WCG. It presents the underlying principles and latest practical methods in a detailed and accessible way, highlighting how the use of vision models is a key element of all state-of-the-art methods for these emerging technologies.
The book presents the proceedings of four conferences: The 24th International Conference on Image Processing, Computer Vision, & Pattern Recognition (IPCV'20), The 6th International Conference on Health Informatics and Medical Systems (HIMS'20), The 21st International Conference on Bioinformatics & Computational Biology (BIOCOMP'20), and The 6th International Conference on Biomedical Engineering and Sciences (BIOENG'20). The conferences took place in Las Vegas, NV, USA, July 27-30, 2020, and are part of the larger 2020 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE'20), which features 20 major tracks. Authors include academics, researchers, professionals, and students. Presents the proceedings of four conferences as part of the 2020 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE'20); Includes the tracks on Image Processing, Computer Vision, & Pattern Recognition, Health Informatics & Medical Systems, Bioinformatics, Computational Biology & Biomedical Engineering; Features papers from IPCV'20, HIMS'20, BIOCOMP'20, and BIOENG'20.
This volume assesses approaches to the construction of computer vision systems. It shows that there is a spectrum of approaches with different degrees of maturity and robustness. The useful exploitation of computer vision in industry and elsewhere and the development of the discipline itself depend on understanding the way these approaches influence one another. The chief topic discussed is autonomy. True autonomy may not be achievable in machines in the near future, and the workshop concluded that it may be more desirable - and is certainly more pragmatic - to leave a person in the processing loop. The second conclusion of the workshop concerns the manner in which a system is designed for an application. It was agreed that designers should first specify the required functionality, then identify the knowledge appropriate to that task, and finally choose the appropriate techniques and algorithms. The third conclusion concerns the methodologies employed in developing vision systems: craft, engineering, and science are mutually relevant and contribute to one another. The contributors place heavy emphasis on providing the reader with concrete examples of operational systems. The book is based on a workshop held as part of the activities of an ESPRIT Basic Research Action.
This book includes a selection of peer-reviewed papers presented at the 10th China Academic Conference on Printing and Packaging, which was held in Xi'an, China, on November 14-17, 2019. The conference was jointly organized by the China Academy of Printing Technology, Beijing Institute of Graphic Communication, and Shaanxi University of Science and Technology. With 9 keynote talks and 118 papers on graphic communication and packaging technologies, the conference attracted more than 300 scientists. The proceedings cover the latest findings in a broad range of areas, including color science and technology, image processing technology, digital media technology, mechanical and electronic engineering, information engineering and artificial intelligence technology, materials and detection, digital process management technology in printing and packaging, and other technologies. As such, the book appeals to university researchers, R&D engineers and graduate students in the graphic arts, packaging, color science, image science, material science, computer science, digital media, and network technology.
As the first book of a three-part series, this book is offered as a tribute to pioneers in vision, such as Bela Julesz, David Marr, King-Sun Fu, Ulf Grenander, and David Mumford. The authors hope to provide a foundation and, perhaps more importantly, further inspiration for continued research in vision. This book covers David Marr's paradigm and various underlying statistical models for vision. The mathematical framework herein integrates three regimes of models (low-, mid-, and high-entropy regimes) and provides a foundation for research in visual coding, recognition, and cognition. Concepts are first explained for understanding and then supported by findings in psychology and neuroscience, after which they are established by statistical models and associated learning and inference algorithms. A reader will gain a unified, cross-disciplinary view of research in vision and will accrue knowledge spanning from psychology to neuroscience to statistics.