This book focuses on how machine learning techniques can be used to analyze one particular category of behavioral biometrics known as the gait biometric. A comprehensive Ground Reaction Force (GRF)-based gait biometric recognition framework is proposed and validated by experiments. In addition, an in-depth analysis of the existing recognition techniques best suited for footstep GRF-based person recognition is provided, together with a comparison of feature extractor, normalizer, and classifier configurations that had never been directly compared in previous GRF recognition research. Finally, a detailed theoretical overview of many existing machine learning techniques is presented, leading to a proposal of two novel data processing techniques developed specifically for gait biometric recognition using GRF. This book:
* introduces novel machine-learning-based temporal normalization techniques
* bridges research gaps concerning the effect of footwear and stepping speed on footstep GRF-based person recognition
* provides detailed discussions of key research challenges and open research issues in gait biometric recognition
* compares biometric systems trained and tested with the same footwear against those trained and tested with different footwear
This textbook offers a statistical view on the geometry of multiple view analysis, required for camera calibration and orientation and for geometric scene reconstruction based on geometric image features. The authors have backgrounds in geodesy and also long experience with development and research in computer vision, and this is the first book to present a joint approach from the converging fields of photogrammetry and computer vision. Part I of the book provides an introduction to estimation theory, covering aspects such as Bayesian estimation, variance components, and sequential estimation, with a focus on the statistically sound diagnostics of estimation results essential in vision metrology. Part II provides tools for 2D and 3D geometric reasoning using projective geometry. This includes oriented projective geometry and tools for statistically optimal estimation and testing of geometric entities and transformations and their relations, tools that are also useful in the context of uncertain reasoning in point clouds. Part III is devoted to modelling the geometry of single and multiple cameras, addressing calibration and orientation, including statistical evaluation and reconstruction of corresponding scene features and surfaces based on geometric image features. The authors provide algorithms for various geometric computation problems in vision metrology, together with mathematical justifications and statistical analysis, thus enabling thorough evaluations. The chapters are self-contained with numerous figures and exercises, and they are supported by an appendix that explains the basic mathematical notation and a detailed index. The book can serve as the basis for undergraduate and graduate courses in photogrammetry, computer vision, and computer graphics. It is also appropriate for researchers, engineers, and software developers in the photogrammetry and GIS industries, particularly those engaged with statistically based geometric computer vision methods.
Image Quality Assessment (IQA) is well established for measuring the perceived degradation of natural scene images but is still an emerging topic for computer-generated images. This book addresses this problem and presents recent advances based on soft computing. It is aimed at students, practitioners, and researchers in the field of image processing and related areas such as computer graphics and visualization. In this book, we first clarify the differences between natural scene images and computer-generated images, and address the IQA problem by focusing on the visual perception of noise. Rather than using known perceptual models, we first investigate the use of soft computing approaches, classically used in Artificial Intelligence, as full-reference and reduced-reference metrics. Thus, by creating learning machines, such as SVMs and RVMs, we can assess the perceptual quality of a computer-generated image. We also investigate the use of interval-valued fuzzy sets as a no-reference metric. These approaches are treated both theoretically and practically, for the complete IQA process. The learning step is performed using a database built from experiments with human users, and the resulting models can be used for any image computed with a stochastic rendering algorithm. This can be useful for detecting the visual convergence of the different parts of an image during the rendering process, and thus for optimizing the computation. These models can also be extended to other applications that handle complex models, in the fields of signal processing and image processing.
Computer vision has become increasingly important and effective in recent years due to its wide-ranging applications in areas as diverse as smart surveillance and monitoring, health and medicine, sports and recreation, robotics, drones, and self-driving cars. Visual recognition tasks, such as image classification, localization, and detection, are the core building blocks of many of these applications, and recent developments in Convolutional Neural Networks (CNNs) have led to outstanding performance in these state-of-the-art visual recognition tasks and systems. As a result, CNNs now form the crux of deep learning algorithms in computer vision. This self-contained guide will benefit those who seek to both understand the theory behind CNNs and to gain hands-on experience on the application of CNNs in computer vision. It provides a comprehensive introduction to CNNs starting with the essential concepts behind neural networks: training, regularization, and optimization of CNNs. The book also discusses a wide range of loss functions, network layers, and popular CNN architectures, reviews the different techniques for the evaluation of CNNs, and presents some popular CNN tools and libraries that are commonly used in computer vision. Further, this text describes and discusses case studies that are related to the application of CNN in computer vision, including image classification, object detection, semantic segmentation, scene understanding, and image generation. This book is ideal for undergraduate and graduate students, as no prior background knowledge in the field is required to follow the material, as well as new researchers, developers, engineers, and practitioners who are interested in gaining a quick understanding of CNN models.
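As a small illustration of the convolution operation that the blurb above identifies as the core building block of CNNs, here is a minimal NumPy sketch (not taken from the book; the `conv2d` helper and the Sobel kernel are illustrative choices) of a "valid" 2D cross-correlation, the operation a CNN convolutional layer applies with learned kernels:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the image
    and take the elementwise product-sum at each position. This is
    the operation a CNN convolutional layer applies with learned weights."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A hand-crafted vertical-edge kernel responds where intensity
# changes horizontally -- the kind of feature a CNN learns in its
# first layer rather than being given by hand.
img = np.zeros((5, 6))
img[:, 3:] = 1.0                      # left half dark, right half bright
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])
response = conv2d(img, sobel_x)
print(response.max())                 # -> 4.0, strongest at the edge
```

Deep-learning libraries implement the same operation (batched, multi-channel, and on GPU), but the per-window product-sum shown here is the whole idea.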
This book constitutes the proceedings of the Second International Symposium on Intelligent Computing Systems, ISICS 2018, held in Merida, Mexico, in March 2018. The 12 papers presented in this volume were carefully reviewed and selected from 28 submissions. They deal with the field of intelligent computing systems focusing on artificial intelligence, computer vision and image processing.
This book presents revised selected papers from the 14th International Forum on Digital TV and Wireless Multimedia Communication, IFTC 2017, held in Shanghai, China, in November 2017. The 46 papers presented in this volume were carefully reviewed and selected from 122 submissions. They were organized in topical sections named: image processing; machine learning; quality assessment; social media; telecommunications; video surveillance; virtual reality; computer vision; and image compression.
This book constitutes the refereed proceedings of the 14th International Conference on Virtual Reality and Augmented Reality, EuroVR 2017, held in Laval, France, in December 2017. The 10 full papers and 2 short papers presented were carefully reviewed and selected from 36 submissions. The papers are organized in four topical sections: interaction models and user studies, visual and haptic real-time rendering, perception and cognition, and rehabilitation and safety.
This book provides beginners in computer graphics and related fields a guide to the concepts, models, and technologies for realistic rendering of material appearance. It provides a complete and thorough overview of reflectance models and acquisition setups, along with providing a selection of the available tools to explore, visualize, and render the reflectance data. Reflectance models are under continuous development, since there is still no straightforward solution for general material representations. Every reflectance model is specific to a class of materials. Hence, each has strengths and weaknesses, which the book highlights in order to help the reader choose the most suitable model for any purpose. The overview of the acquisition setups will provide guidance to a reader who needs to acquire virtual materials and will help them to understand which measurement setup can be useful for a particular purpose, while taking into account the performance and the expected cost derived from the required components. The book also describes several recent open source software solutions, useful for visualizing and manipulating a wide variety of reflectance models and data.
Covariance matrices play important roles in many areas of mathematics, statistics, and machine learning, as well as their applications. In computer vision and image processing, they give rise to a powerful data representation, namely the covariance descriptor, with numerous practical applications. In this book, we begin by presenting an overview of the finite-dimensional covariance matrix representation approach of images, along with its statistical interpretation. In particular, we discuss the various distances and divergences that arise from the intrinsic geometrical structures of the set of Symmetric Positive Definite (SPD) matrices, namely Riemannian manifold and convex cone structures. Computationally, we focus on kernel methods on covariance matrices, especially using the Log-Euclidean distance. We then show some of the latest developments in the generalization of the finite-dimensional covariance matrix representation to the infinite-dimensional covariance operator representation via positive definite kernels. We present the generalization of the affine-invariant Riemannian metric and the Log-Hilbert-Schmidt metric, which generalizes the Log-Euclidean distance. Computationally, we focus on kernel methods on covariance operators, especially using the Log-Hilbert-Schmidt distance. Specifically, we present a two-layer kernel machine, using the Log-Hilbert-Schmidt distance and its finite-dimensional approximation, which reduces the computational complexity of the exact formulation while largely preserving its capability. Theoretical analysis shows that, mathematically, the approximate Log-Hilbert-Schmidt distance should be preferred over the approximate Log-Hilbert-Schmidt inner product and, computationally, it should be preferred over the approximate affine-invariant Riemannian distance. Numerical experiments on image classification demonstrate significant improvements of the infinite-dimensional formulation over the finite-dimensional counterpart.
Given the numerous applications of covariance matrices in many areas of mathematics, statistics, and machine learning, just to name a few, we expect that the infinite-dimensional covariance operator formulation presented here will have many more applications beyond those in computer vision.
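As a concrete illustration of the Log-Euclidean distance on SPD covariance descriptors discussed above, here is a short NumPy sketch (not taken from the book; the helper names are illustrative): the distance is the Frobenius norm of the difference of the matrix logarithms, computed via eigen-decomposition.

```python
import numpy as np

def spd_log(M):
    """Matrix logarithm of a symmetric positive definite matrix,
    via its eigen-decomposition M = V diag(w) V^T."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance between SPD matrices: the Frobenius
    norm of the difference of their matrix logarithms."""
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

# Covariance descriptors of two toy feature sets (SPD by construction;
# a small ridge guarantees strictly positive eigenvalues).
rng = np.random.default_rng(0)
A = np.cov(rng.standard_normal((50, 3)), rowvar=False) + 1e-6 * np.eye(3)
B = np.cov(rng.standard_normal((50, 3)), rowvar=False) + 1e-6 * np.eye(3)

d = log_euclidean_distance(A, B)
print(d)  # a positive real number; zero exactly when A == B
```

Because the log map flattens the SPD manifold into a vector space, this distance can be plugged directly into standard kernel methods, which is what makes it attractive compared with the more expensive affine-invariant Riemannian distance.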
This three volume set, CCIS 771, 772, 773, constitutes the refereed proceedings of the CCF Chinese Conference on Computer Vision, CCCV 2017, held in Tianjin, China, in October 2017. The total of 174 revised full papers presented in three volumes were carefully reviewed and selected from 465 submissions. The papers are organized in the following topical sections: biological vision inspired visual method; biomedical image analysis; computer vision applications; deep neural network; face and posture analysis; image and video retrieval; image color and texture; image composition; image quality assessment and analysis; image restoration; image segmentation and classification; image-based modeling; object detection and classification; object identification; photography and video; robot vision; shape representation and matching; statistical methods and learning; video analysis and event recognition; visual salient detection.
Statistical analysis of shapes of 3D objects is an important problem with a wide range of applications. This analysis is difficult for many reasons, including the fact that objects differ in both geometry and topology. In this manuscript, we narrow the problem by focusing on objects with fixed topology, say objects that are diffeomorphic to unit spheres, and develop tools for analyzing their geometries. The main challenges in this problem are to register points across objects and to perform analysis while being invariant to certain shape-preserving transformations. We develop a comprehensive framework for analyzing shapes of spherical objects, i.e., objects that are embeddings of a unit sphere in ℝ³, including tools for: quantifying shape differences, optimally deforming shapes into each other, summarizing shape samples, extracting principal modes of shape variability, and modeling shape variability associated with populations. An important strength of this framework is that it is elastic: it performs alignment, registration, and comparison in a single unified framework, while being invariant to shape-preserving transformations. The approach is essentially Riemannian in the following sense. We specify natural mathematical representations of surfaces of interest, and impose Riemannian metrics that are invariant to the actions of the shape-preserving transformations. In particular, they are invariant to reparameterizations of surfaces. While these metrics are too complicated to allow broad usage in practical applications, we introduce a novel representation, termed square-root normal fields (SRNFs), that transforms a particular invariant elastic metric into the standard L² metric. As a result, one can use standard techniques from functional data analysis for registering, comparing, and summarizing shapes.
Specifically, this results in: pairwise registration of surfaces; computation of geodesic paths encoding optimal deformations; computation of Karcher means and covariances under the shape metric; tangent Principal Component Analysis (PCA) and extraction of dominant modes of variability; and finally, modeling of shape variability using wrapped normal densities. These ideas are demonstrated using two case studies: the analysis of surfaces denoting human bodies in terms of shape and pose variability; and the clustering and classification of the shapes of subcortical brain structures for use in medical diagnosis. This book develops these ideas without assuming advanced knowledge in differential geometry and statistics. We summarize some basic tools from differential geometry in the appendices, and introduce additional concepts and terminology as needed in the individual chapters.
This volume constitutes the refereed proceedings of the 9th International Conference on Multimedia Communications, Services and Security, MCSS 2017, held in Krakow, Poland, in November 2017. The 16 full papers included in the volume were selected from 38 submissions. The papers cover ongoing research activities in the following topics: multimedia services; intelligent monitoring; audio-visual systems; biometric applications; experiments and deployments.
Alexander Schaub examines how a reactive instinctive behavior, similar to instinctive reactions as incorporated by living beings, can be achieved for intelligent mobile robots to extend the classic reasoning approaches. He identifies possible applications for reactive approaches, as they enable a fast response time, increase robustness and have a high abstraction ability, even though reactive methods are not universally applicable. The chosen applications are obstacle avoidance and relative positioning - which can also be utilized for navigation - and a combination of both. The implementation of reactive instinctive behaviors for the identified tasks is then validated in simulation together with real world experiments.
This book constitutes thoroughly revised and selected papers from the 11th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2016, held in Rome, Italy, in February 2016. VISIGRAPP comprises GRAPP, International Conference on Computer Graphics Theory and Applications; IVAPP, International Conference on Information Visualization Theory and Applications; and VISAPP, International Conference on Computer Vision Theory and Applications. The 28 thoroughly revised and extended papers presented in this volume were carefully reviewed and selected from 338 submissions. The book also contains one invited talk in full-paper length. The regular papers were organized in topical sections named: computer graphics theory and applications; information visualization theory and applications; and computer vision theory and applications.
Presents a strategic perspective and design methodology that guide the process of developing digital products and services that provide 'real experience' to users. Only when the material experienced runs its course to fulfilment is it then regarded as 'real experience' that is distinctively senseful, evaluated as valuable, and harmoniously related to others. Based on the theoretical background of human experience, the book focuses on these three questions: How can we understand the current dominant designs of digital products and services? What are the user experience factors that are critical to provide the real experience? What are the important HCI design elements that can effectively support the various UX factors that are critical to real experience? Design for Experience is intended for people who are interested in the experiences behind the way we use our products and services, for example designers and students interested in interaction, visual graphics and information design or practitioners and entrepreneurs in pursuit of new products or service-based start-ups.
Face analysis is essential for a large number of applications such as human-computer interaction or multimedia (e.g. content indexing and retrieval). Although many approaches are under investigation, performance under uncontrolled conditions is still not satisfactory. The variations that impact facial appearance (e.g. pose, expression, illumination, occlusion, motion blur) make it a difficult problem to solve. This book describes the progress towards this goal, from a core building block - landmark detection - to the higher level of micro and macro expression recognition. Specifically, the book addresses the modeling of temporal information to coincide with the dynamic nature of the face. It also includes a benchmark of recent solutions along with details about the acquisition of a dataset for such tasks.
In image processing and computer vision applications such as medical or scientific image data analysis, as well as in industrial scenarios, images are used as input measurement data. It is good scientific practice that proper measurements must be equipped with error and uncertainty estimates. For many applications, not only the measured values but also their errors and uncertainties, should be-and more and more frequently are-taken into account for further processing. This error and uncertainty propagation must be done for every processing step such that the final result comes with a reliable precision estimate. The goal of this book is to introduce the reader to the recent advances from the field of uncertainty quantification and error propagation for computer vision, image processing, and image analysis that are based on partial differential equations (PDEs). It presents a concept with which error propagation and sensitivity analysis can be formulated with a set of basic operations. The approach discussed in this book has the potential for application in all areas of quantitative computer vision, image processing, and image analysis. In particular, it might help medical imaging finally become a scientific discipline that is characterized by the classical paradigms of observation, measurement, and error awareness. This book is comprised of eight chapters. After an introduction to the goals of the book (Chapter 1), we present a brief review of PDEs and their numerical treatment (Chapter 2), PDE-based image processing (Chapter 3), and the numerics of stochastic PDEs (Chapter 4). We then proceed to define the concept of stochastic images (Chapter 5), describe how to accomplish image processing and computer vision with stochastic images (Chapter 6), and demonstrate the use of these principles for accomplishing sensitivity analysis (Chapter 7). Chapter 8 concludes the book and highlights new research topics for the future.
In geometry processing and shape analysis, several applications have been addressed through the properties of the Laplacian spectral kernels and distances, such as commute time, biharmonic, diffusion, and wave distances. Within this context, this book is intended to provide a common background on the definition and computation of the Laplacian spectral kernels and distances for geometry processing and shape analysis. To this end, we define a unified representation of the isotropic and anisotropic discrete Laplacian operator on surfaces and volumes; then, we introduce the associated differential equations, i.e., the harmonic equation, the Laplacian eigenproblem, and the heat equation. Filtering the Laplacian spectrum, we introduce the Laplacian spectral distances, which generalize the commute-time, biharmonic, diffusion, and wave distances, and their discretization in terms of the Laplacian spectrum. As main applications, we discuss the design of smooth functions and the Laplacian smoothing of noisy scalar functions. All the reviewed numerical schemes are discussed and compared in terms of robustness, approximation accuracy, and computational cost, thus supporting the reader in the selection of the most appropriate with respect to shape representation, computational resources, and target application.
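The spectral distances described above can be illustrated with a short NumPy sketch (not taken from the book; the function name and the toy path graph are illustrative): the diffusion distance between two vertices is obtained by filtering the Laplacian spectrum with a heat kernel.

```python
import numpy as np

def diffusion_distance(L, t, i, j):
    """Diffusion distance at time t between vertices i and j, from the
    eigen-decomposition of the Laplacian L:
        d_t(i, j)^2 = sum_k exp(-2*t*lam_k) * (phi_k[i] - phi_k[j])^2
    i.e., the Laplacian spectrum filtered by a heat kernel."""
    lam, phi = np.linalg.eigh(L)      # eigenvalues and eigenvectors of L
    diff = phi[i, :] - phi[j, :]      # eigenvector values at the two vertices
    return np.sqrt(np.sum(np.exp(-2.0 * t * lam) * diff**2))

# Combinatorial Laplacian of a path graph on 4 vertices: 0 - 1 - 2 - 3.
A = np.diag([1.0, 1.0, 1.0], k=1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A

# Vertices connected by more/shorter diffusion pathways are closer.
print(diffusion_distance(L, 0.5, 0, 1) < diffusion_distance(L, 0.5, 0, 3))
```

Swapping the heat-kernel filter exp(-2*t*lam) for other spectral filters yields the commute-time, biharmonic, and wave distances in the same way, which is the unifying point the book makes.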
The two-volume set CCIS 713 and CCIS 714 contains the extended abstracts of the posters presented during the 19th International Conference on Human-Computer Interaction, HCI International 2017, held in Vancouver, BC, Canada, in July 2017. HCII 2017 received a total of 4340 submissions, of which 1228 papers were accepted for publication after a careful reviewing process. The 177 papers presented in these two volumes were organized in topical sections as follows: Part I: Design and evaluation methods, tools and practices; novel interaction techniques and devices; psychophysiological measuring and monitoring; perception, cognition and emotion in HCI; data analysis and data mining in social media and communication; ergonomics and models in work and training support. Part II: Interaction in virtual and augmented reality; learning, games and gamification; health, well-being and comfort; smart environments; mobile interaction; visual design and visualization; social issues and security in HCI.
This book constitutes the refereed proceedings of the 13th International Conference entitled Beyond Databases, Architectures and Structures, BDAS 2017, held in Ustron, Poland, in May/June 2017. It consists of 44 carefully reviewed papers selected from 118 submissions. The papers are organized in topical sections, namely big data and cloud computing; artificial intelligence, data mining and knowledge discovery; architectures, structures and algorithms for efficient data processing; text mining, natural language processing, ontologies and semantic web; bioinformatics and biological data analysis; industrial applications; data mining tools, optimization and compression.
This synthesis lecture presents an intuitive introduction to the mathematics of motion and deformation in computer graphics. Starting with familiar concepts in graphics, such as Euler angles, quaternions, and affine transformations, we illustrate that a mathematical theory behind these concepts enables us to develop the techniques for efficient/effective creation of computer animation. This book, therefore, serves as a good guidepost to mathematics (differential geometry and Lie theory) for students of geometric modeling and animation in computer graphics. Experienced developers and researchers will also benefit from this book, since it gives a comprehensive overview of mathematical approaches that are particularly useful in character modeling, deformation, and animation.