This book explores mathematics in a wide variety of applications, ranging from problems in electronics, energy and the environment, to mechanics and mechatronics. The book gathers 81 contributions submitted to the 20th European Conference on Mathematics for Industry, ECMI 2018, which was held in Budapest, Hungary in June 2018. The application areas include: Applied Physics, Biology and Medicine, Cybersecurity, Data Science, Economics, Finance and Insurance, Energy, Production Systems, Social Challenges, and Vehicles and Transportation. In turn, the mathematical technologies discussed include: Combinatorial Optimization, Cooperative Games, Delay Differential Equations, Finite Elements, Hamilton-Jacobi Equations, Impulsive Control, Information Theory and Statistics, Inverse Problems, Machine Learning, Point Processes, Reaction-Diffusion Equations, Risk Processes, Scheduling Theory, Semidefinite Programming, Stochastic Approximation, Spatial Processes, System Identification, and Wavelets. The goal of the European Consortium for Mathematics in Industry (ECMI) conference series is to promote interaction between academia and industry, leading to innovations in both fields. These events have attracted leading experts from business, science and academia, and have promoted the application of novel mathematical technologies to industry. They have also encouraged industrial sectors to share challenging problems where mathematicians can provide fresh insights and perspectives. Lastly, the ECMI conferences are one of the main forums in which significant advances in industrial mathematics are presented, bringing together prominent figures from business, science and academia to promote the use of innovative mathematics in industry.
This book collects a number of papers presented at the International Conference on Sensing and Imaging, which was held at Chengdu University of Information Technology on June 5-7, 2017. Sensing and imaging is an interdisciplinary field covering a variety of sciences and techniques such as optics, electricity, magnetism, heat, sound, mathematics, and computing technology. The field has diverse applications of interest such as sensing techniques, imaging, and image processing techniques. This book will appeal to professionals and researchers within the field.
These proceedings collect cutting-edge research articles from the Fourth International Conference on Signal and Image Processing (ICSIP), organised by Dr. N.G.P. Institute of Technology, Kalapatti, Coimbatore. The conference provides academia and industry with a forum to discuss and present the latest technological advances and research results in theoretical, experimental and applied signal, image and video processing. The book offers the latest and most informative content from engineers and scientists in signal, image and video processing from around the world, helping the future research community to work in a more cohesive and collaborative way.
The aim of pattern theory is to create mathematical knowledge representations of complex systems, analyze the mathematical properties of the resulting regular structures, and to apply them to practically occurring patterns in nature and the man-made world. Starting from an algebraic formulation of such representations they are studied in terms of their topological, dynamical and probabilistic aspects. Patterns are expressed through their typical behavior as well as through their variability around their typical form. Employing the representations (regular structures) algorithms are derived for the understanding, recognition, and restoration of observed patterns. The algorithms are investigated through computer experiments. The book is intended for statisticians and mathematicians with an interest in image analysis and pattern theory.
Image processing and machine vision are fields of renewed interest in the commercial market. People in industry, managers, and technical engineers are looking for new technologies to move into the market. Many of the most promising developments are taking place in the field of image processing and its applications. The book offers a broad coverage of advances in a range of topics in image processing and machine vision.
This book provides a concise overview of VR systems and their cybersickness effects, describing possible causes and existing solutions to reduce or avoid them. Moreover, the book explores how understanding the way our brains produce a coherent and rich representation of the perceived outside world could help make VR techniques more efficient and friendly to use. Getting Rid of Cybersickness will help readers to understand the underlying techniques and social stakes involved, from engineering design to autonomous vehicle motion sickness to video games, with the hope of providing insight into the VR sickness induced by emerging immersive technologies. This book will therefore be of interest to academics, researchers and designers within the field of VR, as well as industrial users of VR and driving simulators.
Machine vision is the study of how to build intelligent machines which can understand the environment by vision. Among many existing books on this subject, this book is unique in that the entire volume is devoted to computational problems, which most books do not deal with. One of the main subjects of this book is the mathematics underlying all vision problems - projective geometry, in particular. Since projective geometry has been developed by mathematicians without any regard to machine vision applications, our first attempt is to `tune' it into the form applicable to machine vision problems. The resulting formulation is termed computational projective geometry and applied to 3-D shape analysis, camera calibration, road scene analysis, 3-D motion analysis, optical flow analysis, and conic image analysis. A salient characteristic of machine vision problems is that data are not necessarily accurate. Hence, computational procedures defined by using exact relationships may break down if blindly applied to inaccurate data. In this book, special emphasis is put on robustness, which means that the computed result is not only exact when the data are accurate but is also expected to give a good approximation in the presence of noise. The analysis of how the computation is affected by the inaccuracy of the data is also crucial. Statistical analysis of computations based on image data is also one of the main subjects of this book.
This monograph offers a cross-system exchange and cross-modality investigation into brain-heart interplay. Brain-Heart Interplay (BHI) is a highly interdisciplinary scientific topic, which spreads from the physiology of the Central/Autonomic Nervous Systems, especially the Central Autonomic Network, to advanced signal processing and modeling for the quantification of its activity. Motivated by clinical evidence and supported by recent findings in neurophysiology, this monograph first explores the definition of basic Brain-Heart Interplay quantifiers, and then moves on to advanced methods for the assessment of health and disease states. Non-invasive use of brain monitoring techniques, including electroencephalogram and functional Magnetic Resonance Imaging, will be described together with heartbeat dynamics monitoring through pulse oximeter and ECG signals. The audience of this book consists mainly of biomedical engineers and medical doctors with expertise in statistics and/or signal processing. Researchers in the fields of cardiology, neurology, psychiatry, and neuroscience in general may be interested as well.
Energy efficiency is critical for running computer vision on battery-powered systems, such as mobile phones or UAVs (unmanned aerial vehicles, or drones). This book collects the methods that have won the annual IEEE Low-Power Computer Vision Challenges since 2015. The winners share their solutions and provide insight on how to improve the efficiency of machine learning systems.
Change Detection and Image Time Series Analysis 1 presents a wide range of unsupervised methods for temporal evolution analysis through the use of image time series associated with optical and/or synthetic aperture radar acquisition modalities. Chapter 1 introduces two unsupervised approaches to multiple-change detection in bi-temporal multivariate images, with Chapters 2 and 3 addressing change detection in image time series in the context of the statistical analysis of covariance matrices. Chapter 4 focuses on wavelets and convolutional neural filters for feature extraction and entropy-based anomaly detection, and Chapter 5 deals with a number of metrics such as cross-correlation ratios and the Hausdorff distance for variational analysis of the state of snow. Chapter 6 presents a fractional dynamic stochastic field model for spatio-temporal forecasting and for monitoring fast-moving meteorological events such as cyclones. Chapter 7 proposes an analysis based on characteristic points for texture modeling, in the context of graph theory, and Chapter 8 focuses on detecting new land cover types by classification-based change detection or feature/pixel-based change detection. Chapter 9 focuses on the modeling of classes in the difference image and derives a multiclass model for this difference image in the context of change vector analysis.
Change Detection and Image Time Series Analysis 2 presents supervised machine-learning-based methods for temporal evolution analysis by using image time series associated with Earth observation data. Chapter 1 addresses the fusion of multisensor, multiresolution and multitemporal data. It proposes two supervised solutions that are based on a Markov random field: the first relies on a quad-tree and the second is specifically designed to deal with multimission, multifrequency and multiresolution time series. Chapter 2 provides an overview of pixel-based methods for time series classification, from the earliest shallow learning methods to the most recent deep-learning-based approaches. Chapter 3 focuses on very high spatial resolution data time series and on the use of semantic information for modeling spatio-temporal evolution patterns. Chapter 4 centers on the challenges of dense time series analysis, including pre-processing aspects and a taxonomy of existing methodologies. Finally, since the evaluation of a learning system can be subject to multiple considerations, Chapters 5 and 6 offer extensive evaluations of the methodologies and learning frameworks used to produce change maps, in the context of multiclass and/or multilabel change classification issues.
This book shows how mathematics, computer science and science can be usefully and seamlessly intertwined. It begins with a general model of cognitive processes in a network of computational nodes, such as neurons, using a variety of tools from mathematics, computational science and neurobiology. It then moves on to solve the diffusion model from a low-level random walk point of view. It also demonstrates how this idea can be used in a new approach to solving the cable equation, in order to better understand the neural computation approximations. It introduces specialized data for emotional content, which allows a brain model to be built using MATLAB tools, and also highlights a simple model of cognitive dysfunction.
With a strong numerical and computational focus, this book serves as an essential resource on methods for functional neuroimaging analysis, diffusion-weighted image analysis, and longitudinal VBM analysis. It covers the analysis of four MRI image modalities. The first part covers PWI methods, which are the basis for understanding cerebral blood flow in the human brain. The second part, the book's core, covers fMRI methods in three specific domains: first-level analysis, second-level analysis, and effective connectivity study. The third part covers the analysis of diffusion-weighted images, i.e. DTI, QBI and DSI image analysis. Finally, the book covers (longitudinal) VBM methods and their application to the study of Alzheimer's disease.
Deep learning algorithms have brought a revolution to the computer vision community by introducing non-traditional and efficient solutions to several image-related problems that had long remained unsolved or partially addressed. This book presents a collection of eleven chapters where each individual chapter explains the deep learning principles of a specific topic, introduces reviews of up-to-date techniques, and presents research findings to the computer vision community. The book covers a broad scope of topics in deep learning concepts and applications such as accelerating the convolutional neural network inference on field-programmable gate arrays, fire detection in surveillance applications, face recognition, action and activity recognition, semantic segmentation for autonomous driving, aerial imagery registration, robot vision, tumor detection, and skin lesion segmentation as well as skin melanoma classification. The content of this book has been organized such that each chapter can be read independently from the others. The book is a valuable companion for researchers, for postgraduate and possibly senior undergraduate students who are taking an advanced course in related topics, and for those who are interested in deep learning with applications in computer vision, image processing, and pattern recognition.
This book presents practical optimization techniques used in image processing and computer vision problems. Ill-posed problems are introduced and used as examples to show how each type of problem is related to typical image processing and computer vision problems. Unconstrained optimization gives the best solution based on numerical minimization of a single, scalar-valued objective function or cost function. Unconstrained optimization problems have been intensively studied, and many algorithms and tools have been developed to solve them. Most practical optimization problems, however, arise with a set of constraints. Typical examples of constraints include: (i) pre-specified pixel intensity range, (ii) smoothness or correlation with neighboring information, (iii) existence on a certain contour of lines or curves, and (iv) given statistical or spectral characteristics of the solution. Regularized optimization is a special method used to solve a class of constrained optimization problems. The term regularization refers to the transformation of an objective function with constraints into a different objective function, automatically reflecting constraints in the unconstrained minimization process. Because of its simplicity and efficiency, regularized optimization has many application areas, such as image restoration, image reconstruction, optical flow estimation, etc. Optimization plays a major role in a wide variety of theories for image processing and computer vision. Various optimization techniques are used at different levels for these problems, and this volume summarizes and explains these techniques as applied to image processing and computer vision.
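The regularization idea described above can be made concrete with a small numerical sketch (illustrative code, not taken from the book): a smoothness constraint is folded into an unconstrained least-squares objective as a Tikhonov penalty, so the constrained problem reduces to a single linear solve. The function name and the choice of a first-difference operator are assumptions for the example.

```python
import numpy as np

def tikhonov_restore(b, A, lam):
    """Solve min_x ||A x - b||^2 + lam * ||D x||^2, where D is a
    first-difference operator encoding a smoothness constraint."""
    n = A.shape[1]
    # D maps x to its neighboring differences: (D x)_i = x_{i+1} - x_i
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    # Normal equations of the regularized objective: (A'A + lam D'D) x = A'b
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)
```

With `lam = 0` this reproduces the plain least-squares solution; as `lam` grows, the smoothness term dominates and the restored signal flattens towards its mean, which is exactly how the constraint is "automatically reflected" in the unconstrained minimization.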
Complex illumination and meteorological conditions can significantly limit the robustness of robotic vision systems. This book focuses on image pre-processing for robot vision in complex illumination and dynamic weather conditions. It systematically covers cutting-edge models and algorithms, approaching them from a novel viewpoint based on studying the atmospheric physics and imaging mechanism. It provides valuable insights and practical methods such as illumination calculations, scattering modeling, shadow/highlight detection and removal, intrinsic image derivation, and rain/snow/fog removal technologies that will enable robots to be effective in diverse lighting and weather conditions, i.e., ensure their all-weather operating capacity. As such, the book offers a valuable resource for researchers, graduate students and engineers in the fields of robot engineering and computer science.
Many approaches have been proposed to solve the problem of finding the optic flow field of an image sequence. Four major classes of optic flow computation techniques can be discriminated (see Beauchemin and Barron [Beauchemin1995] for a good overview): gradient based (or differential) methods; phase based (or frequency domain) methods; correlation based (or area) methods; and feature point (or sparse data) tracking methods. In this chapter we compute the optic flow as a dense optic flow field with a multiscale differential method. The method, originally proposed by Florack and Nielsen [Florack1998a], is known as the Multiscale Optic Flow Constraint Equation (MOFCE). This is a scale space version of the well known computer vision implementation of the optic flow constraint equation, as originally proposed by Horn and Schunck [Horn1981]. This scale space variation, as usual, consists of the introduction of the aperture of the observation into the process. The application to stereo has been described by Maas et al. [Maas1995a, Maas1996a]. Of course, difficulties arise when structure emerges or disappears, such as with occlusion, cloud formation, etc. Then knowledge is needed about the processes and objects involved. In this chapter we focus on the scale space approach to the local measurement of optic flow, as we may expect the visual front end to do.
17.2 Motion detection with pairs of receptive fields
As a biologically motivated start, we begin by discussing some neurophysiological findings in the visual system with respect to motion detection.
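The classical optic flow constraint equation underlying both the Horn and Schunck scheme and the MOFCE is I_x u + I_y v + I_t = 0. As a minimal illustration (a single-scale least-squares solve for one global (u, v) over a patch; a simplification of the multiscale method, with an assumed function name):

```python
import numpy as np

def ofce_flow(frame0, frame1):
    """Least-squares solve of the optic flow constraint equation
    Ix*u + Iy*v + It = 0 for a single (u, v) over the whole patch."""
    Ix = np.gradient(frame0, axis=1)   # spatial derivative in x
    Iy = np.gradient(frame0, axis=0)   # spatial derivative in y
    It = frame1 - frame0               # temporal derivative
    # Stack one constraint equation per pixel and solve in the
    # least-squares sense (min-norm if the patch lacks texture).
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

For a textureless or one-dimensional patch the system is rank-deficient, which is exactly the aperture problem: the least-squares solve then only recovers the flow component along the intensity gradient.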
Access, distribution and processing of Geographic Information (GI) are basic preconditions for supporting strategic environmental decision-making. The heterogeneity of the information on the environment available today is driving a wide number of initiatives, on both sides of the Atlantic, all advocating both the strategic role of proper management and processing of environment-related data and the importance of harmonized IT infrastructures designed to better monitor and manage the environment. The extremely wide range of often multidimensional environmental information made available at the global scale poses a great challenge to technologists and scientists: to find extremely sophisticated yet effective ways to provide access to relevant data patterns within such a vast and highly dynamic information flow. In past years the domain of 3D scientific visualization has developed several solutions designed for operators who need to access the results of a simulation through 3D visualization that can support the understanding of an evolving phenomenon. However, 3D data visualization alone supports neither model and hypothesis-making nor the validation of results. In order to overcome this shortcoming, in recent years scientists have developed a discipline that combines the benefits of data mining and information visualization, often referred to as Visual Analytics (VA).
This book introduces Document As System (DAS), a new GeoComputation pattern, which is also a new GIS application pattern. It uses the GeoComputation language (G language) to describe and execute complex spatial analysis models in the MS Word environment. This solves a bottleneck problem of GIS application, turns GIS from a spatial data visualization tool into a popular tool for spatial data analysis, and plays an important role in the wide application of GIS technology. The book systematically introduces the theory behind the new GeoComputation pattern and application examples in the "dual-evaluation" of territorial and spatial planning, and can be used as a learning and reference manual for GIS professionals and business personnel engaged in the "dual-evaluation" of territorial and spatial planning.
Advanced Technologies in Ad Hoc and Sensor Networks collects selected papers from the 7th China Conference on Wireless Sensor Networks (CWSN2013), held in Qingdao, October 17-19, 2013. The book features state-of-the-art studies on sensor networks in China under the theme "Advances in wireless sensor networks of China". The selected works can help promote the development of sensor network technology towards interconnectivity, resource sharing, flexibility and high efficiency. Researchers and engineers in the field of sensor networks can benefit from the book. Xue Wang is a professor at Tsinghua University; Li Cui is a professor at the Institute of Computing Technology, Chinese Academy of Sciences; Zhongwen Guo is a professor at Ocean University of China.
Information theory has proved to be effective for solving many computer vision and pattern recognition (CVPR) problems (such as image matching, clustering and segmentation, saliency detection, feature selection, optimal classifier design and many others). Nowadays, researchers are widely bringing information theory elements to the CVPR arena. Among these elements there are measures (entropy, mutual information...), principles (maximum entropy, minimax entropy...) and theories (rate distortion theory, method of types...). This book explores and introduces these elements through an incremental-complexity approach, while at the same time formulating CVPR problems and presenting the most representative algorithms. Interesting connections between information theory principles as applied to different problems are highlighted, seeking a comprehensive research roadmap. The result is a novel tool both for CVPR and machine learning researchers, and contributes to a cross-fertilization of both areas.
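As a small illustration of the measures mentioned above (an illustrative sketch, not code from the book), mutual information between two images can be estimated from their joint histogram; this is the core quantity behind histogram-based image matching and registration:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) = sum p(x,y) log2( p(x,y) / (p(x)p(y)) ) in bits,
    using a joint histogram of the two intensity arrays."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0                               # avoid log(0)
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```

Identical images give I(X;Y) = H(X) (here 3 bits for 8 equiprobable bins), while an image and a constant give zero, which is why the measure serves as an alignment score.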
Explains the theory behind basic computer vision and provides a bridge from the theory to practical implementation using the industry-standard OpenCV libraries. Computer vision is a rapidly expanding area, and it is becoming progressively easier for developers to make use of this field due to the ready availability of high-quality libraries (such as OpenCV 2). This text is intended to facilitate the practical use of computer vision, with the goal of bridging the gap between the theory and the practical implementation of computer vision. The book explains how to use the relevant OpenCV library routines and is accompanied by a full working program including the code snippets from the text. This textbook is a heavily illustrated, practical introduction to an exciting field, the applications of which are becoming almost ubiquitous. We are now surrounded by cameras: cameras on computers and tablets, cameras built into our mobile phones, cameras in games consoles, cameras imaging difficult modalities (such as ultrasound, X-ray, MRI) in hospitals, and surveillance cameras. This book is concerned with helping the next generation of computer developers to make use of all these images in order to develop systems which are more intuitive and interact with us in more intelligent ways.
* Explains the theory behind basic computer vision and provides a bridge from the theory to practical implementation using the industry standard OpenCV libraries
* Offers an introduction to computer vision, with enough theory to make clear how the various algorithms work but with an emphasis on practical programming issues
* Provides enough material for a one-semester course in computer vision at senior undergraduate and Masters levels
* Includes the basics of cameras, images and image processing to remove noise, before moving on to topics such as image histogramming; binary imaging; video processing to detect and model moving objects; geometric operations and camera models; edge detection; feature detection; recognition in images
* Contains a large number of vision application problems to provide students with the opportunity to solve real problems. Images or videos for these problems are provided in the resources associated with this book, which include an enhanced eBook
Image processing algorithms based on the mammalian visual cortex are powerful tools for extracting information from and manipulating images. This book reviews the neural theory and translates it into digital models. Applications are given in the areas of image recognition, foveation, image fusion and information extraction. The third edition reflects renewed international interest in pulse image processing, with updated sections presenting several newly developed applications. This edition also introduces a suite of Python scripts that assist readers in replicating results presented in the text and in further developing their own applications.
This book proposes tools for the analysis of multidimensional and metric data, establishing a state of the art of existing solutions and developing new ones. It mainly focuses on the visual exploration of these data by a human analyst, relying on a 2D or 3D scatter plot display obtained through dimensionality reduction. Performing diagnosis of an energy system requires identifying relations between observed monitoring variables and the associated internal state of the system. Dimensionality reduction, which makes it possible to represent a multidimensional dataset visually, constitutes a promising tool to help domain experts analyse these relations. This book reviews existing techniques for visual data exploration and dimensionality reduction such as tSNE and Isomap, and proposes new solutions to challenges in that field. In particular, it presents the new unsupervised technique ASKI and the supervised methods ClassNeRV and ClassJSE. Moreover, MING, a new approach for local map quality evaluation, is also introduced. These methods are then applied to the representation of expert-designed fault indicators for smart buildings, I-V curves for photovoltaic systems and acoustic signals for Li-ion batteries.
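The kind of 2D scatter-plot projection discussed above can be sketched with plain PCA, the simplest linear stand-in for nonlinear methods like tSNE and Isomap (illustrative code, not from the book; the function name is an assumption):

```python
import numpy as np

def pca_2d(X):
    """Project the rows of X onto the two leading principal components,
    producing 2D coordinates suitable for a scatter plot."""
    Xc = X - X.mean(axis=0)                       # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                           # top-2 component scores
```

Unlike tSNE, this projection is linear, so when the data actually lie in a plane it preserves pairwise distances exactly; nonlinear methods trade that guarantee for the ability to unfold curved structures.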