This textbook provides a progressive approach to the teaching of software engineering. First, readers are introduced to the core concepts of the object-oriented methodology, which is used throughout the book as the foundation for software engineering and programming practices, and partly for the software engineering process itself. Then, the processes involved in software engineering are explained in more detail, especially methods and their applications in design, implementation, testing, and measurement, as they relate to software engineering projects. Finally, readers are given the chance to practice these concepts by applying commonly used skills and tasks to a hands-on project. The impact of such a format is the potential for quicker and deeper understanding. Readers will master concepts and skills at the most basic levels before continuing to expand on and apply these lessons in later chapters.
One of the most natural representations for modelling spatial objects in computers is a discrete representation in the form of a 2D square raster or a 3D cubic grid, since these are naturally obtained by segmenting sensor images. The main difficulty, however, is that discrete representations are only approximations of the original objects, and can only be as accurate as the cell size allows. If digitisation is done by real sensor devices, there is the additional difficulty of sensor distortion. To overcome this, digital shape features must be used that abstract from the inaccuracies of digital representation. To ensure the correspondence of continuous and digital features, it is necessary to relate the shape features of the underlying continuous objects and to determine the resolution the digital representation requires. This volume gives an overview and a classification of current approaches to describing the relation between continuous and discrete shape features, based on digital geometric concepts of discrete structures. Audience: This book will be of interest to researchers and graduate students whose work involves computer vision, image processing, knowledge representation or representation of spatial objects.
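As a toy illustration of the resolution question this blurb raises (a sketch of my own, not code from the book), the snippet below applies Gauss digitisation to a disk at several grid resolutions and compares the digital area with the continuous one; the approximation error shrinks with the cell size.

```python
import numpy as np

def digitize_disk(radius, cells_per_unit):
    """Gauss digitisation: mark every grid cell whose centre lies inside the disk."""
    h = 1.0 / cells_per_unit                   # cell size
    n = int(np.ceil(2 * radius / h)) + 2       # enough cells to cover the disk
    coords = (np.arange(n) - n / 2 + 0.5) * h  # cell-centre coordinates
    xx, yy = np.meshgrid(coords, coords)
    return (xx ** 2 + yy ** 2 <= radius ** 2), h

for res in (1, 4, 16, 64):
    mask, h = digitize_disk(radius=1.0, cells_per_unit=res)
    digital_area = mask.sum() * h * h
    print(f"{res:3d} cells/unit: digital area {digital_area:.4f} vs pi = {np.pi:.4f}")
```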
Hybrid Computational Intelligence: Challenges and Utilities is a comprehensive resource that begins with the basics and main components of computational intelligence. It brings together many different aspects of the current research on HCI technologies, such as neural networks, support vector machines, fuzzy logic and evolutionary computation, while also covering a wide range of applications and implementation issues, from pattern recognition and system modeling, to intelligent control problems and biomedical applications. The book also explores the most widely used applications of hybrid computation as well as the history of their development. Each individual methodology provides hybrid systems with complementary reasoning and searching methods which allow the use of domain knowledge and empirical data to solve complex problems.
This book presents theories and techniques for perception of textures by computer. Texture is a homogeneous visual pattern that we perceive in the surfaces of objects such as textiles, tree bark or stones. Texture analysis is one of the first important steps in computer vision, since texture provides important cues for recognizing real-world objects. A major part of the book is devoted to two-dimensional analysis of texture patterns by extracting statistical and structural features. It also deals with the shape-from-texture problem, which addresses recovery of three-dimensional surface shapes based on the geometry of projection of the surface texture to the image plane. Perception is still largely mysterious. Realizing a computer vision system that can work in the real world requires more research and experiment. Capability of textural perception is a key component. We hope this book will contribute to the advancement of computer vision toward robust, useful systems. We would like to express our appreciation to Professor Takeo Kanade at Carnegie Mellon University for his encouragement and help in writing this book; to the members of the Computer Vision Section at the Electrotechnical Laboratory for providing an excellent research environment; and to Carl W. Harris at Kluwer Academic Publishers for his help in preparing the manuscript.
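As a hedged sketch of the statistical feature extraction the blurb mentions (my own minimal example, not the book's code), the snippet below builds a grey-level co-occurrence matrix for horizontally adjacent pixels and derives two classic texture features from it.

```python
import numpy as np

def cooccurrence(img, levels=8):
    """Co-occurrence matrix for horizontally adjacent pixel pairs."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantise grey levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    return glcm / glcm.sum()                                        # normalise to probabilities

rng = np.random.default_rng(0)
texture = rng.integers(0, 256, (64, 64))          # stand-in for a real texture patch
P = cooccurrence(texture)
i, j = np.indices(P.shape)
print("contrast   :", (P * (i - j) ** 2).sum())   # high for busy, high-variation textures
print("homogeneity:", (P / (1 + np.abs(i - j))).sum())
```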
This book tackles the 6G odyssey, providing a concerted technology roadmap towards the 6G vision focused on the interoperability between the wireless and optical domains, including the benefits introduced through virtualization and software defined radio. The authors aim to be at the forefront of beyond-5G technologies by reflecting the integrated work of several major European collaborative projects (H2020-ETN-SECRET, 5GSTEPFWD, and SPOTLIGHT). The book is structured to provide insights towards the 6G horizon, reporting on the most recent developments in the international 6G research effort. The authors address a variety of telecom stakeholders, including practicing engineers in the field developing commercial solutions for 5G and beyond-5G products; postgraduate researchers who require a basis on which to build their research, through highlights of the current challenges in radio, optical and cloud-based networking for ultra-dense networks, including novel approaches; and project managers who could use the principles and applications for shaping new research proposals in this highly dynamic field.
This edited book explores the use of technology to enable us to visualize the life sciences in a more meaningful and engaging way. It will enable those interested in visualization techniques to gain a better understanding of the applications that can be used in visualization, imaging and analysis, education, engagement and training. The reader will also learn about the use of visualization techniques and technologies in historical and forensic settings, and will be able to explore the utilization of technologies from a number of fields to enable an engaging and meaningful visual representation of the biomedical sciences. The book has something for a diverse and inclusive audience, spanning healthcare, patient education, animal health and disease, and the pedagogies around the use of technologies in these related fields. The first four chapters cover healthcare and detail how technology can be used to illustrate emergency surgical access to the airway, pressure sores, robotic surgery in partial nephrectomy, and respiratory viruses. The last six chapters, in the education section, cover augmented reality and learning neuroanatomy, historical artefacts, virtual reality in canine anatomy, holograms to educate children in cardiothoracic anatomy, 3D models of cetaceans, and the impact of the pandemic on digital anatomical educational resources.
To enhance the overall viewing experience (for cinema, TV, games, AR/VR) the media industry is continuously striving to improve image quality. Currently the emphasis is on High Dynamic Range (HDR) and Wide Colour Gamut (WCG) technologies, which yield images with greater contrast and more vivid colours. The uptake of these technologies, however, has been hampered by the significant challenge of understanding the science behind visual perception. Vision Models for High Dynamic Range and Wide Colour Gamut Imaging provides university researchers and graduate students in computer science, computer engineering, vision science, as well as industry R&D engineers, an insight into the science and methods for HDR and WCG. It presents the underlying principles and latest practical methods in a detailed and accessible way, highlighting how the use of vision models is a key element of all state-of-the-art methods for these emerging technologies.
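To make the HDR side concrete, here is a minimal sketch of one classic perception-inspired method, the global Reinhard tone-mapping operator, which compresses scene luminance into display range via L/(1+L) after scaling by a key value (my own example; this code is not from the book).

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
    """Global Reinhard operator: scale by the log-average luminance, then compress."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))   # log-average ("key") of the scene
    L = key * luminance / log_avg                        # scaled luminance
    return L / (1.0 + L)                                 # maps [0, inf) into [0, 1)

rng = np.random.default_rng(1)
hdr = rng.lognormal(mean=0.0, sigma=2.0, size=(4, 4))    # synthetic HDR luminance map
print(reinhard_tonemap(hdr).round(3))
```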
This book covers current technological innovations and applications in image processing, introducing analysis techniques and describing applications in remote sensing and manufacturing, among others. The authors introduce new concepts of colour space transformation such as colour interpolation, and discuss the shearlet and wavelet transforms and their implementation. They also offer a perspective on remote-sensing concepts and techniques such as image mining and geographical and agricultural resources, and include several applications of biomedical image analysis of human organs. In addition, the principle of moving object detection and tracking, including recent trends in moving vehicle and ship detection, is described. Presents developments of current research in various areas of image processing; includes applications of image processing in remote sensing, astronomy, and manufacturing; pertains to researchers, academics, students, and practitioners in image processing.
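As one concrete instance of the wavelet transform discussed above, the sketch below (example values mine, not from the book) performs a single-level 2D Haar decomposition, splitting an image into one approximation band and three detail bands.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation band
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar2d(img)
print(LL.shape)   # (4, 4): each band is half-resolution
```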
Over the past 15 years, there has been a growing need in the medical image computing community for principled methods to process nonlinear geometric data. Riemannian geometry has emerged as one of the most powerful mathematical and computational frameworks for analyzing such data. Riemannian Geometric Statistics in Medical Image Analysis is a complete reference on statistics on Riemannian manifolds and more general nonlinear spaces, with applications in medical image analysis. It provides an introduction to the core methodology, followed by a presentation of state-of-the-art methods. The presented core methodology takes its place in the field of geometric statistics, the statistical analysis of data that are elements of nonlinear geometric spaces. Content includes: the foundations of Riemannian geometric methods for statistics on manifolds, with emphasis on concepts rather than on proofs; applications of statistics on manifolds and shape spaces in medical image computing; and diffeomorphic deformations and their applications. Beyond medical image computing, the methods described also apply to domains such as signal processing (radar signal processing and brain-computer interaction), computer vision (object and face recognition), geometric deep learning, and other domains where statistics of geometric features appear, so the book is suitable for researchers and graduate students in medical imaging, engineering and computer science.
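A small illustration of statistics on manifolds (a sketch under my own assumptions, not taken from the book): the Fréchet mean of points on the unit sphere, computed by iterating the exponential and logarithm maps rather than taking a flat Euclidean average.

```python
import numpy as np

def log_map(p, q):
    """Riemannian log at p on the unit sphere: tangent vector pointing towards q."""
    v = q - np.dot(p, q) * p                              # component of q orthogonal to p
    nv = np.linalg.norm(v)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))   # geodesic distance
    return np.zeros_like(p) if nv < 1e-12 else theta * v / nv

def exp_map(p, v):
    """Riemannian exp at p: walk along the geodesic with initial velocity v."""
    t = np.linalg.norm(v)
    return p if t < 1e-12 else np.cos(t) * p + np.sin(t) * v / t

def frechet_mean(points, iters=50):
    mean = points[0]
    for _ in range(iters):
        tangent = np.mean([log_map(mean, q) for q in points], axis=0)
        mean = exp_map(mean, tangent)          # gradient step that stays on the sphere
    return mean

pts = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
print(frechet_mean(pts))   # symmetric point ~ (1,1,1)/sqrt(3)
```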
Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition from understanding single images to analyzing image sequences, or video understanding. Video understanding deals with understanding of video sequences, e.g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic-looking images and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting are derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis.
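The shift from static objects to motion-based recognition can be illustrated with the simplest possible motion cue, frame differencing; the sketch below (mine, not from the book) flags pixels whose intensity changes between consecutive frames.

```python
import numpy as np

def motion_mask(prev_frame, frame, thresh=25):
    """Mark pixels whose grey value changed by more than `thresh` between frames."""
    return np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh

rng = np.random.default_rng(2)
f0 = rng.integers(0, 256, (48, 64), dtype=np.uint8)   # stand-ins for two video frames
f1 = f0.copy()
f1[10:20, 10:20] = 255                                # a "moving" bright patch
print("changed pixels:", motion_mask(f0, f1).sum())
```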
This book presents original research into language teacher education (LTE) activities in digital spaces, making use of a multimodal Conversation Analysis (CA) approach to examine multiple datasets and bring new insights into the theory, research, and practice of second/foreign language teacher education. The author conceptualizes a model of Conversation Analytic Language Teacher Education (CALTE), proposing a new knowledge base for LTE, identifying research-informed defining features, mapping the scope of an original praxis base, and providing research evidence from the implementation of this approach in and for digital spaces. The result is an argument for wide implementation and on-going improvement of the CALTE approach, and the book will be of interest to language teacher education professionals, multimodal CA researchers, and applied linguists.
Presents a strategic perspective and design methodology that guide the process of developing digital products and services that provide 'real experience' to users. Only when the material experienced runs its course to fulfilment is it then regarded as 'real experience' that is distinctively senseful, evaluated as valuable, and harmoniously related to others. Based on the theoretical background of human experience, the book focuses on these three questions: How can we understand the current dominant designs of digital products and services? What are the user experience factors that are critical to provide the real experience? What are the important HCI design elements that can effectively support the various UX factors that are critical to real experience? Design for Experience is intended for people who are interested in the experiences behind the way we use our products and services, for example designers and students interested in interaction, visual graphics and information design or practitioners and entrepreneurs in pursuit of new products or service-based start-ups.
Face recognition has been actively studied over the past decade and continues to be a big research challenge. Just recently, researchers have begun to investigate face recognition under unconstrained conditions. Unconstrained Face Recognition provides a comprehensive review of this biometric, especially face recognition from video, assembling a collection of novel approaches that are able to recognize human faces under various unconstrained situations. The underlying basis of these approaches is that, unlike conventional face recognition algorithms, they exploit the inherent characteristics of the unconstrained situation and thus improve the recognition performance when compared with conventional algorithms. Unconstrained Face Recognition is structured to meet the needs of a professional audience of researchers and practitioners in industry. This volume is also suitable for advanced-level students in computer science.
This volume assesses approaches to the construction of computer vision systems. It shows that there is a spectrum of approaches with different degrees of maturity and robustness. The useful exploitation of computer vision in industry and elsewhere and the development of the discipline itself depend on understanding the way these approaches influence one another. The chief topic discussed is autonomy. True autonomy may not be achievable in machines in the near future, and the workshop concluded that it may be more desirable - and is certainly more pragmatic - to leave a person in the processing loop. The second conclusion of the workshop concerns the manner in which a system is designed for an application. It was agreed that designers should first specify the required functionality, then identify the knowledge appropriate to that task, and finally choose the appropriate techniques and algorithms. The third conclusion concerns the methodologies employed in developing vision systems: craft, engineering, and science are mutually relevant and contribute to one another. The contributors place heavy emphasis on providing the reader with concrete examples of operational systems. The book is based on a workshop held as part of the activities of an ESPRIT Basic Research Action.
This book constitutes the refereed proceedings of the 6th International Conference on Computer, Communication, and Signal Processing, ICCSP 2022, held in Chennai, India, in February 2022.* The 21 full and 2 short papers presented in this volume were carefully reviewed and selected from 111 submissions. The papers are categorized into topical sub-headings: artificial intelligence and machine learning; Cyber security; and internet of things. *The conference was held as a virtual event due to the COVID-19 pandemic.
This updated and revised edition of a classic work provides a summary of methods for numerical computation of high resolution conventional and scanning transmission electron microscope images. At the limits of resolution, image artifacts due to the instrument and the specimen interaction can complicate image interpretation. Image calculations can help the user to interpret and understand high resolution information in recorded electron micrographs. The book contains expanded sections on aberration correction, including a detailed discussion of higher order (multipole) aberrations and their effect on high resolution imaging, new imaging modes such as ABF (annular bright field), and the latest developments in parallel processing using GPUs (graphic processing units), as well as updated references. Beginning and experienced users at the advanced undergraduate or graduate level will find the book to be a unique and essential guide to the theory and methods of computation in electron microscopy.
This book explains the theory and application of evolutionary computer vision, a new paradigm where challenging vision problems can be approached using the techniques of evolutionary computing. This methodology achieves excellent results in defining fitness functions and representations for problems, merging evolutionary computation with mathematical optimization to automatically create emergent visual behaviors. In the first part of the book the author surveys the literature in concise form, defines the relevant terminology, and offers historical and philosophical motivations for the key research problems in the field. For researchers from the computer vision community, he offers a simple introduction to the evolutionary computing paradigm. The second part of the book focuses on implementing evolutionary algorithms that solve given problems using working programs in the major fields of low-, intermediate- and high-level computer vision. This book will be of value to researchers, engineers, and students in the fields of computer vision, evolutionary computing, robotics, biologically inspired mechatronics, electronics engineering, control, and artificial intelligence.
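As a hedged sketch of the evolutionary paradigm (a toy example of my own, not the author's system): a minimal (mu + lambda) evolution strategy that evolves an image binarisation threshold, using Otsu-style between-class variance as the fitness function.

```python
import numpy as np

def fitness(threshold, pixels):
    """Between-class variance of a two-class split (higher is better)."""
    lo, hi = pixels[pixels < threshold], pixels[pixels >= threshold]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / pixels.size, hi.size / pixels.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

rng = np.random.default_rng(3)
# Synthetic bimodal "image": dark background and bright foreground populations.
pixels = np.concatenate([rng.normal(60, 10, 500), rng.normal(180, 15, 500)])

pop = rng.uniform(0, 255, 10)                          # initial population of thresholds
for _ in range(30):
    children = pop + rng.normal(0, 5, pop.size)        # Gaussian mutation
    both = np.concatenate([pop, children])
    scores = np.array([fitness(t, pixels) for t in both])
    pop = both[np.argsort(scores)[-10:]]               # (mu + lambda) survivor selection
print("evolved threshold ~", pop[-1].round(1))         # should land between the two modes
```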
Connectomics: Applications to Neuroimaging is unique in presenting the frontier of neuro-applications using brain connectomics techniques. The book describes state-of-the-art research that applies brain connectivity analysis techniques to a broad range of neurological and psychiatric disorders (Alzheimer's, epilepsy, stroke, autism, Parkinson's, drug or alcohol addiction, depression, bipolar disorder, and schizophrenia), brain fingerprint applications, speech-language assessments, and cognitive assessment. With this book the reader will learn: the basic mathematical principles underlying connectomics; how connectomics is applied to a wide range of neuro-applications; and the future directions of connectomics techniques. This book is an ideal reference for researchers and graduate students in computer science, data science, computational neuroscience, computational physics, or mathematics who need to understand how computational models derived from brain connectivity data are being used in clinical applications, as well as for neuroscientists and medical researchers wanting an overview of the technical methods. Features: combines connectomics methods with relevant and interesting neuro-applications; covers most of the hot topics in neuroscience and clinical areas; appeals to researchers in a wide range of disciplines, from computer science, engineering, data science, mathematics, computational physics and computational neuroscience to neuroscience and medical research interested in the technical methods of connectomics.
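A minimal sketch of the kind of connectome construction such work builds on (assumptions mine, not the book's pipeline): correlate regional time series to obtain a functional connectivity matrix, threshold it, and read off each region's degree.

```python
import numpy as np

rng = np.random.default_rng(4)
timeseries = rng.normal(size=(5, 200))                # 5 brain regions x 200 time points
timeseries[1] += 0.8 * timeseries[0]                  # inject correlation between regions 0 and 1

conn = np.corrcoef(timeseries)                        # functional connectivity matrix
adj = (np.abs(conn) > 0.3) & ~np.eye(5, dtype=bool)   # threshold, drop self-loops
print("degree per region:", adj.sum(axis=1))          # simple graph-theoretic summary
```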
Face analysis is essential for a large number of applications such as human-computer interaction or multimedia (e.g. content indexing and retrieval). Although many approaches are under investigation, performance under uncontrolled conditions is still not satisfactory. The variations that impact facial appearance (e.g. pose, expression, illumination, occlusion, motion blur) make it a difficult problem to solve. This book describes the progress towards this goal, from a core building block - landmark detection - to the higher level of micro and macro expression recognition. Specifically, the book addresses the modeling of temporal information to coincide with the dynamic nature of the face. It also includes a benchmark of recent solutions along with details about the acquisition of a dataset for such tasks.
Cooperative and Graph Signal Processing: Principles and Applications presents the fundamentals of signal processing over networks and the latest advances in graph signal processing. A range of key concepts are clearly explained, including learning, adaptation, optimization, control, inference and machine learning. Building on the principles of these areas, the book then shows how they are relevant to understanding distributed communication, networking and sensing, and social networks. Finally, the book shows how the principles are applied to a range of applications, such as big data, media and video, smart grids, the Internet of Things, wireless health and neuroscience. With this book readers will learn the basics of adaptation and learning in networks, the essentials of detection, estimation and filtering, Bayesian inference in networks, optimization and control, machine learning, signal processing on graphs, signal processing for distributed communication, social networks from the perspective of flow of information, and how to apply signal processing methods in distributed settings.
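To make signal processing on graphs concrete, here is a minimal sketch (my example, not from the book) of the graph Fourier transform: the eigenvectors of the graph Laplacian play the role of Fourier modes, and a smooth graph signal concentrates its energy in the low-frequency ones.

```python
import numpy as np

# Adjacency matrix of a 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                 # combinatorial graph Laplacian

eigvals, U = np.linalg.eigh(L)                 # columns of U are the graph Fourier modes
signal = np.array([1.0, 1.1, 0.9, 1.0])        # a smooth signal on the nodes
spectrum = U.T @ signal                        # graph Fourier transform
print("frequencies:", eigvals.round(2))
print("spectrum   :", spectrum.round(2))       # energy sits in the low-frequency modes
```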
This book presents a comprehensive study of the different tools and techniques available to perform network forensics. Various aspects of network forensics are reviewed, along with related technologies and their limitations, to help security practitioners and researchers better understand the problem, the current solution space, and the future research scope for detecting and investigating network intrusions efficiently.

Forensic computing is rapidly gaining importance, since the amount of crime involving digital systems is steadily increasing. Furthermore, the area is still underdeveloped and poses many technical and legal challenges. The rapid development of the Internet over the past decade appears to have facilitated an increase in the incidence of online attacks, and many factors embolden attackers: the speed with which an attack can be carried out, the anonymity provided by the medium, the nature of a medium in which digital information is stolen without actually being removed, the increased availability of potential victims, and the global impact of the attacks.

Forensic analysis is performed at two different levels: computer forensics and network forensics. Computer forensics deals with the collection and analysis of data from computer systems, networks, communication streams and storage media in a manner admissible in a court of law. Network forensics deals with the capture, recording and analysis of network events in order to discover evidential information about the source of security attacks. Network forensics is not another term for network security; it is an extended phase of network security in which the data for forensic analysis are collected from security products such as firewalls and intrusion detection systems, and the results of this data analysis are utilized for investigating the attacks. Network forensics generally refers to the collection and analysis of network data such as network traffic, firewall logs, IDS logs, etc. Technically, it is a member of the existing and expanding field of digital forensics, and has been defined as "the use of scientifically proven techniques to collect, fuse, identify, examine, correlate, analyze, and document digital evidence from multiple, actively processing and transmitting digital sources for the purpose of uncovering facts related to the planned intent, or measured success of unauthorized activities meant to disrupt, corrupt, and/or compromise system components as well as providing information to assist in response to or recovery from these activities."

Network forensics plays a significant role in the security of today's organizations. It helps to learn the details of external attacks, ensuring similar future attacks are thwarted; it is essential for investigating insider abuses, which constitute the second costliest type of attack within organizations; and law enforcement requires it for crimes in which a computer or digital system is either the target of a crime or used as a tool in carrying out a crime. Network security protects the system against attack, while network forensics focuses on recording evidence of the attack: network security products are generalized, look for possible harmful behaviors, and monitor continuously throughout the day, whereas network forensics involves post-mortem investigation of the attack and is initiated after crime notification. Many tools assist in capturing data transferred over networks so that an attack or the malicious intent of an intrusion may be investigated, and various network forensic frameworks have been proposed in the literature.
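As a toy illustration of post-mortem log analysis (the log format and addresses below are hypothetical, not from the book), the sketch tallies source addresses in firewall-style deny records to surface the noisiest hosts.

```python
from collections import Counter

# Hypothetical firewall log lines: action, source IP, destination port
log = [
    "DENY 203.0.113.9 22",
    "DENY 203.0.113.9 23",
    "ALLOW 198.51.100.4 443",
    "DENY 203.0.113.9 3389",
    "DENY 192.0.2.77 22",
]

denied = Counter(line.split()[1] for line in log if line.startswith("DENY"))
for ip, hits in denied.most_common():
    print(f"{ip}: {hits} denied connection attempts")
```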
This book offers a systematic and comprehensive introduction to the visual simultaneous localization and mapping (vSLAM) technology, which is a fundamental and essential component for many applications in robotics, wearable devices, and autonomous driving vehicles. The book starts from very basic mathematic background knowledge such as 3D rigid body geometry, the pinhole camera projection model, and nonlinear optimization techniques, before introducing readers to traditional computer vision topics like feature matching, optical flow, and bundle adjustment. The book employs a light writing style, instead of the rigorous yet dry approach that is common in academic literature. In addition, it includes a wealth of executable source code with increasing difficulty to help readers understand and use the practical techniques. The book can be used as a textbook for senior undergraduate or graduate students, or as reference material for researchers and engineers in related areas.
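Since the blurb names the pinhole camera projection model explicitly, here is a minimal sketch of that model (my own example with illustrative intrinsic values, not the book's code): a 3D point in camera coordinates maps to pixel coordinates through the intrinsic matrix K.

```python
import numpy as np

def project(point_cam, K):
    """Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    x, y, z = point_cam
    uvw = K @ np.array([x / z, y / z, 1.0])   # perspective divide, then intrinsics
    return uvw[:2]

K = np.array([[500.0,   0.0, 320.0],          # fx, cx in pixels (illustrative values)
              [  0.0, 500.0, 240.0],          # fy, cy
              [  0.0,   0.0,   1.0]])
print(project(np.array([0.2, -0.1, 2.0]), K))   # -> pixel (370, 215)
```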
Computer Vision for Assistive Healthcare describes how advanced computer vision techniques provide tools to support common human needs, such as mental functioning, personal mobility, sensory functions and daily living activities, and how image processing, pattern recognition, machine learning, language processing and computer graphics cooperate with robotics to provide such tools. Users will learn about the emerging computer vision techniques for supporting mental functioning, algorithms for analyzing human behavior, and how smart interfaces and virtual reality tools lead to the development of advanced rehabilitation systems able to perform human action and activity recognition. In addition, the book covers the technology behind intelligent wheelchairs, how computer vision technologies have the potential to assist blind people, and the computer vision-based solutions recently employed for safety and health monitoring.
Despite the fact that images constitute the main objects in computer vision and image analysis, there is remarkably little concern about their actual definition. In this book a complete account of image structure is proposed in terms of rigorously defined machine concepts, using basic tools from algebra, analysis, and differential geometry. Machine technicalities such as discretisation and quantisation details are de-emphasised, and robustness with respect to noise is manifest. From the foreword by Jan Koenderink: 'It is my hope that the book will find a wide audience, including physicists - who still are largely unaware of the general importance and power of scale space theory, mathematicians - who will find in it a principled and formally tight exposition of a topic awaiting further development, and computer scientists - who will find here a unified and conceptually well founded framework for many apparently unrelated and largely historically motivated methods they already know and love. The book is suited for self-study and graduate courses; the carefully formulated exercises are designed to get to grips with the subject matter and prepare the reader for original research.'
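A minimal sketch of the scale-space idea Koenderink refers to (my example; scipy is assumed available): the image is embedded in a one-parameter family of progressively Gaussian-blurred versions, and image structure is studied as a function of the scale parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
image = rng.normal(size=(64, 64))                     # stand-in for a real image

# Scale space: the same image observed at coarser and coarser inner scales.
scales = [1.0, 2.0, 4.0, 8.0]
stack = [gaussian_filter(image, sigma=s) for s in scales]
for s, layer in zip(scales, stack):
    print(f"sigma={s:4.1f}  intensity std={layer.std():.3f}")  # fine detail washes out with scale
```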
You may like...
- Asymptotic Expansion of a Partition… - Gaetan Borot, Alice Guionnet, … (Hardcover) R2,001 (Discovery Miles 20,010)
- Analog Interfaces for Digital Signal… - Frank Op't Eynde, Willy M.C. Sansen (Hardcover) R4,499 (Discovery Miles 44,990)
- Functional and High-Dimensional… - German Aneiros, Ivana Horova, … (Hardcover) R4,373 (Discovery Miles 43,730)
- Robustness and Complex Data Structures… - Claudia Becker, Roland Fried, … (Hardcover)
- Bayesian Networks and Influence… - Uffe B. Kjaerulff, Anders L. Madsen (Hardcover)