Among the most natural representations for modelling spatial objects in computers are discrete representations in the form of a 2D square raster or a 3D cubic grid, since these are naturally obtained by segmenting sensor images. However, the main difficulty is that discrete representations are only approximations of the original objects, and can only be as accurate as the cell size allows. If digitisation is done by real sensor devices, then there is the additional difficulty of sensor distortion. To overcome this, digital shape features must be used that abstract from the inaccuracies of digital representation. In order to ensure the correspondence of continuous and digital features, it is necessary to relate shape features of the underlying continuous objects and to determine the necessary resolution of the digital representation. This volume gives an overview and a classification of current approaches to describing the relation between continuous and discrete shape features that are based on digital geometric concepts of discrete structures. Audience: This book will be of interest to researchers and graduate students whose work involves computer vision, image processing, knowledge representation or representation of spatial objects.
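The blurb's central claim, that a discrete representation can only be as accurate as its cell size allows, is easy to demonstrate. The following sketch (an illustration of the general point, not code from the book) rasterises a disc on square grids of two cell sizes and compares the pixel-counted area against the true area:

```python
# Illustrative sketch (not from the book): rasterise a disc of radius r on a
# square grid and compare the pixel-counted area with the true area pi*r^2.
# The error shrinks as the cell size decreases, showing that a discrete
# representation is only as accurate as its resolution allows.
import math

def rasterised_area(radius, cell):
    """Count cells whose centre lies inside the disc, times cell area."""
    n = int(math.ceil(radius / cell)) + 1
    count = 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            x, y = (i + 0.5) * cell, (j + 0.5) * cell
            if x * x + y * y <= radius * radius:
                count += 1
    return count * cell * cell

true_area = math.pi * 10.0 ** 2
coarse = abs(rasterised_area(10.0, 1.0) - true_area) / true_area
fine = abs(rasterised_area(10.0, 0.1) - true_area) / true_area
print(coarse, fine)  # the finer grid gives the smaller relative error
```

Shrinking the cell size further drives the relative error toward zero, which is exactly the resolution/accuracy trade-off the book formalises.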
Signal Processing in Medicine and Biology: Innovations in Big Data Processing provides an interdisciplinary look at state-of-the-art innovations in biomedical signal processing, especially as it applies to large data sets and machine learning. Chapters are presented with detailed mathematics and complete implementation specifics so that readers can completely master these techniques. The book presents tutorials and examples of successful applications and will appeal to a wide range of professionals, researchers, and students interested in applications of signal processing, medicine, and biology at the intersection between healthcare, engineering, and computer science.
This book presents theories and techniques for perception of textures by computer. Texture is a homogeneous visual pattern that we perceive in surfaces of objects such as textiles, tree barks or stones. Texture analysis is one of the first important steps in computer vision since texture provides important cues to recognize real-world objects. A major part of the book is devoted to two-dimensional analysis of texture patterns by extracting statistical and structural features. It also deals with the shape-from-texture problem which addresses recovery of the three-dimensional surface shapes based on the geometry of projection of the surface texture to the image plane. Perception is still largely mysterious. Realizing a computer vision system that can work in the real world requires more research and experiment. Capability of textural perception is a key component. We hope this book will contribute to the advancement of computer vision toward robust, useful systems. We would like to express our appreciation to Professor Takeo Kanade at Carnegie Mellon University for his encouragement and help in writing this book; to the members of Computer Vision Section at Electrotechnical Laboratory for providing an excellent research environment; and to Carl W. Harris at Kluwer Academic Publishers for his help in preparing the manuscript.
This book aims to improve algorithms through novel theories and complex data analysis across a range of areas, including object detection, remote sensing, data transmission, data fusion, gesture recognition, and medical image processing and analysis. The book is directed at Ph.D. students, professors, researchers, and software developers working in the areas of digital video processing and computer vision technologies.
Hybrid Computational Intelligence: Challenges and Utilities is a comprehensive resource that begins with the basics and main components of computational intelligence. It brings together many different aspects of the current research on HCI technologies, such as neural networks, support vector machines, fuzzy logic and evolutionary computation, while also covering a wide range of applications and implementation issues, from pattern recognition and system modeling, to intelligent control problems and biomedical applications. The book also explores the most widely used applications of hybrid computation as well as the history of their development. Each individual methodology provides hybrid systems with complementary reasoning and searching methods which allow the use of domain knowledge and empirical data to solve complex problems.
Technological advances have helped to enhance disaster resilience through better risk reduction, response, mitigation, rehabilitation and reconstruction. In former times, it was local and traditional knowledge that was mainly relied upon for disaster risk reduction. Much of this local knowledge is still valid in today's world, even though possibly in different forms and contexts, and local knowledge remains a shared part of life within the communities. In contrast, with the advent of science and technology, scientists and engineers have become owners of advanced technologies, which have contributed significantly to reducing disaster risks across the globe. This book analyses emerging technologies and their effects in enhancing disaster resilience. It also evaluates the gaps, challenges, capacities required and the way forward for future disaster management. A wide variety of technologies are addressed, focusing specifically on new technologies such as cyber physical systems, geotechnology, drone, and virtual reality (VR)/ augmented reality (AR). Other sets of emerging advanced technologies including an early warning system and a decision support system are also reported on. Moreover, the book provides a variety of discussions regarding information management, communication, and community resilience at the time of a disaster. This book's coverage of different aspects of new technologies makes it a valuable resource for students, researchers, academics, policymakers, and development practitioners.
This book provides a comprehensive review of all aspects relating to visual quality assessment for stereoscopic images, including statistical mathematics, stereo vision and deep learning. It covers the fundamentals of stereoscopic image quality assessment (SIQA), the relevant engineering problems and research significance, and also offers an overview of the significant advances in visual quality assessment for stereoscopic images, discussing and analyzing the current state-of-the-art in SIQA algorithms, the latest challenges and research directions as well as novel models and paradigms. In addition, a large number of vivid figures and formulas help readers gain a deeper understanding of the foundation and new applications of objective stereoscopic image quality assessment technologies. Reviewing the latest advances, challenges and trends in stereoscopic image quality assessment, this book is a valuable resource for researchers, engineers and graduate students working in related fields, including imaging, displaying and image processing, especially those interested in SIQA research.
Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene, by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition away from understanding single images to analyzing image sequences, or video understanding. Video understanding deals with understanding of video sequences, e. g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic looking images, and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting are derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis.
To enhance the overall viewing experience (for cinema, TV, games, AR/VR) the media industry is continuously striving to improve image quality. Currently the emphasis is on High Dynamic Range (HDR) and Wide Colour Gamut (WCG) technologies, which yield images with greater contrast and more vivid colours. The uptake of these technologies, however, has been hampered by the significant challenge of understanding the science behind visual perception. Vision Models for High Dynamic Range and Wide Colour Gamut Imaging provides university researchers and graduate students in computer science, computer engineering, vision science, as well as industry R&D engineers, an insight into the science and methods for HDR and WCG. It presents the underlying principles and latest practical methods in a detailed and accessible way, highlighting how the use of vision models is a key element of all state-of-the-art methods for these emerging technologies.
Face recognition has been actively studied over the past decade and continues to be a big research challenge. Just recently, researchers have begun to investigate face recognition under unconstrained conditions. Unconstrained Face Recognition provides a comprehensive review of this biometric, especially face recognition from video, assembling a collection of novel approaches that are able to recognize human faces under various unconstrained situations. The underlying basis of these approaches is that, unlike conventional face recognition algorithms, they exploit the inherent characteristics of the unconstrained situation and thus improve the recognition performance when compared with conventional algorithms. Unconstrained Face Recognition is structured to meet the needs of a professional audience of researchers and practitioners in industry. This volume is also suitable for advanced-level students in computer science.
Over the past 15 years, there has been a growing need in the medical image computing community for principled methods to process nonlinear geometric data. Riemannian geometry has emerged as one of the most powerful mathematical and computational frameworks for analyzing such data. Riemannian Geometric Statistics in Medical Image Analysis is a complete reference on statistics on Riemannian manifolds and more general nonlinear spaces with applications in medical image analysis. It provides an introduction to the core methodology followed by a presentation of state-of-the-art methods. Beyond medical image computing, the methods described in this book may also apply to other domains such as signal processing, computer vision, geometric deep learning, and other domains where statistics on geometric features appear. As such, the presented core methodology takes its place in the field of geometric statistics, the statistical analysis of data being elements of nonlinear geometric spaces. The foundational material and the advanced techniques presented in the later parts of the book can be useful in domains outside medical imaging and present important applications of geometric statistics methodology. Content includes: the foundations of Riemannian geometric methods for statistics on manifolds, with an emphasis on concepts rather than proofs; applications of statistics on manifolds and shape spaces in medical image computing; and diffeomorphic deformations and their applications. As the methods described apply to domains such as signal processing (radar signal processing and brain computer interaction), computer vision (object and face recognition), and other domains where statistics of geometric features appear, this book is suitable for researchers and graduate students in medical imaging, engineering and computer science.
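A minimal, self-contained taste of the geometric statistics described above (my own toy example, not taken from the book) is the Frechet mean on the circle: the point minimising the sum of squared geodesic distances to the data, computed here by gradient descent. Unlike the naive arithmetic mean, it handles angles that wrap around 2*pi correctly.

```python
# Toy sketch of a statistic on a nonlinear space: the Frechet mean on the
# unit circle, found by gradient descent on the sum of squared geodesic
# (angular) distances.  My own illustration, not code from the book.
import math

def geodesic(a, b):
    """Signed shortest angular distance from a to b on the circle."""
    return (b - a + math.pi) % (2 * math.pi) - math.pi

def frechet_mean(angles, steps=100, lr=0.5):
    m = angles[0]
    for _ in range(steps):
        # The Riemannian gradient step: move toward the average log-map.
        grad = sum(geodesic(m, a) for a in angles) / len(angles)
        m = (m + lr * grad) % (2 * math.pi)
    return m

# Angles clustered near 0 but wrapping past 2*pi: the naive arithmetic
# mean (~2.16) is badly wrong, the Frechet mean is not.
angles = [0.1, 0.2, 2 * math.pi - 0.1]
mean = frechet_mean(angles)
print(mean)  # approx 0.0667, i.e. the mean of 0.1, 0.2 and -0.1
```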
Presents a strategic perspective and design methodology that guide the process of developing digital products and services that provide 'real experience' to users. Only when an experience runs its course to fulfilment is it regarded as 'real experience': one that is distinctly meaningful, evaluated as valuable, and harmoniously related to others. Based on the theoretical background of human experience, the book focuses on these three questions: How can we understand the current dominant designs of digital products and services? What are the user experience factors that are critical to provide the real experience? What are the important HCI design elements that can effectively support the various UX factors that are critical to real experience? Design for Experience is intended for people who are interested in the experiences behind the way we use our products and services, for example designers and students interested in interaction, visual graphics and information design or practitioners and entrepreneurs in pursuit of new products or service-based start-ups.
This edited book explores the use of technology to enable us to visualise the life sciences in a more meaningful and engaging way. It will enable those interested in visualisation techniques to gain a better understanding of the applications that can be used in visualisation, imaging and analysis, education, engagement and training. The reader will also be able to learn about the use of visualisation techniques and technologies for the historical and forensic settings. The reader will be able to explore the utilisation of technologies from a number of fields to enable an engaging and meaningful visual representation of the biomedical sciences. The chapters presented in this volume cover a diverse range of topics, with something for everyone. We present here chapters on technology enhanced learning in neuroanatomy; 3D printing and surgical planning; changes in higher education utilising technology, decolonising the curriculum and visual representations of the human body in education. We also showcase, inspired by the pandemic, how not to use personal protective equipment; anatomical and historical visualisation of obstetrics and gynaecology; 3D modelling of carpal bones and augmented reality for arachnid phobias for public engagement. In addition, we also present face modelling for surgical education in a multidisciplinary setting, military medical museum 3D digitising of historical pathology specimens and finally computational fluid dynamics.
This book tackles the 6G odyssey, providing a concerted technology roadmap towards the 6G vision focused on the interoperability between the wireless and optical domain, including the benefits that are introduced through virtualization and software defined radio. The authors aim to be at the forefront of beyond 5G technologies by reflecting the integrated works of several major European collaborative projects (H2020-ETN-SECRET, 5GSTEPFWD, and SPOTLIGHT). The book is structured so as to provide insights towards the 6G horizon, reporting on the most recent developments on the international 6G research effort. The authors address a variety of telecom stakeholders, which includes practicing engineers on the field developing commercial solutions for 5G and beyond products; postgraduate researchers that require a basis on which to build their research by highlighting the current challenges on radio, optical and cloud-based networking for ultra-dense networks, including novel approaches; and project managers that could use the principles and applications for shaping new research proposals on this highly dynamic field.
The book presents the proceedings of four conferences: The 24th International Conference on Image Processing, Computer Vision, & Pattern Recognition (IPCV'20), The 6th International Conference on Health Informatics and Medical Systems (HIMS'20), The 21st International Conference on Bioinformatics & Computational Biology (BIOCOMP'20), and The 6th International Conference on Biomedical Engineering and Sciences (BIOENG'20). The conferences took place in Las Vegas, NV, USA, July 27-30, 2020, and are part of the larger 2020 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE'20), which features 20 major tracks. Authors include academics, researchers, professionals, and students. Presents the proceedings of four conferences as part of the 2020 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE'20); Includes the tracks on Image Processing, Computer Vision, & Pattern Recognition, Health Informatics & Medical Systems, Bioinformatics, Computational Biology & Biomedical Engineering; Features papers from IPCV'20, HIMS'20, BIOCOMP'20, and BIOENG'20.
This volume assesses approaches to the construction of computer vision systems. It shows that there is a spectrum of approaches with different degrees of maturity and robustness. The useful exploitation of computer vision in industry and elsewhere and the development of the discipline itself depend on understanding the way these approaches influence one another. The chief topic discussed is autonomy. True autonomy may not be achievable in machines in the near future, and the workshop concluded that it may be more desirable - and is certainly more pragmatic - to leave a person in the processing loop. The second conclusion of the workshop concerns the manner in which a system is designed for an application. It was agreed that designers should first specify the required functionality, then identify the knowledge appropriate to that task, and finally choose the appropriate techniques and algorithms. The third conclusion concerns the methodologies employed in developing vision systems: craft, engineering, and science are mutually relevant and contribute to one another. The contributors place heavy emphasis on providing the reader with concrete examples of operational systems. The book is based on a workshop held as part of the activities of an ESPRIT Basic Research Action.
This book includes a selection of peer-reviewed papers presented at the 10th China Academic Conference on Printing and Packaging, which was held in Xi'an, China, on November 14-17, 2019. The conference was jointly organized by the China Academy of Printing Technology, Beijing Institute of Graphic Communication, and Shaanxi University of Science and Technology. With 9 keynote talks and 118 papers on graphic communication and packaging technologies, the conference attracted more than 300 scientists. The proceedings cover the latest findings in a broad range of areas, including color science and technology, image processing technology, digital media technology, mechanical and electronic engineering, information engineering and artificial intelligence technology, materials and detection, digital process management technology in printing and packaging, and other technologies. As such, the book appeals to university researchers, R&D engineers and graduate students in the graphic arts, packaging, color science, image science, material science, computer science, digital media, and network technology.
As the first book of a three-part series, this book is offered as a tribute to pioneers in vision, such as Bela Julesz, David Marr, King-Sun Fu, Ulf Grenander, and David Mumford. The authors hope to provide foundation and, perhaps more importantly, further inspiration for continued research in vision. This book covers David Marr's paradigm and various underlying statistical models for vision. The mathematical framework herein integrates three regimes of models (low-, mid-, and high-entropy regimes) and provides foundation for research in visual coding, recognition, and cognition. Concepts are first explained for understanding and then supported by findings in psychology and neuroscience, after which they are established by statistical models and associated learning and inference algorithms. A reader will gain a unified, cross-disciplinary view of research in vision and will accrue knowledge spanning from psychology to neuroscience to statistics.
This book covers current technological innovations and applications in image processing, introducing analysis techniques and describing applications in remote sensing and manufacturing, among others. The authors include new concepts of color space transformation like color interpolation, among others. Also, the concept of Shearlet Transform and Wavelet Transform and their implementation are discussed. The authors include a perspective about concepts and techniques of remote sensing like image mining, geographical, and agricultural resources. The book also includes several applications of human organ biomedical image analysis. In addition, the principle of moving object detection and tracking - including recent trends in moving vehicles and ship detection - is described. Presents developments of current research in various areas of image processing; Includes applications of image processing in remote sensing, astronomy, and manufacturing; Pertains to researchers, academics, students, and practitioners in image processing.
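To make the wavelet transforms mentioned above concrete, here is a minimal sketch of one level of the 1-D Haar transform (a generic illustration of the technique, not code from the book): pairwise averages capture coarse structure, pairwise differences capture detail, and the original signal is exactly recoverable.

```python
# One level of the 1-D Haar wavelet transform, the simplest wavelet
# decomposition: split a signal into pairwise averages (coarse) and
# pairwise differences (detail).  Reconstruction is exact.
def haar_step(signal):
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

def haar_inverse(avgs, dets):
    out = []
    for a, d in zip(avgs, dets):
        out += [a + d, a - d]  # undo the average/difference pair
    return out

x = [9, 7, 3, 5, 6, 10, 2, 6]
a, d = haar_step(x)
print(a, d)  # [8.0, 4.0, 8.0, 4.0] [1.0, -1.0, -2.0, -2.0]
```

Recursing on the averages gives the full multi-level decomposition; discarding small detail coefficients is the basis of wavelet compression and denoising.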
This edited book explores the use of technology to enable us to visualize the life sciences in a more meaningful and engaging way. It will enable those interested in visualization techniques to gain a better understanding of the applications that can be used in visualization, imaging and analysis, education, engagement and training. The reader will also be able to learn about the use of visualization techniques and technologies for the historical and forensic settings. The reader will be able to explore the utilization of technologies from a number of fields to enable an engaging and meaningful visual representation of the biomedical sciences. We have something for a diverse and inclusive audience ranging from healthcare, patient education, animal health and disease and pedagogies around the use of technologies in these related fields. The first four chapters cover healthcare and detail how technology can be used to illustrate emergency surgical access to the airway, pressure sores, robotic surgery in partial nephrectomy, and respiratory viruses. The last six chapters in the education section cover augmented reality and learning neuroanatomy, historical artefacts, virtual reality in canine anatomy, holograms to educate children in cardiothoracic anatomy, 3D models of cetaceans, and the impact of the pandemic on digital anatomical educational resources.
This book explains the theory and application of evolutionary computer vision, a new paradigm where challenging vision problems can be approached using the techniques of evolutionary computing. By merging evolutionary computation with mathematical optimization, this methodology yields effective fitness functions and problem representations, enabling the automatic creation of emergent visual behaviors. In the first part of the book the author surveys the literature in concise form, defines the relevant terminology, and offers historical and philosophical motivations for the key research problems in the field. For researchers from the computer vision community, he offers a simple introduction to the evolutionary computing paradigm. The second part of the book focuses on implementing evolutionary algorithms that solve given problems using working programs in the major fields of low-, intermediate- and high-level computer vision. This book will be of value to researchers, engineers, and students in the fields of computer vision, evolutionary computing, robotics, biologically inspired mechatronics, electronics engineering, control, and artificial intelligence.
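The evolutionary loop underlying such methods can be sketched in a few lines. The following toy (1+1) evolution strategy (my own illustration with a made-up fitness function, not code from the book) mutates a single real-valued parameter, such as a binarisation threshold, and keeps the child whenever its fitness is at least as good:

```python
# Toy (1+1) evolution strategy: evolve one real parameter to maximise a
# fitness function.  The quadratic fitness is a stand-in for an image-based
# objective (e.g. segmentation quality at a given threshold).
import random

def fitness(t):
    return -(t - 0.6) ** 2  # toy objective with its peak at t = 0.6

random.seed(0)
parent = random.random()  # random initial "genome" in [0, 1]
for _ in range(200):
    child = parent + random.gauss(0.0, 0.1)  # Gaussian mutation
    child = min(1.0, max(0.0, child))        # keep within [0, 1]
    if fitness(child) >= fitness(parent):    # elitist selection
        parent = child
print(parent)  # ends near the optimum 0.6
```

Real evolutionary vision systems evolve much richer representations (trees of image operators, filter banks) with image-derived fitness, but the mutate/evaluate/select loop is the same.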
Cooperative and Graph Signal Processing: Principles and Applications presents the fundamentals of signal processing over networks and the latest advances in graph signal processing. A range of key concepts are clearly explained, including learning, adaptation, optimization, control, inference and machine learning. Building on the principles of these areas, the book then shows how they are relevant to understanding distributed communication, networking and sensing and social networks. Finally, the book shows how the principles are applied to a range of applications, such as big data, media and video, smart grids, the Internet of Things, wireless health and neuroscience. With this book readers will learn the basics of adaptation and learning in networks, the essentials of detection, estimation and filtering, Bayesian inference in networks, optimization and control, machine learning, signal processing on graphs, signal processing for distributed communication, social networks from the perspective of flow of information, and how to apply signal processing methods in distributed settings.
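The core object of graph signal processing mentioned above can be shown in a few lines (a minimal sketch of the standard construction, not code from the book): build the Laplacian L = D - A of a small path graph, use its eigenvectors as the graph Fourier basis, and observe that a smooth signal concentrates its energy in the low-frequency components.

```python
# Minimal graph signal processing sketch: the graph Fourier transform of a
# signal is its projection onto the eigenvectors of the graph Laplacian
# L = D - A.  A smooth signal on a path graph concentrates its energy in
# the low-frequency (small-eigenvalue) components.
import numpy as np

n = 8
A = np.zeros((n, n))
for i in range(n - 1):          # path graph: node i linked to node i+1
    A[i, i + 1] = A[i + 1, i] = 1
L = np.diag(A.sum(axis=1)) - A  # combinatorial Laplacian

eigvals, U = np.linalg.eigh(L)  # eigenvectors form the graph Fourier basis
x = np.linspace(0.0, 1.0, n)    # a slowly varying ("smooth") graph signal
x_hat = U.T @ x                 # graph Fourier transform

low = float(np.sum(x_hat[: n // 2] ** 2))
high = float(np.sum(x_hat[n // 2:] ** 2))
print(low, high)  # most energy sits in the low-frequency coefficients
```

This is the notion of graph frequency on which filtering, sampling and learning over networks are built in the book.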
Visualization, Visual Analytics and Virtual Reality in Medicine: State-of-the-art Techniques and Applications describes important techniques and applications that show an understanding of actual user needs as well as technological possibilities. The book includes user research, for example, task and requirement analysis, visualization design and algorithmic ideas without going into the details of implementation. This reference will be suitable for researchers and students in visualization and visual analytics in medicine and healthcare, medical image analysis scientists and biomedical engineers in general. Visualization and visual analytics have become prevalent in public health and clinical medicine, medical flow visualization, multimodal medical visualization and virtual reality in medical education and rehabilitation. Relevant applications now include digital pathology, virtual anatomy and computer-assisted radiation treatment planning.
This updated and revised edition of a classic work provides a summary of methods for numerical computation of high resolution conventional and scanning transmission electron microscope images. At the limits of resolution, image artifacts due to the instrument and the specimen interaction can complicate image interpretation. Image calculations can help the user to interpret and understand high resolution information in recorded electron micrographs. The book contains expanded sections on aberration correction, including a detailed discussion of higher order (multipole) aberrations and their effect on high resolution imaging, new imaging modes such as ABF (annular bright field), and the latest developments in parallel processing using GPUs (graphic processing units), as well as updated references. Beginning and experienced users at the advanced undergraduate or graduate level will find the book to be a unique and essential guide to the theory and methods of computation in electron microscopy.