This is the first comprehensive treatment of the theoretical aspects of the discrete cosine transform (DCT), which is recommended by various standards organizations, such as the CCITT and ISO, as the primary compression tool in digital image coding. The main purpose of the book is to provide a complete source for the user of this signal processing tool, where both the basics and the applications are detailed. An extensive bibliography covers both the theory and applications of the DCT. The novice will find the book useful in its self-contained treatment of the theory of the DCT, the detailed description of various algorithms supported by computer programs and the range of possible applications, including codecs used for teleconferencing, videophone, progressive image transmission, and broadcast TV. The more advanced user will appreciate the extensive references. Tables describing ASIC VLSI chips for implementing the DCT and motion estimation, and details of image compression boards, are also provided.
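The 2-D DCT at the heart of this book can be illustrated in a few lines of NumPy. This is a naive orthonormal sketch (the matrix form C·B·Cᵀ), not one of the fast algorithms or VLSI implementations the book catalogues:

```python
import numpy as np

def dct2(block):
    """Naive 2-D DCT-II of an N x N block, orthonormal scaling."""
    n = block.shape[0]
    k = np.arange(n)
    # 1-D DCT-II basis: C[u, x] = a(u) * cos((2x + 1) * u * pi / (2n))
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)   # DC row has a smaller scale factor
    return c @ block @ c.T       # separable: rows then columns

block = np.full((8, 8), 100.0)   # a flat 8x8 image block
coeffs = dct2(block)
# A constant block compacts all its energy into the DC coefficient.
print(int(round(coeffs[0, 0])))  # 800 = 100 * 8
```

The energy compaction shown here (one nonzero coefficient for a flat block) is exactly the property that makes the DCT useful for compression: smooth image blocks yield few significant coefficients.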
This book presents a first-ever detailed analysis of the complex notation of 2-D and 3-D signals and describes how you can apply it to image processing, modulation, and other fields. It helps you significantly reduce your literature research time, better enables you to simulate signals and communication systems, and helps you to design compatible single-sideband systems.
This comprehensive and timely publication aims to be an essential reference source, building on the available literature in the field of Gamification for the economic and social development of countries while providing further research opportunities in this dynamic and growing field. Thus, the book aims to provide the opportunity for reflection on this important issue, increasing the understanding of the importance of Gamification in the context of organizational improvement, and providing relevant academic work, empirical research findings, and an overview of this relevant field of study. This text will provide the resources necessary for policymakers, technology developers, and managers to adopt and implement solutions for a more digital era.
This book introduces the challenges of robotic tactile perception and task understanding, and describes an advanced approach based on machine learning and sparse coding techniques. Further, a set of structured sparse coding models is developed to address the issues of dynamic tactile sensing. The book then proves that the proposed framework is effective in solving the problems of multi-finger tactile object recognition, multi-label tactile adjective recognition and multi-category material analysis, which are all challenging practical problems in the fields of robotics and automation. The proposed sparse coding model can be used to tackle the challenging visual-tactile fusion recognition problem, and the book develops a series of efficient optimization algorithms to implement the model. It is suitable as a reference book for graduate students with a basic knowledge of machine learning as well as professional researchers interested in robotic tactile perception and understanding, and machine learning.
This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is investigated for compression of Light Field images based on the HEVC technology. A new linear prediction method using sparse constraints is also described, enabling improved coding performance of the HEVC standard, particularly for images with complex textures based on repeated structures. Finally, the authors present a new, generalized intra-prediction framework for the HEVC standard, which unifies the directional prediction methods used in the current video compression standards, with linear prediction methods using sparse constraints. Experimental results for the compression of natural images are provided, demonstrating the advantage of the unified prediction framework over the traditional directional prediction modes used in HEVC standard.
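The directional and linear prediction modes discussed above all work from reconstructed reference samples. As a minimal, illustrative sketch (not the standard's exact sample handling), the DC intra mode used by H.264/HEVC-style encoders predicts every sample of a block as the mean of the references above and to the left:

```python
import numpy as np

def intra_dc_predict(top, left):
    """DC intra prediction for an N x N block: every predicted sample is
    the rounded mean of the N reconstructed samples above the block and
    the N samples to its left (simplified HEVC-style DC mode)."""
    n = len(top)
    dc = int(round((top.sum() + left.sum()) / (2 * n)))
    return np.full((n, n), dc, dtype=top.dtype)

# Reference samples from hypothetical neighbouring reconstructed blocks
top = np.array([100, 102, 98, 100], dtype=np.int32)
left = np.array([101, 99, 100, 100], dtype=np.int32)
pred = intra_dc_predict(top, left)
print(pred[0, 0])   # 100
```

Directional modes replace this flat fill with samples copied along an angle, and the sparse linear methods described in the book instead fit a model to the reference region; only the prediction residual is then transformed and coded.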
This book provides comprehensive coverage of the modern methods for geometric problems in the computing sciences. It also covers concurrent topics in data sciences including geometric processing, manifold learning, Google search, cloud data, and R-tree for wireless networks and BigData. The author investigates digital geometry and its related constructive methods in discrete geometry, offering detailed methods and algorithms. The book is divided into five sections: basic geometry; digital curves, surfaces and manifolds; discretely represented objects; geometric computation and processing; and advanced topics. Chapters especially focus on the applications of these methods to other types of geometry, algebraic topology, image processing, computer vision and computer graphics. Digital and Discrete Geometry: Theory and Algorithms targets researchers and professionals working in digital image processing analysis, medical imaging (such as CT and MRI) and informatics, computer graphics, computer vision, biometrics, and information theory. Advanced-level students in electrical engineering, mathematics, and computer science will also find this book useful as a secondary textbook or reference. Praise for this book: "This book does present a large collection of important concepts, of mathematical, geometrical, or algorithmical nature, that are frequently used in computer graphics and image processing. These concepts range from graphs through manifolds to homology. Of particular value are the sections dealing with discrete versions of classic continuous notions. The reader finds compact definitions and concise explanations that often appeal to intuition, avoiding finer, but then necessarily more complicated, arguments... As a first introduction, or as a reference for professionals working in computer graphics or image processing, this book should be of considerable value." - Prof. Dr. Rolf Klein, University of Bonn.
This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, the classification of chapter problems as either easily solved or challenging, and its extensive glossary of key words, examples and connections with the fabric of CV make the book an invaluable resource for advanced undergraduate and first-year graduate students in Engineering, Computer Science or Applied Mathematics. It offers insights into the design of CV experiments, inclusion of image processing methods in CV projects, as well as the reconstruction and interpretation of recorded natural scenes.
This book provides fresh insights into the cutting edge of multimedia data mining, reflecting how the research focus has shifted towards networked social communities, mobile devices and sensors. The work describes how the history of multimedia data processing can be viewed as a sequence of disruptive innovations. Across the chapters, the discussion covers the practical frameworks, libraries, and open source software that enable the development of ground-breaking research into practical applications. Features: reviews how innovations in mobile, social, cognitive, cloud and organic based computing impacts upon the development of multimedia data mining; provides practical details on implementing the technology for solving real-world problems; includes chapters devoted to privacy issues in multimedia social environments and large-scale biometric data processing; covers content and concept based multimedia search and advanced algorithms for multimedia data representation, processing and visualization.
The book covers the most crucial aspects of real-time hyperspectral image processing: causality and real-time capability. Two new concepts of real-time hyperspectral image processing have recently been introduced: Progressive HyperSpectral Imaging (PHSI) and Recursive HyperSpectral Imaging (RHSI). Both can be used to design algorithms and form an integral part of real-time hyperspectral image processing. This book focuses on the progressive nature of algorithms and their real-time, causal processing implementation in two major applications, endmember finding and anomaly detection, both of which are fundamental tasks in hyperspectral imaging but are generally not encountered in multispectral imaging. This book particularly addresses PHSI in real-time processing, while the book Recursive Hyperspectral Sample and Band Processing: Algorithm Architecture and Implementation (Springer, 2016) can be considered its companion volume.
Although the capabilities of computer image analysis do not yet match those of the human visual system, recent developments have made great progress towards tackling the challenges posed by the perceptual analysis of images. This unique text/reference highlights a selection of important, practical applications of advanced image analysis methods for medical images. The book covers the complete methodology for processing, analysing and interpreting diagnostic results of sample computed tomography (CT) images. The text also presents significant problems related to new approaches and paradigms in image understanding and semantic image analysis. To further engage the reader, example source code is provided for the implemented algorithms in the described solutions. Topics and features: describes the most important methods and algorithms used for image analysis, including holistic and syntactic methods; examines the fundamentals of cognitive computer image analysis for computer-aided diagnosis and semantic image description, introducing the cognitive resonance model; presents original approaches for the semantic analysis of CT perfusion and CT angiography images of the brain and carotid artery; discusses techniques for creating 3D visualisations of large datasets, and efficient and reliable algorithms for 3D rendering; reviews natural user interfaces in medical imaging systems, covering innovative Gesture Description Language technology; concludes with a summary of significant developments in advanced image recognition techniques and their practical applications, along with possible directions for future research. This cutting-edge work is an invaluable practical resource for researchers and professionals interested in medical informatics, computer-aided diagnosis, computer graphics, and intelligent information systems.
Walks the reader through adaptive approaches to radar signal processing by detailing the basic concepts of various techniques and then developing equations to analyze their performance. Finally, it presents curves that illustrate the attained performance.
1) Learn how to develop computer vision application algorithms
2) Learn to use software tools for analysis and development
3) Learn the underlying processes needed for image analysis
4) Learn concepts so that readers can develop their own algorithms
5) Software tools provided
This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video; they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design: a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts of the standard, insight into how it was developed, and in-depth discussion of algorithms and architectures for its implementation.
A Unique, Cutting-Edge Approach to Optical Filter Design With more and more information being transmitted over fiber-optic lines, optical filtering has become crucial to the advanced functionality of today’s communications networks. Helping researchers and engineers keep pace with this rapidly evolving technology, this book presents digital processing techniques for optical filter design. This higher-level approach focuses on filter characteristics and enables readers to quickly calculate the filter response as well as tackle larger and more complex filters. The authors incorporate numerous theoretical and experimental results from the literature and discuss applications to a variety of systems—including the new wavelength division multiplexing (WDM) technology, which is fast becoming the preferred method for system upgrade and expansion. Special features of this book include:
This highly practical and self-contained guidebook explains the principles and major applications of digital hologram recording and numerical reconstruction (Digital Holography). A special chapter is dedicated to digital holographic interferometry, with applications in deformation and shape measurement and refractive index determination. Applications in imaging and microscopy are also described. Special techniques such as digital light-in-flight holography, holographic endoscopy, information encrypting, comparative holography, and related techniques of speckle metrology are also treated.
Soft computing techniques, which are based on the information processing of biological systems, are now widely used in pattern recognition, prediction and planning, as well as acting on the environment. Strictly speaking, soft computing is not a subject of homogeneous concepts and techniques; rather, it is an amalgamation of distinct methods that conform to its guiding principle. At present, the main aim of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. The principal constituents of soft computing techniques are probabilistic reasoning, fuzzy logic, neuro-computing, genetic algorithms, belief networks, chaotic systems, as well as learning theory. This book covers contributions from various authors to demonstrate the use of soft computing techniques in various applications of engineering.
Visual content understanding is a complex and important challenge for applications in automatic multimedia information indexing, medicine, robotics, and surveillance. Yet the performance of such systems can be improved by the fusion of individual modalities/techniques for content representation and machine learning. This comprehensive text/reference presents a thorough overview of "Fusion in Computer Vision," from an interdisciplinary and multi-application viewpoint. Presenting contributions from an international selection of experts, the work describes numerous successful approaches, evaluated in the context of international benchmarks that model realistic use cases at significant scales. Topics and features: examines late fusion approaches for concept recognition in images and videos, including the bag-of-words model; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, for example-based event recognition in video; proposes rotation-based ensemble classifiers for high-dimensional data, which encourage both individual accuracy and diversity within the ensemble; reviews application-focused strategies of fusion in video surveillance, biomedical information retrieval, and content detection in movies; discusses the modeling of mechanisms of human interpretation of complex visual content. This authoritative collection is essential reading for researchers and students interested in the domain of information fusion for complex visual content understanding, and related fields.
Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as computer-aided design, tele-medicine, mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging research topic. View-based 3-D Object Retrieval introduces and discusses the fundamental challenges in view-based 3-D object retrieval, proposes a collection of selected state-of-the-art methods for accomplishing this task developed by the authors, and summarizes recent achievements in view-based 3-D object retrieval. Part I presents an Introduction to View-based 3-D Object Retrieval, Part II discusses View Extraction, Selection, and Representation, Part III provides a deep dive into View-Based 3-D Object Comparison, and Part IV looks at future research and developments including Big Data application and geographical location-based applications.
This book discusses computational complexity of High Efficiency Video Coding (HEVC) encoders with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the HEVC encoding tools compression efficiency and computational complexity. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage from the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that employs the best complexity reduction and scaling methods presented throughout the book. The methods presented in this book are especially useful in power-constrained, portable multimedia devices to reduce energy consumption and to extend battery life. They can also be applied to portable and non-portable multimedia devices operating in real time with limited computational resources.
This book contains papers presented at the 2014 MICCAI Workshop on Computational Diffusion MRI, CDMRI'14. Detailing new computational methods applied to diffusion magnetic resonance imaging data, it offers readers a snapshot of the current state of the art and covers a wide range of topics from fundamental theoretical work on mathematical modeling to the development and evaluation of robust algorithms and applications in neuroscientific studies and clinical practice. Inside, readers will find information on brain network analysis, mathematical modeling for clinical applications, tissue microstructure imaging, super-resolution methods, signal reconstruction, visualization, and more. Contributions include both careful mathematical derivations and a large number of rich full-color visualizations. Computational techniques are key to the continued success and development of diffusion MRI and to its widespread transfer into the clinic. This volume will offer a valuable starting point for anyone interested in learning computational diffusion MRI. It also offers new perspectives and insights on current research challenges for those currently in the field. The book will be of interest to researchers and practitioners in computer science, MR physics, and applied mathematics.
This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. Topics and features: discusses in detail three major success stories - the development of the optical mouse, vision for consumer robotics, and vision for automotive safety; reviews state-of-the-art research on embedded 3D vision, UAVs, automotive vision, mobile vision apps, and augmented reality; examines the potential of embedded computer vision in such cutting-edge areas as the Internet of Things, the mining of large data streams, and in computational sensing; describes historical successes, current implementations, and future challenges.
With an emphasis on applications of computational models for solving modern challenging problems in biomedical and life sciences, this book aims to bring together collections of articles from biologists, medical/biomedical and health science researchers and computational scientists to focus on problems at the frontier of biomedical and life sciences. The goals of this book are to build interactions among scientists across several disciplines and to help industrial users apply advanced computational techniques to solving practical biomedical and life science problems. This book is for users in the fields of biomedical and life sciences who wish to keep abreast of the latest techniques in signal and image analysis. The book presents a detailed description of each of the applications. It can be used at both graduate and specialist levels.
In recent years, the paradigm of video coding has shifted from a frame-based approach to a content-based approach, particularly with the finalization of the ISO multimedia coding standard, MPEG-4. MPEG-4 is the emerging standard for the coding of multimedia content. It defines a syntax for a set of content-based functionalities, namely, content-based interactivity, compression and universal access. However, it does not specify how the video content is to be generated. To generate the video content, video has to be segmented into video objects, which are then tracked as they traverse the video frames. This book addresses the difficult problem of video segmentation, and the extraction and tracking of video object planes as defined in MPEG-4. It then focuses on the specific issue of face segmentation and coding as applied to videoconferencing, in order to improve the quality of videoconferencing images, especially in the facial region.
This book introduces Local Binary Patterns (LBP), arguably one of the most powerful texture descriptors, and LBP variants. This volume provides the latest reviews of the literature and a presentation of some of the best LBP variants by researchers at the forefront of texture analysis research and research on LBP descriptors and variants. The value of LBP variants is illustrated with reported experiments using many databases representing a diversity of computer vision applications in medicine, biometrics, and other areas. There is also a chapter that provides an excellent theoretical foundation for texture analysis and LBP in particular. A special section focuses on LBP and LBP variants in the area of face recognition, including thermal face recognition. This book will be of value to anyone already in the field as well as to those interested in learning more about this powerful family of texture descriptors.
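The basic LBP operator behind the descriptors this book surveys is simple enough to sketch directly: each pixel's eight 3x3 neighbours are thresholded against the centre and the resulting bits are packed into an 8-bit code. This illustrative NumPy version covers only the original operator, not the rotation-invariant or uniform variants the book presents:

```python
import numpy as np

def lbp_8(img):
    """Basic 3x3 LBP: threshold the 8 neighbours of each interior pixel
    against the centre pixel and pack the bits into a code in [0, 255]."""
    # Neighbour offsets, clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:h-1, 1:w-1]
    code = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        code |= (neigh >= centre).astype(np.uint8) << bit
    return code

img = np.array([[9, 9, 9],
                [1, 5, 9],
                [1, 1, 1]], dtype=np.int32)
print(lbp_8(img))   # [[15]] : the four brighter neighbours set bits 0-3
```

A histogram of these codes over an image region is the classic LBP texture feature; the variants covered in the book mostly change the sampling geometry or the code-grouping step while keeping this thresholding idea intact.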