Photographic imagery has come a long way from the pinhole cameras of the nineteenth century. Digital imagery, and its applications, develops in tandem with contemporary society's sophisticated literacy of this subtle medium. This book examines the ways in which digital images have become ever more ubiquitous as legal and medical evidence, just as they have become our primary source of news and have replaced paper-based financial documentation. Crucially, the contributions also analyze the profound problems which have arisen alongside the digital image, issues of veracity and provenance that demand a systematic and detailed response: it looks real, but is it? What camera captured it? Has it been doctored or subtly altered? In answering these slippery questions, the book covers how digital images are created, processed and stored before moving on to set out the latest techniques for forensically examining images, and finally addresses practical issues such as courtroom admissibility. In an environment where even novice users can alter digital media, this authoritative publication will do much to stabilize public trust in these real, yet vastly flexible, images of the world around us.
This book provides a deep analysis and wide coverage of a very strong trend in computer vision and visual indexing and retrieval, covering topics such as the incorporation of models of human visual attention into analysis and retrieval tasks. It bridges psycho-visual modelling of the Human Visual System and the classical and most recent models in visual content indexing and retrieval. The authors present a broad spectrum of visual tasks, such as recognition of textures in static images and of actions in video content, image retrieval, and different methods of visualizing images and multimedia content based on visual saliency. Furthermore, the book covers how interest in visual content is modelled by means of the latest classification models, such as Deep Neural Networks. This book is an exceptional resource as a secondary text for researchers and advanced-level students involved in the wide-ranging research on computer vision and visual information indexing and retrieval. Professionals working in this field will also find it a useful reference.
Semantic-based visual information retrieval is one of the most challenging research directions in content-based visual information retrieval. It provides efficient tools for accessing, interacting with, searching, and retrieving from collected databases of visual media. Building on research from over 30 leading experts from around the world, "Semantic-Based Visual Information Retrieval" presents state-of-the-art advancements and developments in the field, together with a selection of techniques and algorithms for semantic-based visual information retrieval. It covers many critical issues, such as multi-level representation and description, scene understanding, semantic modeling, image and video annotation, human-computer interaction, and more. The book also explains detailed solutions to a wide range of practical applications. Researchers, students, and practitioners will find this comprehensive and detailed volume to be a roadmap for applying suitable methods in semantic-based visual information retrieval.
This book is devoted to one of the most famous examples of automation handling tasks: the "bin-picking" problem. Picking up objects scrambled in a box is an easy task for humans, but its automation is very complex. This book describes three different approaches to solving the bin-picking problem, showing how modern sensors can be used for efficient bin-picking as well as how classic sensor concepts can be applied to novel bin-picking techniques. First, 3D point clouds are used as the basis, employing the known Random Sample Matching algorithm paired with a very efficient depth-map-based collision avoidance mechanism, resulting in a very robust bin-picking approach. Reducing the complexity of the sensor data, all computations are then done on depth maps. This allows the use of 2D image analysis techniques to fulfill the tasks and results in real-time data analysis. Combined with force/torque and acceleration sensors, a near-time-optimal bin-picking system emerges. Lastly, surface normal maps are employed as the basis for pose estimation. In contrast to known approaches, the normal maps are not used for 3D data computation but directly for the object localization problem, enabling the application of a new class of sensors for bin-picking.
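The idea of a depth-map-based collision check can be sketched in a few lines. This is an illustrative toy, not the book's actual algorithm; the rectangular gripper footprint and the depth convention (larger values lie farther from the camera) are assumptions made for the sketch:

```python
# Toy depth-map collision check for a bin-picking grasp (illustrative
# only). Depth values grow with distance from the camera, so a grasp
# descending to depth d collides if any pixel under the gripper
# footprint is closer to the camera (smaller depth) than d.

def collides(depth_map, row, col, half_size, grasp_depth):
    """Return True if any pixel in the square footprint around
    (row, col) lies above (closer than) the requested grasp depth."""
    rows = len(depth_map)
    cols = len(depth_map[0])
    for r in range(max(0, row - half_size), min(rows, row + half_size + 1)):
        for c in range(max(0, col - half_size), min(cols, col + half_size + 1)):
            if depth_map[r][c] < grasp_depth:
                return True
    return False

# Flat bin floor at depth 100 with one object poking up to depth 40.
bin_depth = [[100] * 5 for _ in range(5)]
bin_depth[2][2] = 40

print(collides(bin_depth, 2, 2, 1, 90))  # True: the object blocks the grasp
print(collides(bin_depth, 0, 0, 0, 90))  # False: the corner is clear
```

Working directly on the depth map like this is what makes 2D image analysis techniques applicable and keeps the check fast enough for real-time use.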
This book presents covert, semi-covert and overt techniques for communication over printed media by modifying images, texts or barcodes within the document. Basic and advanced techniques for modulating information into images, texts and barcodes are discussed. Conveying information over printed media can be useful for content authentication, author copyright, piracy deterrence, side information for marketing, and other applications. Practical issues are discussed, and experiments are provided to evaluate competitive approaches for hard-copy communication. This book is a useful resource for researchers, practitioners and graduate students in the field of hard-copy communication, providing the fundamentals along with basic and advanced techniques as examples of approaches that address the distortions and particularities of hard-copy media.
This volume presents the peer-reviewed proceedings of the international conference Imaging, Vision and Learning Based on Optimization and PDEs (IVLOPDE), held in Bergen, Norway, in August/September 2016. The contributions cover state-of-the-art research on mathematical techniques for image processing, computer vision and machine learning based on optimization and partial differential equations (PDEs). It has become an established paradigm to formulate problems within image processing and computer vision as PDEs, variational problems or finite dimensional optimization problems. This compact yet expressive framework makes it possible to incorporate a range of desired properties of the solutions and to design algorithms based on well-founded mathematical theory. A growing body of research has also approached more general problems within data analysis and machine learning from the same perspective, and demonstrated the advantages over earlier, more established algorithms. This volume will appeal to all mathematicians and computer scientists interested in novel techniques and analytical results for optimization, variational models and PDEs, together with experimental results on applications ranging from early image formation to high-level image and data analysis.
This reference provides information on electronic intelligence (ELINT) analysis techniques, with coverage of their applications, strengths and limitations. Now refined and updated, this second edition presents new concepts and techniques. The book is intended for newcomers to the field as well as engineers interested in signal analysis, ELINT analysts, and the designers, programmers and operators of radar, ECM, ECCM and ESM systems.
This book is about computational methods based on operator splitting. It consists of twenty-three chapters written by recognized splitting method contributors and practitioners, and covers a vast spectrum of topics and application areas, including computational mechanics, computational physics, image processing, wireless communication, nonlinear optics, and finance. Therefore, the book presents very versatile aspects of splitting methods and their applications, motivating the cross-fertilization of ideas.
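Operator splitting is easy to demonstrate on a toy problem. The sketch below is a generic textbook construction, not an example from the book: it applies first-order Lie and second-order Strang splitting to u' = -u - 0.5u^2, split into a linear part u' = -u (advanced exactly by u -> u*exp(-h)) and a nonlinear part u' = -0.5u^2 (advanced exactly by u -> u / (1 + 0.5hu)):

```python
import math

# Lie vs. Strang operator splitting for u' = -u - 0.5*u**2.
# Each sub-problem is solved exactly, so all error comes from the
# splitting itself; Strang's symmetric composition is second order
# while Lie's sequential composition is only first order.

def linear_flow(u, h):
    return u * math.exp(-h)

def nonlinear_flow(u, h):
    return u / (1.0 + 0.5 * h * u)

def lie_step(u, h):          # first order: full A step, then full B step
    return nonlinear_flow(linear_flow(u, h), h)

def strang_step(u, h):       # second order: half B, full A, half B
    return nonlinear_flow(linear_flow(nonlinear_flow(u, h / 2), h), h / 2)

def integrate(step, u0, T, n):
    u, h = u0, T / n
    for _ in range(n):
        u = step(u, h)
    return u

reference = integrate(strang_step, 1.0, 1.0, 100000)  # fine-step reference
err_lie = abs(integrate(lie_step, 1.0, 1.0, 10) - reference)
err_strang = abs(integrate(strang_step, 1.0, 1.0, 10) - reference)
print(err_strang < err_lie)  # Strang's higher order gives a smaller error
```

The same compose-the-sub-flows pattern carries over to the PDE and optimization settings the book covers, where each sub-problem is chosen to be cheap to solve.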
This is the first comprehensive treatment of the theoretical aspects of the discrete cosine transform (DCT), which is being recommended by various standards organizations, such as the CCITT and ISO, as the primary compression tool in digital image coding. The main purpose of the book is to provide a complete source for the user of this signal processing tool, where both the basics and the applications are detailed. An extensive bibliography covers both the theory and applications of the DCT. The novice will find the book useful in its self-contained treatment of the theory of the DCT, the detailed description of various algorithms supported by computer programs and the range of possible applications, including codecs used for teleconferencing, videophone, progressive image transmission, and broadcast TV. The more advanced user will appreciate the extensive references. Tables describing ASIC VLSI chips for implementing the DCT and motion estimation, and details on image compression boards, are also provided.
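For readers new to the transform, the defining 1-D DCT-II sum can be written down directly. This is a direct O(N^2) sketch with no normalization; the fast algorithms and 2-D separable extension the book describes are far more efficient:

```python
import math

# Direct (O(N^2)) evaluation of the unnormalized 1-D DCT-II:
#   X[k] = sum_n x[n] * cos(pi * k * (2n + 1) / (2N))
# Fast O(N log N) algorithms compute the same coefficients.

def dct_ii(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

# Energy compaction on a smooth ramp: the low-frequency coefficients
# dominate, which is why quantizing away high-frequency DCT terms
# compresses images with little visible loss.
X = dct_ii([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
print(abs(X[0]) > abs(X[7]))  # True: the DC term dominates
```

A constant signal maps entirely onto the k = 0 coefficient, which is the simplest way to see the compaction property the codecs listed above rely on.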
This book presents a first-ever detailed analysis of the complex notation of 2-D and 3-D signals and describes how you can apply it to image processing, modulation, and other fields. It helps you significantly reduce your literature research time, better enables you to simulate signals and communication systems, and helps you to design compatible single-sideband systems.
This comprehensive and timely publication aims to be an essential reference source, building on the available literature in the field of Gamification for the economic and social development of countries while providing further research opportunities in this dynamic and growing field. The book thus offers the opportunity for reflection on this important issue, increasing understanding of the importance of Gamification in the context of organizational improvement, and providing relevant academic work, empirical research findings, and an overview of this relevant field of study. This text will provide the resources necessary for policymakers, technology developers, and managers to adopt and implement solutions for a more digital era.
This book introduces the challenges of robotic tactile perception and task understanding, and describes an advanced approach based on machine learning and sparse coding techniques. Further, a set of structured sparse coding models is developed to address the issues of dynamic tactile sensing. The book then proves that the proposed framework is effective in solving the problems of multi-finger tactile object recognition, multi-label tactile adjective recognition and multi-category material analysis, which are all challenging practical problems in the fields of robotics and automation. The proposed sparse coding model can be used to tackle the challenging visual-tactile fusion recognition problem, and the book develops a series of efficient optimization algorithms to implement the model. It is suitable as a reference book for graduate students with a basic knowledge of machine learning as well as professional researchers interested in robotic tactile perception and understanding, and machine learning.
This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is investigated for the compression of Light Field images based on the HEVC technology. A new linear prediction method using sparse constraints is also described, enabling improved coding performance of the HEVC standard, particularly for images with complex textures based on repeated structures. Finally, the authors present a new, generalized intra-prediction framework for the HEVC standard, which unifies the directional prediction methods used in the current video compression standards with linear prediction methods using sparse constraints. Experimental results for the compression of natural images are provided, demonstrating the advantage of the unified prediction framework over the traditional directional prediction modes used in the HEVC standard.
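As a point of orientation, the simplest HEVC-style intra mode, DC prediction, fills a block with the mean of its already-decoded neighbour samples, and only the residual is coded. The sketch below is illustrative only; the standard's actual modes add reference-sample filtering, boundary smoothing, and 33 angular directions:

```python
# Toy DC intra prediction in the style of HEVC: predict a size x size
# block as the rounded mean of the decoded samples in the row above
# and the column to the left, then code only block - prediction.

def dc_predict(top, left, size):
    """Predict a size x size block as the average of reference samples."""
    refs = top[:size] + left[:size]
    dc = round(sum(refs) / len(refs))
    return [[dc] * size for _ in range(size)]

# Decoded neighbours of a flat 4x4 region.
top = [100, 102, 98, 100]
left = [101, 99, 100, 100]
pred = dc_predict(top, left, 4)
print(pred[0][0])  # 100: every sample is predicted as the reference mean
```

On smooth regions the residual after such a prediction is near zero, which is exactly what makes intra prediction effective before transform coding.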
This book provides comprehensive coverage of the modern methods for geometric problems in the computing sciences. It also covers concurrent topics in data sciences including geometric processing, manifold learning, Google search, cloud data, and R-tree for wireless networks and BigData. The author investigates digital geometry and its related constructive methods in discrete geometry, offering detailed methods and algorithms. The book is divided into five sections: basic geometry; digital curves, surfaces and manifolds; discretely represented objects; geometric computation and processing; and advanced topics. Chapters especially focus on the applications of these methods to other types of geometry, algebraic topology, image processing, computer vision and computer graphics. Digital and Discrete Geometry: Theory and Algorithms targets researchers and professionals working in digital image processing analysis, medical imaging (such as CT and MRI) and informatics, computer graphics, computer vision, biometrics, and information theory. Advanced-level students in electrical engineering, mathematics, and computer science will also find this book useful as a secondary text book or reference. Praise for this book: "This book does present a large collection of important concepts, of mathematical, geometrical, or algorithmical nature, that are frequently used in computer graphics and image processing. These concepts range from graphs through manifolds to homology. Of particular value are the sections dealing with discrete versions of classic continuous notions. The reader finds compact definitions and concise explanations that often appeal to intuition, avoiding finer, but then necessarily more complicated, arguments... As a first introduction, or as a reference for professionals working in computer graphics or image processing, this book should be of considerable value." - Prof. Dr. Rolf Klein, University of Bonn.
This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, the classification of chapter problems as either easily solved or challenging, and its extensive glossary of key words, examples and connections with the fabric of CV make the book an invaluable resource for advanced undergraduate and first-year graduate students in Engineering, Computer Science or Applied Mathematics. It offers insights into the design of CV experiments, inclusion of image processing methods in CV projects, as well as the reconstruction and interpretation of recorded natural scenes.
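The pixel neighborhoods that digital topology builds on are simple to state in code. The following sketch (a standard construction, not code from the book) shows the 4- and 8-connectivity used when analysing image regions:

```python
# The 4-neighborhood of a pixel contains its edge-adjacent pixels;
# the 8-neighborhood adds the diagonal neighbors. Region analysis
# (connected components, contour tracing) is defined relative to
# one of these adjacency relations.

def neighbours(r, c, connectivity=4):
    offsets4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    offsets8 = offsets4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    offsets = offsets4 if connectivity == 4 else offsets8
    return [(r + dr, c + dc) for dr, dc in offsets]

print(len(neighbours(5, 5, 4)))  # 4
print(len(neighbours(5, 5, 8)))  # 8
```

Choosing 4- versus 8-connectivity for foreground and background consistently is what keeps digital analogues of topological facts (such as the Jordan curve theorem) valid on a pixel grid.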
This book provides fresh insights into the cutting edge of multimedia data mining, reflecting how the research focus has shifted towards networked social communities, mobile devices and sensors. The work describes how the history of multimedia data processing can be viewed as a sequence of disruptive innovations. Across the chapters, the discussion covers the practical frameworks, libraries, and open source software that enable the development of ground-breaking research into practical applications. Features: reviews how innovations in mobile, social, cognitive, cloud and organic based computing impacts upon the development of multimedia data mining; provides practical details on implementing the technology for solving real-world problems; includes chapters devoted to privacy issues in multimedia social environments and large-scale biometric data processing; covers content and concept based multimedia search and advanced algorithms for multimedia data representation, processing and visualization.
1) Learn how to develop computer vision application algorithms
2) Learn to use software tools for analysis and development
3) Learn the underlying processes needed for image analysis
4) Learn concepts so that the reader can develop their own algorithms
5) Software tools provided
The book covers the most crucial parts of real-time hyperspectral image processing: causality and real-time capability. Two new concepts of real-time hyperspectral image processing, Progressive HyperSpectral Imaging (PHSI) and Recursive HyperSpectral Imaging (RHSI), have recently been introduced. Both can be used to design algorithms and also form an integral part of real-time hyperspectral image processing. This book focuses on the progressive nature of algorithms and their real-time, causal processing implementation in two major applications, endmember finding and anomaly detection, both of which are fundamental tasks in hyperspectral imaging but are generally not encountered in multispectral imaging. This book is written particularly to address PHSI in real-time processing, while the book Recursive Hyperspectral Sample and Band Processing: Algorithm Architecture and Implementation (Springer, 2016) can be considered its companion volume.
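Anomaly detection in hyperspectral imagery is classically done with the RX detector, which scores each pixel by its Mahalanobis distance from the background statistics. The sketch below is a drastically simplified stand-in that assumes a diagonal band covariance; real RX uses the full covariance matrix, and the progressive/recursive variants discussed here update the statistics causally as samples arrive:

```python
# Simplified, diagonal-covariance anomaly scoring in the spirit of
# the RX detector: score each pixel by its squared deviation from the
# global per-band mean, normalized by the per-band variance. A full
# RX detector would use the inverse of the complete band covariance.

def anomaly_scores(pixels):
    bands = len(pixels[0])
    n = len(pixels)
    mean = [sum(p[b] for p in pixels) / n for b in range(bands)]
    var = [sum((p[b] - mean[b]) ** 2 for p in pixels) / n + 1e-12
           for b in range(bands)]
    return [sum((p[b] - mean[b]) ** 2 / var[b] for b in range(bands))
            for p in pixels]

# Nine near-identical background pixels plus one spectral outlier.
cube = [[1.0, 2.0]] * 5 + [[1.1, 2.1]] * 4 + [[5.0, 9.0]]
scores = anomaly_scores(cube)
print(scores.index(max(scores)))  # 9: the outlier gets the largest score
```

The progressive and recursive formulations matter because the mean and covariance above can be updated sample by sample, so no full-image pass is needed before scores start coming out.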
Although the capabilities of computer image analysis do not yet match those of the human visual system, recent developments have made great progress towards tackling the challenges posed by the perceptual analysis of images. This unique text/reference highlights a selection of important, practical applications of advanced image analysis methods for medical images. The book covers the complete methodology for processing, analysing and interpreting diagnostic results of sample computed tomography (CT) images. The text also presents significant problems related to new approaches and paradigms in image understanding and semantic image analysis. To further engage the reader, example source code is provided for the implemented algorithms in the described solutions. Topics and features: describes the most important methods and algorithms used for image analysis, including holistic and syntactic methods; examines the fundamentals of cognitive computer image analysis for computer-aided diagnosis and semantic image description, introducing the cognitive resonance model; presents original approaches for the semantic analysis of CT perfusion and CT angiography images of the brain and carotid artery; discusses techniques for creating 3D visualisations of large datasets, and efficient and reliable algorithms for 3D rendering; reviews natural user interfaces in medical imaging systems, covering innovative Gesture Description Language technology; concludes with a summary of significant developments in advanced image recognition techniques and their practical applications, along with possible directions for future research. This cutting-edge work is an invaluable practical resource for researchers and professionals interested in medical informatics, computer-aided diagnosis, computer graphics, and intelligent information systems.
Walks the reader through adaptive approaches to radar signal processing by detailing the basic concepts of various techniques and then developing equations to analyze their performance. Finally, it presents curves that illustrate the attained performance.
This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video; they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design: a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts of the standard, insight into how it was developed, and in-depth discussion of algorithms and architectures for its implementation.