This highly practical and self-contained guidebook explains the principles and major applications of digital hologram recording and numerical reconstruction (Digital Holography). A special chapter is dedicated to digital holographic interferometry, with applications in deformation and shape measurement and refractive index determination. Applications in imaging and microscopy are also described. Special techniques such as digital light-in-flight holography, holographic endoscopy, information encryption, comparative holography, and related techniques of speckle metrology are also treated.
Visual content understanding is a complex and important challenge for applications in automatic multimedia information indexing, medicine, robotics, and surveillance. Yet the performance of such systems can be improved by the fusion of individual modalities/techniques for content representation and machine learning. This comprehensive text/reference presents a thorough overview of "Fusion in Computer Vision," from an interdisciplinary and multi-application viewpoint. Presenting contributions from an international selection of experts, the work describes numerous successful approaches, evaluated in the context of international benchmarks that model realistic use cases at significant scales. Topics and features: examines late fusion approaches for concept recognition in images and videos, including the bag-of-words model; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, for example-based event recognition in video; proposes rotation-based ensemble classifiers for high-dimensional data, which encourage both individual accuracy and diversity within the ensemble; reviews application-focused strategies of fusion in video surveillance, biomedical information retrieval, and content detection in movies; discusses the modeling of mechanisms of human interpretation of complex visual content. This authoritative collection is essential reading for researchers and students interested in the domain of information fusion for complex visual content understanding, and related fields.
Walks the reader through adaptive approaches to radar signal processing by detailing the basic concepts of various techniques and then developing equations to analyze their performance. Finally, it presents curves that illustrate the attained performance.
This book, written from the perspective of a designer and educator, brings to the attention of media historians, fellow practitioners and students the innovative practices of leading moving image designers. Moving image design, whether viewed as television and movie title sequences, movie visual effects, animating infographics, branding and advertising, or as an art form, is being increasingly recognised as an important dynamic part of contemporary culture. For many practitioners this has been long overdue. Central to these designers' practice is the hybridisation of digital and heritage methods. Macdonald uses interviews with world-leading motion graphic designers, moving image artists and Oscar nominated visual effects supervisors to examine the hybrid moving image, which re-invigorates both heritage practices and the handmade and analogue crafts. Now is the time to ensure that heritage skills do not atrophy, but that their qualities and provenance are understood as potent components with digital practices in new hybrids.
Soft computing techniques, which are based on the information processing of biological systems, are now used extensively in pattern recognition, prediction and planning, and acting on the environment. Strictly speaking, soft computing is not a homogeneous body of concepts and techniques; rather, it is an amalgamation of distinct methods that conform to its guiding principle. At present, the main aim of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. The principal constituents of soft computing are probabilistic reasoning, fuzzy logic, neuro-computing, genetic algorithms, belief networks, chaotic systems, and learning theory. This book collects contributions from various authors to demonstrate the use of soft computing techniques in a range of engineering applications.
This book constitutes the refereed post-conference proceedings of the 10th IFIP WG 5.14 International Conference on Computer and Computing Technologies in Agriculture, CCTA 2016, held in Dongying, China, in October 2016. The 55 revised papers presented were carefully reviewed and selected from 128 submissions. They cover a wide range of theories and applications of information technology in agriculture, including intelligent sensing, cloud computing, and key technologies of the Internet of Things; precision agriculture; animal husbandry information technology, including Internet + modern animal husbandry, livestock big data platforms and cloud computing applications, intelligent breeding equipment, and precision production models; and aquatic product networking and big data, including fishery IoT, intelligent aquaculture facilities, and big data applications.
This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. Topics and features: discusses in detail three major success stories - the development of the optical mouse, vision for consumer robotics, and vision for automotive safety; reviews state-of-the-art research on embedded 3D vision, UAVs, automotive vision, mobile vision apps, and augmented reality; examines the potential of embedded computer vision in such cutting-edge areas as the Internet of Things, the mining of large data streams, and in computational sensing; describes historical successes, current implementations, and future challenges.
This book illustrates how to use description logic-based formalisms to their full potential in the creation, indexing, and reuse of multimedia semantics. To do so, it introduces researchers to multimedia semantics by providing an in-depth review of state-of-the-art standards, technologies, ontologies, and software tools. It draws attention to the importance of formal grounding in the knowledge representation of multimedia objects, the potential of multimedia reasoning in intelligent multimedia applications, and presents both theoretical discussions and best practices in multimedia ontology engineering. Readers already familiar with mathematical logic, Internet, and multimedia fundamentals will learn to develop formally grounded multimedia ontologies, and map concept definitions to high-level descriptors. The core reasoning tasks, reasoning algorithms, and industry-leading reasoners are presented, while scene interpretation via reasoning is also demonstrated. Overall, this book offers readers an essential introduction to the formal grounding of web ontologies, as well as a comprehensive collection and review of description logics (DLs) from the perspectives of expressivity and reasoning complexity. It covers best practices for developing multimedia ontologies with formal grounding to guarantee decidability and obtain the desired level of expressivity while maximizing the reasoning potential. The capabilities of such multimedia ontologies are demonstrated by DL implementations with an emphasis on multimedia reasoning applications.
Software, Animation and the Moving Image brings a unique perspective to the study of computer-generated animation by placing interviews undertaken with animators alongside an analysis of the user interface of animation software. Wood develops a novel framework for considering computer-generated images found in visual effects and animations.
This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the HEVC encoding tools' compression efficiency and computational complexity. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that employs the best complexity reduction and scaling methods presented throughout the book. The methods presented in this book are especially useful in power-constrained, portable multimedia devices to reduce energy consumption and to extend battery life. They can also be applied to portable and non-portable multimedia devices operating in real time with limited computational resources.
This book contains papers presented at the 2014 MICCAI Workshop on Computational Diffusion MRI, CDMRI'14. Detailing new computational methods applied to diffusion magnetic resonance imaging data, it offers readers a snapshot of the current state of the art and covers a wide range of topics from fundamental theoretical work on mathematical modeling to the development and evaluation of robust algorithms and applications in neuroscientific studies and clinical practice. Inside, readers will find information on brain network analysis, mathematical modeling for clinical applications, tissue microstructure imaging, super-resolution methods, signal reconstruction, visualization, and more. Contributions include both careful mathematical derivations and a large number of rich full-color visualizations. Computational techniques are key to the continued success and development of diffusion MRI and to its widespread transfer into the clinic. This volume will offer a valuable starting point for anyone interested in learning computational diffusion MRI. It also offers new perspectives and insights on current research challenges for those currently in the field. The book will be of interest to researchers and practitioners in computer science, MR physics, and applied mathematics.
This book looks at the increasing interest in running microscopy processing algorithms on big image data by presenting the theoretical and architectural underpinnings of a web image processing pipeline (WIPP). Software-based methods and infrastructure components for processing big data microscopy experiments are presented to demonstrate how information processing of repetitive, laborious and tedious analysis can be automated with a user-friendly system. Interactions of web system components and their impact on computational scalability, provenance information gathering, interactive display, and computing are explained in a top-down presentation of technical details. Web Microanalysis of Big Image Data includes descriptions of WIPP functionalities, use cases, and components of the web software system (web server and client architecture, algorithms, and hardware-software dependencies). The book comes with test image collections and a web software system to increase the reader's understanding and to provide practical tools for conducting big image experiments. By providing educational materials and software tools at the intersection of microscopy image analyses and computational science, graduate students, postdoctoral students, and scientists will benefit from the practical experiences, as well as theoretical insights. Furthermore, the book provides software and test data, empowering students and scientists with tools to make discoveries with higher statistical significance. Once they become familiar with the web image processing components, they can extend and re-purpose the existing software to new types of analyses. Each chapter follows a top-down presentation, starting with a short introduction and a classification of related methods. Next, a description of the specific method used in accompanying software is presented. For several topics, examples of how the specific method is applied to a dataset (parameters, RAM requirements, CPU efficiency) are shown. Some tips are provided as practical suggestions to improve accuracy or computational performance.
"Digital Preservation Technology for Cultural Heritage" discusses the technology and processes involved in the digital preservation of cultural heritage. It covers topics in five major areas: digitization of cultural heritage; digital management in cultural heritage preservation; restoration techniques for rigid solid relics; restoration techniques for paintings; and the digital museum. It also includes application examples for the digital preservation of cultural heritage. The book is intended for researchers and advanced undergraduate and graduate students in computer graphics and image processing, as well as in cultural heritage preservation. Mingquan Zhou is a professor at the College of Information Science and Technology, Beijing Normal University, China. Guohua Geng is a professor at the College of Information Science and Technology, Northwest University, Xi'an, China. Zhongke Wu is a professor at the College of Information Science and Technology, Beijing Normal University, China.
Blind deconvolution is a classical image processing problem that has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration. Rather, the basic issue of deconvolvability is explored from a theoretical viewpoint. Some authors claim very good results, while quite a few claim that blind restoration does not work; the authors clearly detail when such methods can be expected to work and when they cannot. To avoid the assumptions needed for convergence analysis in the Fourier domain, the authors use a general method of convergence analysis for alternate minimization based on the three-point and four-point properties of points in the image space. They prove that all points in the image space satisfy the three-point property, and derive the conditions under which the four-point property is satisfied. This yields the conditions under which alternate minimization for blind deconvolution converges with a quadratic prior. Since the convergence properties depend on the chosen priors, one should design priors that avoid trivial solutions. Hence, a sparsity-based solution to blind deconvolution is also provided, using image priors whose cost increases with the amount of blur, which is another way to prevent trivial solutions in joint estimation. This book will be a highly useful resource for researchers and academics working in blind deconvolution.
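To make the alternate-minimization idea concrete, here is a minimal sketch under simplifying assumptions (quadratic prior on both unknowns, kernel zero-padded to the image size, circular convolution); it is an illustration of the general scheme, not the authors' algorithm. With a quadratic prior, each half-step has a closed-form Tikhonov/Wiener-style solution in the Fourier domain, and the two unknowns are updated in turn:

```python
import numpy as np

def tikhonov_step(G, H, lam):
    # Closed-form minimizer of ||h * x - g||^2 + lam * ||x||^2 in the
    # Fourier domain (G, H are 2-D FFTs of the same size, lam > 0).
    return np.conj(H) * G / (np.abs(H) ** 2 + lam)

def blind_deconv(g, k_init, iters=20, lam=1e-2):
    """Alternate minimization for blind deconvolution with a quadratic
    prior (illustrative sketch, not the book's method).
    g: blurred image; k_init: initial kernel, zero-padded to g's shape."""
    G = np.fft.fft2(g)
    K = np.fft.fft2(k_init)
    X = G.copy()
    for _ in range(iters):
        X = tikhonov_step(G, K, lam)  # image step, kernel held fixed
        K = tikhonov_step(G, X, lam)  # kernel step, image held fixed
    return np.real(np.fft.ifft2(X)), np.real(np.fft.ifft2(K))
```

As the description notes, a quadratic prior alone cannot rule out trivial solutions in the joint estimation (e.g. the kernel drifting toward a delta), which is exactly why the monograph turns to sparsity-based priors whose cost grows with the amount of blur.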
With an emphasis on applications of computational models for solving modern, challenging problems in the biomedical and life sciences, this book brings together articles from biologists, medical/biomedical and health science researchers, and computational scientists, focusing on problems at the frontier of the biomedical and life sciences. Its goals are to foster interaction among scientists across several disciplines and to help industrial users apply advanced computational techniques to practical biomedical and life science problems. The book is intended for those in the biomedical and life sciences who wish to keep abreast of the latest techniques in signal and image analysis. It presents a detailed description of each of the applications and can be used at both graduate and specialist levels.
This book reports on the theoretical foundations, fundamental applications and latest advances in various aspects of connected services for health information systems. The twelve chapters highlight state-of-the-art approaches, methodologies and systems for the design, development, deployment and innovative use of multisensory systems and tools for health management in smart city ecosystems. They exploit technologies like deep learning, artificial intelligence, augmented and virtual reality, cyber physical systems and sensor networks. Presenting the latest developments, identifying remaining challenges, and outlining future research directions for sensing, computing, communications and security aspects of connected health systems, the book will mainly appeal to academic and industrial researchers in the areas of health information systems, smart cities, and augmented reality.
This book introduces Local Binary Patterns (LBP), arguably one of the most powerful texture descriptors, and LBP variants. This volume provides the latest reviews of the literature and a presentation of some of the best LBP variants by researchers at the forefront of texture analysis research and research on LBP descriptors and variants. The value of LBP variants is illustrated with reported experiments using many databases representing a diversity of computer vision applications in medicine, biometrics, and other areas. There is also a chapter that provides an excellent theoretical foundation for texture analysis and LBP in particular. A special section focuses on LBP and LBP variants in the area of face recognition, including thermal face recognition. This book will be of value to anyone already in the field as well as to those interested in learning more about this powerful family of texture descriptors.
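As a brief illustration of the basic descriptor (the standard 3x3 formulation, not code from the book): LBP thresholds each pixel's eight neighbours against the centre value and packs the results into an 8-bit code, yielding one of 256 possible local texture patterns. A minimal NumPy sketch:

```python
import numpy as np

def lbp_3x3(image):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours of
    each interior pixel against the centre value and pack the binary
    results into an 8-bit code."""
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # 8 neighbour offsets, clockwise from the top-left corner;
    # each neighbour contributes one bit of the pattern code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (nb >= center).astype(np.uint8) << np.uint8(bit)
    return out
```

A histogram of these codes over an image region is the texture feature used in classification; the book's variants refine this basic scheme (e.g. rotation-invariant or uniform patterns).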
Closed-circuit television (CCTV) cameras have been deployed increasingly pervasively in public spaces, including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze the content of massive amounts of public-space video data, and has been one of the most active areas of computer vision research over the last two decades. The current focus of video analytics research has largely been on detecting alarm events and abnormal behaviours for public safety and security applications. Increasingly, however, CCTV installations are also being exploited for gathering and analyzing business intelligence information in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be used to collect statistical information about shopping behaviour and preferences for marketing (e.g., how many people entered a shop; how many females/males or which age groups showed interest in a particular product; how long they stayed in the shop; and what the frequent paths were), and to measure operational efficiency for improving the customer experience. Video analytics thus has enormous potential for non-security-oriented commercial applications. This book presents the latest developments in video analytics for business intelligence applications. It provides both academic and commercial practitioners with an understanding of the state of the art and a resource for potential applications and successful practice.
The integration of the third dimension in the production of spatial representations is widely recognized as a valuable approach to comprehending our reality, which is 3D. During the last decade, developments in 3D geoinformation (GI) systems have made substantial progress, and we are approaching a more complete spatial model and understanding of our planet at different scales. Various communities and cities now offer 3D landscape and 3D city models as valuable sources and instruments for the sustainable management of rural and urban resources. Municipal utilities and real estate companies also benefit from recent developments related to 3D applications. In order to present recent developments and to discuss future trends, academics and practitioners met at the 7th International Workshop on 3D Geoinformation. This book comprises a selection of evaluated, high-quality papers that were presented at this workshop in May 2012. The topics focus explicitly on the latest achievements (methods, algorithms, models, systems) with respect to 3D geoinformation requirements. The book is aimed at decision makers and experts, as well as at students interested in the 3D component of geographical information science, including GI engineers, computer scientists, photogrammetrists, land surveyors, urban planners, and mapping specialists.
This book examines paintings using a computational and quantitative approach. Specifically, it compares paintings to photographs, addressing the strengths and limitations of both. Particular aesthetic practices are examined, such as the vista, foreground-to-background organisation and depth planes. These are analysed using a range of computational approaches, and clear observations are made. New generations of image-capture devices, such as Google Goggles and the light-field camera, promise a future in which the formal attributes of a photograph are made available for editing to a degree that has hitherto been the exclusive territory of painting. In this sense paintings and photographs are converging, and it therefore seems an opportune time to study the comparisons between them. In this context, the book includes cutting-edge work examining how some of the aesthetic attributes of a painting can be transferred to a photograph using the latest computational approaches.
In recent years, the paradigm of video coding has shifted from a frame-based approach to a content-based approach, particularly with the finalization of the ISO multimedia coding standard, MPEG-4. MPEG-4 is the emerging standard for the coding of multimedia content. It defines a syntax for a set of content-based functionalities, namely content-based interactivity, compression and universal access. However, it does not specify how the video content is to be generated. To generate the video content, video has to be segmented into video objects, which must then be tracked as they traverse the video frames. This book addresses the difficult problem of video segmentation and the extraction and tracking of video object planes as defined in MPEG-4. It then focuses on the specific issue of face segmentation and coding as applied to videoconferencing, in order to improve the quality of videoconferencing images, especially in the facial region.
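As a hedged illustration of the first stage of such a pipeline (a generic change-detection technique, not the book's specific segmentation algorithm): a common starting point for extracting video object planes is to mark pixels that differ from a background model beyond a threshold, producing a candidate object mask that later stages refine and track:

```python
import numpy as np

def change_mask(frame, background, thresh=15):
    """Per-pixel change detection against a background model: pixels
    whose absolute intensity difference exceeds `thresh` become
    candidate video-object pixels. The threshold value here is an
    arbitrary illustration, to be tuned per sequence."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return diff > thresh
```

In practice, the raw mask is cleaned with morphological operations and connected-component analysis before the resulting regions are tracked frame to frame.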
This book presents studies involving algorithms in the machine learning paradigms. It discusses a variety of learning problems with diverse applications, including prediction, concept learning, explanation-based learning, case-based (exemplar-based) learning, statistical rule-based learning, feature extraction-based learning, optimization-based learning, quantum-inspired learning, multi-criteria-based learning and hybrid intelligence-based learning.
This book presents non-linear image enhancement approaches to mammograms as a robust computer-aided analysis solution for the early detection of breast cancer, and provides a compendium of non-linear mammogram enhancement approaches: from the fundamentals to research challenges, practical implementations, validation, and advances in applications. The book includes a comprehensive discussion of breast cancer, mammography, breast anomalies, and computer-aided analysis of mammograms. It also addresses fundamental concepts of mammogram enhancement and associated challenges, and features a detailed review of various state-of-the-art approaches to the enhancement of mammographic images and emerging research gaps. Given its scope, the book is a valuable asset for radiologists and medical experts (oncologists), as mammogram visualization can enhance the precision of their diagnostic analyses, and for researchers and engineers, as the analysis of non-linear filters is one of the most challenging research domains in image processing.
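One of the simplest non-linear enhancement operators (offered here as a generic illustration of the class of methods, not one of the book's specific approaches) is the power-law, or gamma, mapping, which redistributes intensities non-linearly so that detail in dark regions of a low-contrast image becomes visible:

```python
import numpy as np

def gamma_enhance(img, gamma=0.5):
    """Non-linear (power-law) intensity mapping: normalize to [0, 1],
    raise to the power `gamma`, and rescale. gamma < 1 brightens dark
    regions; gamma > 1 darkens them. Input: uint8 grayscale image."""
    norm = img.astype(np.float64) / 255.0
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)
```

The state-of-the-art approaches the book reviews go well beyond this fixed global mapping, but the example shows the defining property of the family: the output intensity is a non-linear function of the input intensity.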
Vision-based control of wheeled mobile robots is an interesting field of research from a scientific, and even social, point of view due to its potential applicability. This book presents a formal treatment of some aspects of control theory applied to the problem of vision-based pose regulation of wheeled mobile robots, in which the robot has to reach a desired position and orientation specified by a target image. The problem is approached in such a way that vision and control are unified to achieve stability of the closed loop, a large region of convergence without local minima, and good robustness against parametric uncertainty. Three different control schemes that rely on monocular vision as the sole sensor are presented and evaluated experimentally. A common benefit of these approaches is that they are valid for imaging systems that approximately obey a central projection model, e.g., conventional cameras, catadioptric systems and some fisheye cameras; the presented control schemes are therefore generic. A minimum set of visual measurements, integrated into adequate task functions, is taken from a geometric constraint imposed between corresponding image features. In particular, the epipolar geometry and the trifocal tensor are exploited, since they can be used for generic scenes. A detailed experimental evaluation is presented for each control scheme.
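As a sketch of the kind of measurement such task functions build on (illustrative only, not the book's controllers): the epipolar constraint states that corresponding homogeneous image points x1, x2 in two views satisfy x2^T F x1 = 0, so the algebraic residual of this constraint can serve as a visual error signal driven to zero by the controller:

```python
import numpy as np

def epipolar_residual(F, pts1, pts2):
    """Algebraic epipolar residual x2^T F x1 for N point matches.
    pts1, pts2: (N, 2) arrays of pixel coordinates in the two views;
    F: 3x3 fundamental (or essential) matrix. A zero residual means
    the match satisfies the epipolar constraint."""
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])  # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    # Per-match bilinear form x2_i^T F x1_i
    return np.einsum('ij,jk,ik->i', x2, F, x1)
```

For instance, under a pure sideways camera translation t = (1, 0, 0), the essential matrix is the skew-symmetric matrix [t]_x, and matches that shift only horizontally between the views give a zero residual.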
A 3D user interface (3DUI) is an interface in which the user performs tasks in three dimensions, for example through hand/body gestures, a motion controller (e.g. the Sony PlayStation Move), or virtual reality devices with tracked motion controllers. All technologies that let a user interact in three dimensions are called 3D user interface technologies. These interfaces have the potential to make games more immersive and engaging, and thus potentially provide a better user experience to gamers. Although 3D user interface technologies are available for games, it is unclear how their usage affects gameplay and whether there are any user performance benefits. This book presents state-of-the-art research on exploring 3D user interface technologies for improving video games. It also reviews research work done in this area and describes experiments focused on the use of stereoscopic 3D, head tracking, and hand gesture-based control in gaming scenarios. These experiments are systematic studies in gaming environments, aimed at understanding the effect of the underlying 3D interface technology on a user's gaming experience. Based on these experiments, several design guidelines are presented which can aid game designers in creating more immersive games.