The arrival of the digital age has created the need to store, manage, and digitally use an ever-increasing amount of video and audio material, and video cataloguing has thus emerged as a pressing need. Video Cataloguing: Structure Parsing and Content Extraction explains how to efficiently perform video structure analysis and extract the basic semantic contents for video summarization, which is essential for handling large-scale video data. This book addresses the issues of video cataloguing, including video structure parsing and basic semantic word extraction, particularly for movie and teleplay videos. It starts by providing readers with a fundamental understanding of video structure parsing. It examines video shot boundary detection, recent research on video scene detection, and basic ideas for semantic word extraction, including video text recognition, scene recognition, and character identification. The book lists and introduces some of the most commonly used features in video analysis. It introduces and analyzes the most popular shot boundary detection methods and also presents recent research on movie scene detection as another critical step for video cataloguing, video indexing, and retrieval. The authors propose a robust movie scene recognition approach based on a panoramic frame and representative feature patch. They describe how to recognize characters in movies and TV series accurately and efficiently, and how to use these character names as cataloguing items for an intelligent catalogue. The book proposes an interesting application of highlight extraction in basketball videos and concludes by demonstrating how to design and implement a prototype system of automatic movie and teleplay cataloguing (AMTC) based on the approaches introduced in the book.
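The shot boundary detection the book surveys can be illustrated with a classic baseline: compare color histograms of consecutive frames and declare a cut where similarity drops sharply. Below is a minimal OpenCV sketch; the histogram binning and threshold are illustrative assumptions, not the authors' method.

```python
# Hypothetical cut detector: histogram differencing, a classic
# shot-boundary-detection baseline (not the book's own algorithm).
import cv2

def detect_cuts(video_path, threshold=0.5):
    """Return frame indices where HSV-histogram correlation drops below threshold."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Correlation near 1 means similar frames; a sharp drop suggests a cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```

Gradual transitions (fades, dissolves) need more than a single-frame comparison, which is one reason the book compares several detection methods.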
No matter how advanced the technology, there is always the human factor involved: the power behind the technology. Interpreting Remote Sensing Imagery: Human Factors draws together leading psychologists, remote sensing scientists, and government and industry scientists to consider the factors involved in expertise and perceptual skill. This book covers the cognitive issues of learning, perception, and expertise; the applied issues of display design, interface design, software design, and mental workload; and the practitioner's issues of workstation design, human performance, and training. It tackles the intangibles of data interpretation, based on information from experts who do the job. You will learn about information and perception (what do experts perceive in remote sensing and cartographic displays?), reasoning and perception (how do experts "see through" the data display to understand its meaning and significance?), human-computer interaction (how do experts work with their displays, and what happens when they "fiddle" with them?), and learning and training (what are the milestones in training development from novice to expert image interpreter?). Interpreting Remote Sensing Imagery: Human Factors breaks down the mystery of what experts do when they interpret data, how they learn, and what individual factors speed or impede training. Even more importantly, it gives you the tools to train efficiently and understand how the human factor impacts data interpretation.
Regularization has become an integral part of the reconstruction process in accelerated parallel magnetic resonance imaging (pMRI) due to the need to exploit the most discriminative information, in the form of parsimonious models, to generate high-quality images with reduced noise and artifacts. Apart from providing a detailed overview and implementation details of various pMRI reconstruction methods, Regularized Image Reconstruction in Parallel MRI with MATLAB Examples interprets regularized image reconstruction in pMRI as a means to effectively control the balance between two specific types of error signals, to either improve the accuracy in estimation of missing samples or speed up the estimation process. The first type corresponds to the modeling error between acquired samples and their estimated values. The second type arises due to the perturbation of k-space values in autocalibration methods or sparse approximation in the compressed sensing based reconstruction model. Features: Provides details for optimizing regularization parameters in each type of reconstruction. Presents comparisons of regularization approaches for each type of pMRI reconstruction. Includes discussion of case studies using clinically acquired data. Provides MATLAB codes for each reconstruction type. Contains method-wise descriptions of adapting regularization to optimize speed and accuracy. This book serves as a reference for researchers and students involved in the development of pMRI reconstruction methods. Industry practitioners concerned with how to apply regularization in pMRI reconstruction will find this book most useful.
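The error balance described here is the familiar trade-off of regularized least squares. As a generic illustration only (the book's examples are in MATLAB, and real pMRI uses FFT-based encoding operators rather than a dense matrix), here is a tiny Tikhonov-regularized reconstruction sketch in Python:

```python
# Generic Tikhonov-regularized least squares:
#   min_x ||A x - b||^2 + lam * ||x||^2
# A toy dense sketch of the regularization trade-off, not a pMRI solver.
import numpy as np

def tikhonov(A, b, lam):
    n = A.shape[1]
    # Normal equations: (A^H A + lam I) x = A^H b
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))
x_true = rng.standard_normal(32)
b = A @ x_true + 0.1 * rng.standard_normal(64)   # noisy "measurements"
for lam in (0.0, 1.0, 10.0):
    x = tikhonov(A, b, lam)
    # Larger lam suppresses noise amplification at the cost of some bias.
    print(lam, np.linalg.norm(x - x_true))
```

Choosing lam is exactly the kind of regularization-parameter optimization the book addresses for each reconstruction type.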
The book consists of 29 extended chapters selected and invited from the submissions to the 1st International Conference on Computer Science, Applied Mathematics and Applications (ICCSAMA 2013), held on 9-10 May 2013 in Warsaw, Poland. The book is organized into five parts: Advanced Optimization Methods and Their Applications, Queuing Theory and Applications, Computational Methods for Knowledge Engineering, Knowledge Engineering with Cloud and Grid Computing, and Logic Based Methods for Decision Making and Data Mining. All chapters discuss theoretical and practical issues connected with computational methods and optimization methods for knowledge engineering.
Image Processing for Cinema presents a detailed overview of image processing techniques that are used in practice in digital cinema. The book shows how image processing has become ubiquitous in movie-making, from shooting to exhibition. It covers all the ways in which image processing algorithms are used to enhance, restore, adapt, and convert moving images. These techniques and algorithms make the images look as good as possible while exploiting the capabilities of cameras, projectors, and displays. The author focuses on the ideas behind the methods, rather than proofs and derivations. The first part of the text presents fundamentals on optics and color. The second part explains how cameras work and details all the image processing algorithms that are applied in-camera. With an emphasis on state-of-the-art methods that are actually used in practice, the last part describes image processing algorithms that are applied offline to solve a variety of problems. The book is designed for advanced undergraduate and graduate students in applied mathematics, image processing, computer science, and related fields. It is also suitable for academic researchers and professionals in the movie industry.
A versatile framework for handling subdivided geometric objects, Combinatorial Maps: Efficient Data Structures for Computer Graphics and Image Processing gathers important ideas related to combinatorial maps and explains how the maps are applied in geometric modeling and image processing. It focuses on two subclasses of combinatorial maps: n-Gmaps and n-maps. Suitable for researchers and graduate students in geometric modeling, computational and discrete geometry, computer graphics, and image processing and analysis, the book presents the data structures, operations, and algorithms that are useful in handling subdivided geometric objects. It presents data structures for the explicit representation of subdivided geometric objects, describes operations for handling those structures, and illustrates the results of their design.
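To give a flavor of the data structures involved, here is a deliberately minimal 2-map sketch (a hypothetical toy, far simpler than the book's n-map machinery): darts carry two functions, beta1 linking each dart to the next around its face and beta2 pairing the two darts of a shared edge.

```python
# A toy 2-map: darts plus beta1 (next dart around the oriented face) and
# beta2 (opposite dart on the shared edge). Orbits of these maps recover
# faces, edges, and vertices of the subdivision.
class TwoMap:
    def __init__(self):
        self.beta1 = {}  # dart -> next dart in the same oriented face
        self.beta2 = {}  # dart -> opposite dart (beta2 is an involution)

    def add_face(self, darts):
        for a, b in zip(darts, darts[1:] + darts[:1]):
            self.beta1[a] = b

    def sew2(self, d1, d2):
        self.beta2[d1], self.beta2[d2] = d2, d1

    def face_of(self, dart):
        face, d = [dart], self.beta1[dart]
        while d != dart:
            face.append(d)
            d = self.beta1[d]
        return face

# Two triangles glued along one edge:
m = TwoMap()
m.add_face([1, 2, 3])
m.add_face([4, 5, 6])
m.sew2(3, 4)           # darts 3 and 4 represent the shared edge
print(m.face_of(1))    # [1, 2, 3]
```

The appeal of this representation is that topological queries and editing operations reduce to manipulating these dart functions.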
Image and video signals require large transmission bandwidth and storage, leading to high costs. The data must be compressed without a loss or with a small loss of quality. Thus, efficient image and video compression algorithms play a significant role in the storage and transmission of data. Image and Video Compression: Fundamentals, Techniques, and Applications explains the major techniques for image and video compression and demonstrates their practical implementation using MATLAB® programs. Designed for students, researchers, and practicing engineers, the book presents both basic principles and real practical applications. In an accessible way, the book covers basic schemes for image and video compression, including lossless techniques and wavelet- and vector quantization-based image compression and digital video compression. The MATLAB programs enable readers to gain hands-on experience with the techniques. The authors provide quality metrics used to evaluate the performance of the compression algorithms. They also introduce the modern technique of compressed sensing, which retains the most important part of the signal while it is being sensed.
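One of the standard quality metrics used to score compression results is PSNR. The book's programs are in MATLAB; purely as an illustration, here is an equivalent Python sketch:

```python
# Peak signal-to-noise ratio (PSNR): a standard metric for comparing a
# reconstructed (decompressed) image against the original.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Return PSNR in dB, assuming 8-bit images by default (peak = 255)."""
    mse = np.mean((original.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Usage: higher PSNR means the compressed image is closer to the original.
a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy(); b[0, 0] = 110
print(psnr(a, b))
```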
This book emphasizes the image shape feature extraction methods that are necessary for image shape recognition and classification. Focusing on shape feature extraction techniques used in content-based image retrieval (CBIR), it explains different applications of image shape features in the field of content-based image retrieval. Showcasing useful applications and illustrative examples in many interdisciplinary fields, the book is aimed at researchers and graduate students in electrical engineering, data science, computer science, medicine, and machine learning, including medical physics and information technology.
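As one concrete example of a classic shape descriptor family used in CBIR-style matching (the book covers many others), here is a hedged Hu-moments sketch with OpenCV; the Otsu thresholding step is an assumption about how the input shape is binarized.

```python
# Hu moments: seven translation/scale/rotation-invariant shape features,
# a classic descriptor for shape-based retrieval.
import cv2
import numpy as np

def hu_features(gray_image):
    """Compute log-scaled Hu moments of a shape in an 8-bit grayscale image."""
    # Binarize the shape (Otsu threshold is an illustrative choice).
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    # Log-scale, since the raw moments span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```

Retrieval then reduces to comparing feature vectors, for example by Euclidean distance between log-scaled moments.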
This book is a complete introduction to vector analysis, especially within the context of computer graphics. The author shows why vectors are useful and how it is possible to develop analytical skills in manipulating vector algebra. Even though vector analysis is a relatively recent development in the history of mathematics, it has become a powerful and central tool in describing and solving a wide range of geometric problems. The book is divided into eleven chapters covering the mathematical foundations of vector algebra and its application to, among others, lines, planes, intersections, rotating vectors, and vector differentiation.
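A couple of the vector operations such a text builds on, sketched with NumPy purely for illustration: dot products for angles and cross products for plane normals.

```python
# Basic vector algebra for graphics: angle between vectors via the dot
# product, and the unit normal of a plane via the cross product.
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
angle = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))  # 90.0

normal = np.cross(a, b)           # normal of the plane spanned by a and b
normal /= np.linalg.norm(normal)  # unit normal: [0, 0, 1]
print(angle, normal)
```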
Digital signal processing (DSP) covers a wide range of applications, such as signal acquisition, analysis, transmission, storage, and synthesis. Special attention is needed for the VLSI (very large scale integration) implementation of high-performance DSP systems, with examples from video and radar applications. This book provides basic architectures for VLSI implementations of DSP tasks, covering architectures for application-specific circuits and programmable DSP circuits. It fills an important gap in the literature by focusing on the transition from algorithm specification to architectures for VLSI implementations.
This book presents a systematic approach to the implementation of Internet of Things (IoT) devices achieving visual inference through deep neural networks. Practical aspects are covered, with a focus on providing guidelines to optimally select hardware and software components as well as network architectures according to prescribed application requirements. The monograph includes a remarkable set of experimental results and functional procedures supporting the theoretical concepts and methodologies introduced. A case study on animal recognition based on smart camera traps is also presented and thoroughly analyzed. In this case study, different system alternatives are explored and a particular realization is completely developed. Illustrations, numerous plots from simulations and experiments, and supporting information in the form of charts and tables make Visual Inference and IoT Systems: A Practical Approach a clear and detailed guide to the topic. It will be of interest to researchers, industrial practitioners, and graduate students in the fields of computer vision and IoT.
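Purely as an illustration of the kind of visual-inference stack such a system might use (not the book's actual pipeline), here is a minimal PyTorch sketch running MobileNetV2, a common choice for resource-constrained devices; it assumes a recent torchvision, and the input filename is hypothetical.

```python
# Edge-style visual inference sketch: pretrained MobileNetV2 classifying
# a single image. Hardware/network selection, the book's real subject,
# is outside the scope of this toy example.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("camera_trap_frame.jpg")  # hypothetical input image
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
print(logits.argmax(dim=1))  # predicted ImageNet class index
```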
With 300 figures, tables, and equations, this book presents a unified approach to image quality research and modeling. The author discusses how the results of different calibrated psychometric experiments can be rigorously integrated to construct predictive software using Monte Carlo simulations, and provides numerous examples of viable field applications for product design and verification of modeling predictions. He covers perceptual measurements for the assessment of individual quality attributes and overall quality, explores variation in scene susceptibility, observer sensitivity, and preference, and includes methods of analysis for testing and refining metrics based on psychometric data.
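To make the Monte Carlo idea concrete, here is a toy sketch only: per-attribute quality losses are sampled across simulated scenes and observers and combined into an overall score. The distributions and the combination rule below are illustrative assumptions, not the author's calibrated model.

```python
# Toy Monte Carlo: propagate variation in scene susceptibility and
# observer sensitivity through an assumed quality-combination rule.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000
# Hypothetical per-attribute quality losses (arbitrary units), varying
# from trial to trial with scene and observer:
sharpness_loss = rng.normal(2.0, 0.5, n_trials).clip(min=0)
noise_loss = rng.normal(1.0, 0.4, n_trials).clip(min=0)
# Assumed combination rule (Minkowski-style, exponent 2):
overall_loss = (sharpness_loss**2 + noise_loss**2) ** 0.5
print(overall_loss.mean(), np.percentile(overall_loss, 90))
```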
This book is a testimony to Evgeny Nikolaevich Sokolov's years of work in developing knowledge in the areas of perception, information processing, and attention, and to the research it has spawned. It presents a historical account of a research program, leading the reader toward a cognitive science approach to the study of perception and attention. A background in neuroscience and mathematical modeling is a helpful prerequisite. The co-authors collected data on orienting, attention, and information processing in the brain using single-cell recordings and central, autonomic, cognitive, behavioral, and verbal measures. This commonality brought them together for a series of meetings that resulted in the production of this book. The book ends with a review of some of the co-authors' studies that have developed from, or in parallel with, Sokolov's research. They investigate, in particular, the concepts of attention and anticipation using a psychophysiological methodology.
For many centuries, people have tried to learn about their health. Initially, during the pre-technological period, they could rely only on their senses; later, simple tools arrived to extend the senses. The breakthrough was the discovery of X-rays, which gave insight into the human body. Contemporary medical diagnostics is increasingly supported by information technology, which offers, for example, very thorough analysis of tissue images or differentiation of pathologies, as well as possibilities for very early preventive diagnosis. Under the influence of information technology, 'traditional' diagnostic techniques are changing and new ones are emerging. More and more often the same methods can be used for both medical and technical diagnostics, and methodologies are being developed that are inspired by the functioning of living organisms. Information Technology in Medical Diagnostics II is the second volume in a series showing the latest advances in information technologies directly or indirectly applied to medical diagnostics. Unlike the previous book, this volume does not contain closed chapters, but rather extended versions of presentations made during two conferences: the XLVIII International Scientific and Practical Conference 'Application of Lasers in Medicine and Biology' (Kharkov, Ukraine) and the International Scientific Internet Conference 'Computer Graphics and Image Processing' (Vinnitsa, Ukraine), both held in May 2018. Information Technology in Medical Diagnostics II links technological issues to medical and biological issues, and will be valuable to academics and professionals interested in medical diagnostics and IT.
Convergence in Broadcast and Communications Media offers concise and accurate information for engineers and technicians tackling products and systems that combine audio, video, data processing, and communications. Without adequate fundamental knowledge of the core technologies, products could be flawed or even fail. John Watkinson has provided a definitive professional guide, designed as a standard point of reference for engineers, whether they come from an audio, video, computer, or communications background.
The main thrust is to provide students with a solid understanding of a number of important and related advanced topics in digital signal processing, such as Wiener filters, power spectrum estimation, signal modeling, and adaptive filtering. Scores of worked examples illustrate fine points, compare techniques and algorithms, and facilitate comprehension of fundamental concepts. The book also features an abundance of interesting and challenging problems at the end of every chapter.
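Adaptive filtering, one of the topics listed, is often introduced through the classic LMS algorithm. A minimal NumPy sketch for system identification follows; the step size and filter length are illustrative assumptions.

```python
# Least-mean-squares (LMS) adaptive filter: identify an unknown FIR
# system h from its input x and output d by gradient-descent updates.
import numpy as np

def lms(x, d, num_taps=4, mu=0.01):
    """Adapt FIR weights w so that w . [x[n], ..., x[n-taps+1]] tracks d[n]."""
    w = np.zeros(num_taps)
    err = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]  # x[n], x[n-1], ..., x[n-taps+1]
        err[n] = d[n] - w @ xn                # a priori error
        w += 2 * mu * err[n] * xn             # stochastic-gradient update
    return w, err

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])           # unknown system to identify
d = np.convolve(x, h)[:len(x)]
w, err = lms(x, d)
print(w)  # should approach h
```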
Image Analysis, Classification and Change Detection in Remote Sensing: With Algorithms for Python, Fourth Edition focuses on the development and implementation of statistically motivated, data-driven techniques for digital image analysis of remotely sensed imagery, and it features a tight interweaving of statistical and machine learning theory with computer code. It develops statistical methods for the analysis of optical/infrared and synthetic aperture radar (SAR) imagery, including wavelet transformations and kernel methods for nonlinear classification, as well as an introduction to deep learning in the context of feedforward neural networks. New in the Fourth Edition: an in-depth treatment of a recent sequential change detection algorithm for polarimetric SAR image time series; accompanying software consisting of Python (open source) versions of all of the main image analysis algorithms; easy, platform-independent software installation methods (Docker containerization); freely accessible imagery via the Google Earth Engine, with many examples of cloud programming (Google Earth Engine API); and deep learning examples including TensorFlow and a sound introduction to neural networks. Based on the success and reputation of the previous editions, and compared to other textbooks in the market, Professor Canty's fourth edition differs in the depth and sophistication of the material treated as well as in its consistent use of computer code to illustrate the methods and algorithms discussed. It is self-contained and illustrated with many programming examples, all of which can be conveniently run in a web browser. Each chapter concludes with exercises complementing or extending the material in the text.
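For a sense of what change detection means in code, here is a deliberately simple baseline (image differencing with a z-score threshold); the book develops far more rigorous statistical tests, including the sequential methods for SAR time series mentioned above.

```python
# Naive change detection: flag pixels whose bitemporal difference is an
# outlier under a Gaussian no-change model. Illustrative baseline only.
import numpy as np

def change_mask(img_t1, img_t2, z=3.0):
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    return np.abs(diff - diff.mean()) > z * diff.std()

rng = np.random.default_rng(7)
before = rng.normal(100, 5, (64, 64))          # synthetic "time 1" image
after = before + rng.normal(0, 5, (64, 64))    # noise-only differences
after[20:30, 20:30] += 60                      # a genuine change patch
print(change_mask(before, after).sum())        # mostly the changed pixels
```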
This monograph is devoted to the description of the physical fundamentals of laser refractography, a novel informational-measuring technique for the diagnostics of optically inhomogeneous media and flows, based on the idea of using spatially structured probe laser radiation in combination with its digital recording and computer techniques for the differential processing of refraction patterns. Considered are the physical fundamentals of this technique, actual optical schemes, methods of processing refraction patterns, and possible applications. This informational technique can be employed in areas of science and technology that require remote nonperturbative monitoring of optical, thermophysical, chemical, aerohydrodynamic, and manufacturing processes. The monograph can also be recommended for students and postgraduates of informational, laser, electro-optical, thermophysical, chemical, and other specialties.
"This text covers key mathematical principles and algorithms for
nonlinear filters used in image processing. Readers will gain an
in-depth understanding of the underlying mathematical and filter
design methodologies needed to construct and use nonlinear filters
in a variety of applications.
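The textbook example of a nonlinear filter is the median filter, which removes impulse ("salt-and-pepper") noise while preserving edges better than linear smoothing. A minimal sketch using SciPy rather than a hand-rolled kernel:

```python
# Median filtering: a nonlinear filter that replaces each pixel with the
# median of its neighborhood, suppressing impulse noise without blurring
# edges the way a linear mean filter would.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(3)
image = np.full((64, 64), 128.0)
image[:, 32:] = 200.0                    # a step edge
salt = rng.random(image.shape) < 0.05
image[salt] = 255.0                      # 5% impulse noise

denoised = median_filter(image, size=3)  # 3x3 median window
# Nearly all impulses are removed, while the step edge survives:
print(salt.sum(), (denoised == 255.0).sum())
```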
This book describes algorithms and hardware implementations of computer holography, especially in terms of fast calculation. It summarizes the basics of holography and computer holography and describes how conventional diffraction calculations play a central role; numerical implementations with actual code are also discussed. The book explains new fast diffraction calculations, such as scaled scalar diffraction, as well as acceleration algorithms for computer-generated hologram (CGH) generation and digital holography with 3D objects composed of point clouds, using look-up-table (LUT) based algorithms and a wavefront recording plane. 3D objects composed of polygons using tilted plane diffraction, expressed by multi-view images and RGB-D images, are also covered. Digital holography, including inline, off-axis, Gabor digital holography, and phase-shift digital holography, is explored as well. The book introduces applications of computer holography, including phase retrieval algorithms, holographic memory, holographic projection, and deep learning in computer holography, while explaining hardware implementations for computer holography. Recently, several parallel processors have been released (for example, multi-core CPUs, GPUs, Xeon Phi, and FPGAs), and readers will learn how to apply the algorithms to these processors. Features: Provides an introduction to the basics of holography and computer holography. Summarizes the latest advancements in computer-generated holograms. Showcases the latest research in digital holography. Discusses fast CGH algorithms and diffraction calculations, and their actual codes. Includes hardware implementations for computer holography, with actual codes and quasi-codes.
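The scalar diffraction calculations at the core of CGH generation can be sketched with the standard angular spectrum method; the grid size, pixel pitch, and wavelength below are illustrative, and this is a generic propagator, not the book's accelerated algorithms.

```python
# Angular spectrum method: propagate a complex optical field a distance z
# in free space by filtering its 2D FFT with the free-space transfer
# function H(fx, fy) = exp(i 2 pi z sqrt(1/lambda^2 - fx^2 - fy^2)).
import numpy as np

def angular_spectrum(u0, wavelength, pitch, z):
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)          # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0                         # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * H)

u0 = np.zeros((512, 512), dtype=complex)
u0[236:276, 236:276] = 1.0                   # a square aperture
u1 = angular_spectrum(u0, wavelength=633e-9, pitch=10e-6, z=0.05)
intensity = np.abs(u1) ** 2                  # diffraction pattern at z = 5 cm
```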
Image algebra is a comprehensive, unifying theory of image transformations, image analysis, and image understanding. In 1996, the bestselling first edition of the Handbook of Computer Vision Algorithms in Image Algebra introduced engineers, scientists, and students to this powerful tool, its basic concepts, and its use in the concise representation of computer vision algorithms.
Sixty years after its birth, synthetic aperture radar (SAR) has evolved into a key player in Earth observation, and it is continually upgraded with enhanced hardware functionality and improved overall performance in response to user requirements. The basic information gained by SAR includes the backscattering coefficient of targets, their phases (the truncated distance between the SAR and its targets), and their polarization dependence. The spatiotemporal combination of the multiple data operated on the satellite or aircraft significantly increases its sensitivity for detecting changes on Earth, including temporal variations of the planet in amplitude and interferometric changes for monitoring disasters; deformations caused by earthquakes, volcanic activity, and landslides; environmental changes; ship detection; and so on. Earth-orbiting satellites with the appropriate sensors can detect environmental changes because of their large spatial coverage and availability. Imaging from Spaceborne and Airborne SARs, Calibration, and Applications provides A-to-Z information on SAR research across 15 chapters that focus on the JAXA L-band SAR, including hardware description, principles of SAR imaging, theoretical description of SAR imaging and error, ScanSAR imaging, polarimetric calibration, inflight antenna pattern, SAR geometry and orthorectification, SAR calibration, defocusing for moving targets, large-scale SAR imaging and mosaics, interferometric SAR processing, irregularities, applications, and forest estimation. Sample data are created using L-band SAR, JERS-1, PALSAR, PALSAR-2, and Pi-SAR-L2. The book is based on the author's experience as a principal researcher at JAXA with responsibilities for L-band SAR operation and research. It reveals the inside of SAR processing and applied research performed at JAXA, which makes it a valuable reference for a wide range of SAR researchers, professionals, and students.
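One elementary step in the SAR processing chain described here is multilooking, which averages single-look intensities to reduce speckle before converting to decibels. A hedged sketch (the look count and simulated statistics are illustrative, not taken from the book):

```python
# Multilooking: boxcar-average single-look SAR intensity to reduce
# speckle variance, then convert backscatter to decibels.
import numpy as np
from scipy.ndimage import uniform_filter

def multilook(intensity, looks=4):
    """Average intensity over a looks x looks window."""
    return uniform_filter(intensity, size=looks)

rng = np.random.default_rng(5)
# Fully developed speckle: single-look intensity is exponentially
# distributed around the true backscatter (unit mean here).
single_look = rng.exponential(1.0, (256, 256))
ml = multilook(single_look)
print(single_look.std(), ml.std())  # std drops roughly with the pixels averaged
db = 10.0 * np.log10(np.maximum(ml, 1e-12))  # backscatter in dB
```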
Nowadays, intelligent techniques are used more and more in computer graphics to optimise processing time, to find more accurate solutions to many computer graphics problems than traditional methods can, or simply to find solutions to problems where traditional methods fail. The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community growing year after year. This volume is a continuation of the previously published Springer volumes "Artificial Intelligence Techniques for Computer Graphics" (2008) and "Intelligent Computer Graphics 2009" (2009). It contains selected extended papers from the last 3IA Conference (3IA'2010), held in Athens (Greece) in May 2010. This year's papers are particularly exciting and concern areas like rendering, viewpoint quality, data visualisation, vision, computational aesthetics, scene understanding, intelligent lighting, declarative modelling, GIS, scene reconstruction, and other important themes.
Highlights the Emergence of Image Processing in Food and Agriculture. In addition to uses specifically related to health and other industries, biological imaging is now being used for a variety of applications in food and agriculture. Bio-Imaging: Principles, Techniques, and Applications fully details and outlines the processes of bio-imaging applicable to food and agriculture and connects them to other bio-industries and related topics. Due to the noncontact and nondestructive nature of the technology, biological imaging uses unaltered samples and allows for internal quality evaluation and the detection of defects. Compared to conventional methods, biological imaging produces results that are more consistent and reliable, and it can ensure quality monitoring for a variety of practices used in the food and agriculture industries as well as many other biological industries. The book highlights every available imaging technique along with its components, image acquisition procedures, advantages, and comparisons to other approaches. It describes the essential components of each imaging technique in detail, incorporates case studies in appropriate chapters, and contains a wide range of applications from a number of biological fields. Bio-Imaging: Principles, Techniques, and Applications focuses on imaging techniques for biological materials and the application of biological imaging. This technology, which is quickly becoming standard practice in agriculture and food-related industries, can aid in enhanced process efficiency, quality assurance, and food safety management overall.
Corpus Annotation gives an up-to-date picture of this fascinating new area of research, and will provide essential reading for newcomers to the field as well as those already involved in corpus annotation. Early chapters introduce the different levels and techniques of corpus annotation. Later chapters deal with software developments, applications, and the development of standards for the evaluation of corpus annotation. While the book takes detailed account of research world-wide, its focus is particularly on the work of the UCREL (University Centre for Computer Corpus Research on Language) team at Lancaster University, which has been at the forefront of developments in the field of corpus annotation since its beginnings in the 1970s.
You may like...
2nd EAI International Conference on Big… by Anandakumar Haldorai, Arulmurugan Ramu, … (Hardcover) R5,966 (Discovery Miles 59 660)
Fundamentals of Resource Allocation in… by Slawomir Stanczak, Marcin Wiczanowski, … (Hardcover) R3,297 (Discovery Miles 32 970)
Toward Robotic Socially Believable… by Anna Esposito, Lakhmi C. Jain (Hardcover)
Advanced Research and Trends in New… by Francisco Vicente Cipolla-Ficarra (Hardcover) R7,090 (Discovery Miles 70 900)
Analytical Methods in Petroleum Upstream… by Cesar Ovalles, Carl E. Rechsteiner (Hardcover) R3,450 (Discovery Miles 34 500)
Natural Computing for Unsupervised… by Xiangtao Li, Ka-Chun Wong (Hardcover) R3,048 (Discovery Miles 30 480)