This book constitutes the refereed post-conference proceedings of the 10th IFIP WG 5.14 International Conference on Computer and Computing Technologies in Agriculture, CCTA 2016, held in Dongying, China, in October 2016. The 55 revised papers presented were carefully reviewed and selected from 128 submissions. They cover a wide range of interesting theories and applications of information technology in agriculture, including intelligent sensing, cloud computing, key technologies of the Internet of Things, and precision agriculture; animal husbandry information technology, including Internet + modern animal husbandry, livestock big data platforms and cloud computing applications, intelligent breeding equipment, and precision production models; and aquatic product networking and big data, including fishery IoT, intelligent aquaculture facilities, and big data applications.
This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the compression efficiency and computational complexity of the HEVC encoding tools. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that employs the best complexity reduction and scaling methods presented throughout the book. The methods presented in this book are especially useful in power-constrained, portable multimedia devices to reduce energy consumption and to extend battery life. They can also be applied to portable and non-portable multimedia devices operating in real time with limited computational resources.
This book illustrates how to use description logic-based formalisms to their full potential in the creation, indexing, and reuse of multimedia semantics. To do so, it introduces researchers to multimedia semantics by providing an in-depth review of state-of-the-art standards, technologies, ontologies, and software tools. It draws attention to the importance of formal grounding in the knowledge representation of multimedia objects and the potential of multimedia reasoning in intelligent multimedia applications, and presents both theoretical discussions and best practices in multimedia ontology engineering. Readers already familiar with mathematical logic, the Internet, and multimedia fundamentals will learn to develop formally grounded multimedia ontologies and map concept definitions to high-level descriptors. The core reasoning tasks, reasoning algorithms, and industry-leading reasoners are presented, while scene interpretation via reasoning is also demonstrated. Overall, this book offers readers an essential introduction to the formal grounding of web ontologies, as well as a comprehensive collection and review of description logics (DLs) from the perspectives of expressivity and reasoning complexity. It covers best practices for developing multimedia ontologies with formal grounding to guarantee decidability and obtain the desired level of expressivity while maximizing the reasoning potential. The capabilities of such multimedia ontologies are demonstrated by DL implementations with an emphasis on multimedia reasoning applications.
This book contains papers presented at the 2014 MICCAI Workshop on Computational Diffusion MRI, CDMRI'14. Detailing new computational methods applied to diffusion magnetic resonance imaging data, it offers readers a snapshot of the current state of the art and covers a wide range of topics from fundamental theoretical work on mathematical modeling to the development and evaluation of robust algorithms and applications in neuroscientific studies and clinical practice. Inside, readers will find information on brain network analysis, mathematical modeling for clinical applications, tissue microstructure imaging, super-resolution methods, signal reconstruction, visualization, and more. Contributions include both careful mathematical derivations and a large number of rich full-color visualizations. Computational techniques are key to the continued success and development of diffusion MRI and to its widespread transfer into the clinic. This volume will offer a valuable starting point for anyone interested in learning computational diffusion MRI. It also offers new perspectives and insights on current research challenges for those currently in the field. The book will be of interest to researchers and practitioners in computer science, MR physics, and applied mathematics.
"Digital Preservation Technology for Cultural Heritage" discusses the technology and processes in digital preservation of cultural heritage. It covers topics in five major areas: digitization of cultural heritage; digital management in cultural heritage preservation; restoration techniques for rigid solid relics; restoration techniques for paintings; and the digital museum. It also includes application examples for the digital preservation of cultural heritage. The book is intended for researchers and advanced undergraduate and graduate students in computer graphics and image processing, as well as in cultural heritage preservation. Mingquan Zhou is a professor at the College of Information Science and Technology, Beijing Normal University, China. Guohua Geng is a professor at the College of Information Science and Technology, Northwest University, Xi'an, China. Zhongke Wu is a professor at the College of Information Science and Technology, Beijing Normal University, China.
With an emphasis on applications of computational models for solving modern challenging problems in the biomedical and life sciences, this book brings together articles from biologists, medical/biomedical and health science researchers, and computational scientists to focus on problems at the frontier of the biomedical and life sciences. Its goals are to build interactions among scientists across several disciplines and to help industrial users apply advanced computational techniques to practical biomedical and life science problems. This book is for readers in the biomedical and life sciences who wish to keep abreast of the latest techniques in signal and image analysis. It presents a detailed description of each of the applications and can be used at both graduate and specialist levels.
Blind deconvolution is a classical image processing problem that has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration; rather, the basic issue of deconvolvability is explored from a theoretical viewpoint. Some authors claim very good results, while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they are not. To avoid the assumptions needed for convergence analysis in the Fourier domain, the authors use a general method of convergence analysis for alternate minimization based on the three-point and four-point properties of points in the image space. They prove that all points in the image space satisfy the three-point property and derive the conditions under which the four-point property is satisfied. This yields the conditions under which alternate minimization for blind deconvolution converges with a quadratic prior. Since the convergence properties depend on the chosen priors, one should design priors that avoid trivial solutions. Hence, a sparsity-based solution to blind deconvolution is also provided, using image priors whose cost increases with the amount of blur, which is another way to prevent trivial solutions in joint estimation. This book will be a highly useful resource for researchers and academics working in the specific area of blind deconvolution.
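As a minimal illustration of the idea (not the authors' algorithm), alternate minimization with quadratic priors reduces to two closed-form Fourier-domain updates: one for the image with the blur fixed, and the symmetric one for the blur with the image fixed. The function name, regularization weights, and initialization below are hypothetical choices made only for this sketch:

```python
import numpy as np

def am_blind_deconv(y, kernel_size, iters=20, lam_x=1e-2, lam_k=1e-2):
    """Alternate minimization for blind deconvolution with quadratic priors.

    Alternately minimizes ||h * x - y||^2 + lam_x ||x||^2 over the image x
    (blur h fixed), then ||h * x - y||^2 + lam_k ||h||^2 over h (x fixed).
    Each subproblem is quadratic, so it has a closed form per frequency.
    Without a blur-sensitive prior the iteration can drift toward the
    trivial solution (h = delta, x = y), the failure mode a good prior
    must rule out.
    """
    Y = np.fft.fft2(y)
    # Initialize the blur as a small uniform kernel, zero-padded to frame size.
    k = np.zeros_like(y, dtype=float)
    k[:kernel_size[0], :kernel_size[1]] = 1.0 / (kernel_size[0] * kernel_size[1])
    for _ in range(iters):
        K = np.fft.fft2(k)
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam_x)   # image update
        x = np.real(np.fft.ifft2(X))
        K = np.conj(X) * Y / (np.abs(X) ** 2 + lam_k)   # blur update
        k = np.real(np.fft.ifft2(K))
    return x, k
```

The quadratic priors keep both subproblems convex, so the joint cost never increases across iterations; whether the limit is useful depends entirely on the prior, which is the point the book's convergence analysis formalizes.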
This book looks at the increasing interest in running microscopy processing algorithms on big image data by presenting the theoretical and architectural underpinnings of a web image processing pipeline (WIPP). Software-based methods and infrastructure components for processing big data microscopy experiments are presented to demonstrate how information processing of repetitive, laborious and tedious analysis can be automated with a user-friendly system. Interactions of web system components and their impact on computational scalability, provenance information gathering, interactive display, and computing are explained in a top-down presentation of technical details. Web Microanalysis of Big Image Data includes descriptions of WIPP functionalities, use cases, and components of the web software system (web server and client architecture, algorithms, and hardware-software dependencies). The book comes with test image collections and a web software system to increase the reader's understanding and to provide practical tools for conducting big image experiments. By providing educational materials and software tools at the intersection of microscopy image analyses and computational science, graduate students, postdoctoral students, and scientists will benefit from the practical experiences, as well as theoretical insights. Furthermore, the book provides software and test data, empowering students and scientists with tools to make discoveries with higher statistical significance. Once they become familiar with the web image processing components, they can extend and re-purpose the existing software to new types of analyses. Each chapter follows a top-down presentation, starting with a short introduction and a classification of related methods. Next, a description of the specific method used in accompanying software is presented. For several topics, examples of how the specific method is applied to a dataset (parameters, RAM requirements, CPU efficiency) are shown. 
Some tips are provided as practical suggestions to improve accuracy or computational performance.
In recent years, the paradigm of video coding has shifted from a frame-based approach to a content-based approach, particularly with the finalization of the ISO multimedia coding standard, MPEG-4. MPEG-4 is the emerging standard for the coding of multimedia content. It defines a syntax for a set of content-based functionalities, namely content-based interactivity, compression, and universal access. However, it does not specify how the video content is to be generated. To generate the video content, video has to be segmented into video objects, which are then tracked as they traverse the video frames. This book addresses the difficult problem of video segmentation and the extraction and tracking of video object planes as defined in MPEG-4. It then focuses on the specific issue of face segmentation and coding as applied to videoconferencing, in order to improve the quality of videoconferencing images, especially in the facial region.
Closed-circuit television (CCTV) cameras have been deployed increasingly pervasively in public spaces, including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze the content of massive amounts of public-space video data and has been one of the most active areas of computer vision research over the last two decades. The current focus of video analytics research has largely been on detecting alarm events and abnormal behaviours for public safety and security applications. However, CCTV installations are increasingly also being exploited for gathering and analyzing business intelligence information in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be used to collect statistical information about shopping behaviour and preferences for marketing (e.g., how many people entered a shop; how many females/males or which age groups showed interest in a particular product; how long they stayed in the shop; and what the frequent paths were), and to measure operational efficiency for improving the customer experience. Video analytics thus has enormous potential for non-security-oriented commercial applications. This book presents the latest developments in video analytics for business intelligence applications. It provides both academic and commercial practitioners with an understanding of the state of the art and a resource for potential applications and successful practice.
This book introduces Local Binary Patterns (LBP), arguably one of the most powerful texture descriptors, together with its variants. The volume provides the latest reviews of the literature and a presentation of some of the best LBP variants by researchers at the forefront of texture analysis research and research on LBP descriptors and variants. The value of LBP variants is illustrated with reported experiments using many databases representing a diversity of computer vision applications in medicine, biometrics, and other areas. There is also a chapter that provides an excellent theoretical foundation for texture analysis and LBP in particular. A special section focuses on LBP and LBP variants in the area of face recognition, including thermal face recognition. This book will be of value to anyone already in the field, as well as to those interested in learning more about this powerful family of texture descriptors.
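For readers new to the descriptor, the basic 8-neighbour operator can be sketched in a few lines of NumPy. This is the textbook LBP(8,1) formulation, not any of the variants the book covers, and the function name is illustrative:

```python
import numpy as np

def lbp_histogram(image):
    """Basic LBP(8,1): threshold each interior pixel's 8 neighbours against
    the pixel itself, pack the results into an 8-bit code, and summarise
    the texture as a normalised 256-bin histogram of the codes."""
    img = np.asarray(image, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        shifted = img[1 + dy: img.shape[0] - 1 + dy,
                      1 + dx: img.shape[1] - 1 + dx]
        codes |= (shifted >= center).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

The histogram, rather than the raw code image, is what is typically compared between textures; the many LBP variants surveyed in the book mainly change the sampling pattern, the thresholding rule, or the binning of codes.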
The integration of the third dimension in the production of spatial representations is widely recognized as a valuable approach to comprehending our reality, which is three-dimensional. Over the last decade, developments in 3D geoinformation (GI) systems have made substantial progress, and we are about to have a more complete spatial model and understanding of our planet at different scales. Hence, various communities and cities offer 3D landscape and 3D city models as a valuable source and instrument for the sustainable management of rural and urban resources. Municipal utilities and real estate companies also benefit from recent developments related to 3D applications. In order to present recent developments and discuss future trends, academics and practitioners met at the 7th International Workshop on 3D Geoinformation. This book comprises a selection of evaluated, high-quality papers presented at that workshop in May 2012. The topics focus explicitly on the latest achievements (methods, algorithms, models, systems) with respect to 3D geoinformation requirements. The book is aimed at decision makers and experts, as well as at students interested in the 3D component of geographical information science, including GI engineers, computer scientists, photogrammetrists, land surveyors, urban planners, and mapping specialists.
Vision-based control of wheeled mobile robots is an interesting field of research from a scientific and even social point of view due to its potential applicability. This book presents a formal treatment of some aspects of control theory applied to the problem of vision-based pose regulation of wheeled mobile robots, in which the robot has to reach a desired position and orientation specified by a target image. The problem is approached in such a way that vision and control are unified to achieve closed-loop stability, a large region of convergence free of local minima, and good robustness against parametric uncertainty. Three different control schemes that rely on monocular vision as the sole sensor are presented and evaluated experimentally. A common benefit of these approaches is that they are valid for imaging systems that approximately obey a central projection model, e.g., conventional cameras, catadioptric systems, and some fisheye cameras; the presented control schemes are thus generic approaches. A minimum set of visual measurements, integrated into adequate task functions, is taken from a geometric constraint imposed between corresponding image features. In particular, the epipolar geometry and the trifocal tensor are exploited, since they can be used for generic scenes. A detailed experimental evaluation is presented for each control scheme.
The book describes recent research results in the areas of modelling, creation, management, and presentation of interactive 3D multimedia content. It surveys the current state of the art in the field and identifies the most important research and design issues, which consecutive chapters then address: database modelling of 3D content, security in 3D environments, describing the interactivity of content, searching content, visualization of search results, modelling mixed-reality content, and efficient creation of interactive 3D content. Each chapter is illustrated with example applications based on the proposed approach. The final chapter discusses some important ethical issues related to the widespread use of virtual environments in everyday life. The book provides ready-to-use solutions for many important problems related to the creation of interactive 3D multimedia applications and will be primary reading for researchers and developers working in this domain.
This book examines paintings using a computational and quantitative approach. Specifically, it compares paintings to photographs, addressing the strengths and limitations of both. Particular aesthetic practices are examined, such as the vista, foreground-to-background organisation, and depth planes; these are analysed using a range of computational approaches, and clear observations are made. New generations of image-capture devices, such as Google Goggles and the light-field camera, promise a future in which the formal attributes of a photograph are made available for editing to a degree that has hitherto been the exclusive territory of painting. In this sense paintings and photographs are converging, and it therefore seems an opportune time to study the comparisons between them. In this context, the book includes cutting-edge work examining how some of the aesthetic attributes of a painting can be transferred to a photograph using the latest computational approaches.
For those involved in the design and analysis of electro-optical systems, the book outlines current and future ground, air, and spaceborne applications of electro-optical systems. It describes their performance requirements and practical methods of achieving design objectives.
Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.
This book presents practical optimization techniques used in image processing and computer vision problems. Ill-posed problems are introduced and used as examples to show how each type of problem is related to typical image processing and computer vision problems. Unconstrained optimization gives the best solution based on numerical minimization of a single, scalar-valued objective function or cost function. Unconstrained optimization problems have been intensively studied, and many algorithms and tools have been developed to solve them. Most practical optimization problems, however, arise with a set of constraints. Typical examples of constraints include: (i) pre-specified pixel intensity range, (ii) smoothness or correlation with neighboring information, (iii) existence on a certain contour of lines or curves, and (iv) given statistical or spectral characteristics of the solution. Regularized optimization is a special method used to solve a class of constrained optimization problems. The term regularization refers to the transformation of an objective function with constraints into a different objective function, automatically reflecting constraints in the unconstrained minimization process. Because of its simplicity and efficiency, regularized optimization has many application areas, such as image restoration, image reconstruction, optical flow estimation, etc. Optimization plays a major role in a wide variety of theories for image processing and computer vision. Various optimization techniques are used at different levels for these problems, and this volume summarizes and explains these techniques as applied to image processing and computer vision.
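To make the regularization idea concrete: a smoothness constraint can be folded into the data-fit term by minimizing ||h * x - y||^2 + lam ||L x||^2, where L is a discrete Laplacian. Under circular boundary conditions this unconstrained cost has a closed-form minimizer in the Fourier domain. The sketch below is illustrative, not code from the book, and the function name and default lam are assumptions:

```python
import numpy as np

def tikhonov_deblur(blurred, psf, lam=0.01):
    """Regularized restoration: minimize ||h * x - y||^2 + lam ||L x||^2,
    where L is a discrete Laplacian encoding the smoothness constraint.
    With circular boundaries the minimizer, per frequency, is
    X = conj(H) Y / (|H|^2 + lam |L|^2)."""
    shape = blurred.shape
    H = np.fft.fft2(psf, shape)                  # blur operator spectrum
    lap = np.zeros(shape)
    lap[0, 0] = 4.0                              # discrete Laplacian kernel
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    L = np.fft.fft2(lap)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
    return np.real(np.fft.ifft2(X))
```

The constraint never appears explicitly: the lam |L|^2 term in the denominator damps exactly the frequencies a naive inverse filter would amplify, which is the sense in which regularization "automatically reflects constraints in the unconstrained minimization process".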
The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph provides useful information for postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementations as conceived and developed by the authors. It records an account of research involving the fast three-step search, successive elimination, one-bit transformation, and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments; in this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.
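To make the search strategy concrete, the classic three-step search can be sketched as a plain software model in Python/NumPy. This is only a behavioural sketch under a conventional SAD cost, not the authors' VLSI architecture, and the function names are illustrative:

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference window."""
    h, w = block.shape
    return int(np.abs(ref[y:y + h, x:x + w].astype(np.int64)
                      - block.astype(np.int64)).sum())

def three_step_search(block, ref, y0, x0, step=4):
    """Three-step search: evaluate the centre and its 8 neighbours at a
    coarse step, re-centre on the best candidate, halve the step, repeat.
    Returns the motion vector (dy, dx) relative to (y0, x0) and its cost."""
    h, w = block.shape
    cy, cx = y0, x0
    best_cost = sad(block, ref, cy, cx)
    while step >= 1:
        best = (cy, cx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                    cost = sad(block, ref, y, x)
                    if cost < best_cost:
                        best_cost, best = cost, (y, x)
        cy, cx = best
        step //= 2
    return (cy - y0, cx - x0), best_cost
```

With an initial step of 4, the search covers a +/-7 pixel range with at most 25 SAD evaluations instead of the 225 needed by a full search, which is exactly the kind of regular, bounded computation that maps well onto low-power VLSI datapaths.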
Continuing in the footsteps of the pioneering first edition, Signal and Image Processing for Remote Sensing, Second Edition explores the most up-to-date signal and image processing methods for dealing with remote sensing problems. Although most data from satellites are in image form, signal processing can contribute significantly to extracting information from remotely sensed waveforms or time series data. This book combines both, providing a unique balance between the roles of signal processing and image processing. Featuring contributions from worldwide experts, the book continues to emphasize mathematical approaches. Not limited to satellite data, it also considers signals and images from hydroacoustic, seismic, microwave, and other sensors. Chapters cover important topics in signal and image processing and discuss techniques for dealing with remote sensing problems. Each chapter offers an introduction to the topic before delving into research results, making the book accessible to a broad audience. This second edition reflects the considerable advances that have occurred in the field: 23 of its 27 chapters are new or entirely rewritten, four are retained from the first edition, and 190 new figures have been added. Coverage includes new mathematical developments such as compressive sensing, empirical mode decomposition, and sparse representation, as well as new component analysis methods such as non-negative matrix and tensor factorization. The book also presents new experimental results on SAR and hyperspectral image processing. The emphasis is on mathematical techniques that will far outlast the rapidly changing sensor, software, and hardware technologies. Written for industrial and academic researchers and graduate students alike, this book helps readers connect the "dots" in image and signal processing.
The second edition is not intended to entirely replace the first, and readers are encouraged to consult both editions for a more complete picture of signal and image processing in remote sensing. See Signal and Image Processing for Remote Sensing (CRC Press, 2006).
Digital Video and HD: Algorithms and Interfaces provides a one-stop shop for the theory and engineering of digital video systems. Equally accessible to video engineers and those working in computer graphics, Charles Poynton's revision of his classic text covers emergent compression systems, including H.264 and VP8/WebM, and augments detailed information on JPEG, DVC, and MPEG-2 systems. This edition also introduces the technical aspects of file-based workflows and outlines the emerging domain of metadata, placing it in the context of digital video processing.
A 3D user interface (3DUI) is an interface in which the user performs tasks in three dimensions, for example through hand/body gestures, a motion controller (e.g., the Sony PlayStation Move), or virtual reality devices with tracked motion controllers. All technologies that let a user interact in three dimensions are called 3D user interface technologies. These interfaces have the potential to make games more immersive and engaging, and thus potentially provide a better user experience to gamers. Although 3D user interface technologies are available for games, it is unclear how their usage affects game play and whether there are any user performance benefits. This book presents state-of-the-art research on exploring 3D user interface technologies for improving video games. It also presents a review of research work done in this area and describes experiments focused on the usage of stereoscopic 3D, head tracking, and hand-gesture-based control in gaming scenarios. These experiments are systematic studies in gaming environments, aimed at understanding the effect of the underlying 3D interface technology on the gaming experience of a user. Based on these experiments, several design guidelines are presented which can aid game designers in designing better immersive games.
Since the early 20th century, medical imaging has been dominated by monochrome imaging modalities such as X-ray, computed tomography, ultrasound, and magnetic resonance imaging. As a result, color information has been overlooked in medical image analysis applications. Recently, various medical imaging modalities that involve color information have been introduced. These include cervicography, dermoscopy, fundus photography, gastrointestinal endoscopy, microscopy, and wound photography. However, in comparison to monochrome images, the analysis of color images is a relatively unexplored area. The multivariate nature of color image data presents new challenges for researchers and practitioners, as the numerous methods developed for monochrome images are often not directly applicable to multichannel images. The goal of this volume is to summarize the state-of-the-art in the utilization of color information in medical image analysis.