Arising from the fourth Dagstuhl conference entitled Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data (2011), this book offers a broad and vivid view of current work in this emerging field. Topics covered range from applications of the analysis of tensor fields to research on their mathematical and analytical properties. Part I, Tensor Data Visualization, surveys techniques for visualization of tensors and tensor fields in engineering, discusses the current state of the art and challenges, and examines tensor invariants and glyph design, including an overview of common glyphs. Part II, Representation and Processing of Higher-order Descriptors, describes a matrix representation of local phase, outlines mathematical morphological operations techniques, extended for use in vector images, and generalizes erosion to the space of diffusion-weighted MRI. Part III, Higher Order Tensors and Riemannian-Finsler Geometry, offers a powerful mathematical language to model and analyze large and complex diffusion data such as High Angular Resolution Diffusion Imaging (HARDI) and Diffusion Kurtosis Imaging (DKI). Part IV, Tensor Signal Processing, presents new methods for processing tensor-valued data, including a novel perspective on performing voxel-wise morphometry of diffusion tensor data using a kernel-based approach, explores the free-water diffusion model, and reviews proposed approaches for computing fabric tensors, emphasizing trabecular bone research. The last Part, Applications of Tensor Processing, discusses metric and curvature tensors, two of the most studied tensors in geometry processing. Also covered is a technique for diagnostic prediction of first-episode schizophrenia patients based on brain diffusion MRI data. The last chapter presents an interactive system integrating the visual analysis of diffusion MRI tractography with data from electroencephalography.
This book provides an introduction to fuzzy logic approaches useful in image processing. The authors start by introducing low- and medium-level image processing tasks such as thresholding, enhancement, edge detection, morphological filters, and segmentation, and show how fuzzy logic approaches apply. The book is divided into two parts. The first covers vagueness and ambiguity in digital images, fuzzy image processing, fuzzy rule-based systems, and fuzzy clustering. The second part covers applications to image processing: image thresholding, color contrast enhancement, edge detection, morphological analysis, and image segmentation. Throughout, the authors describe image processing algorithms based on fuzzy logic under methodological as well as applicative aspects. Implementations in Java are provided for the various applications.
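The fuzzy-thresholding idea outlined above can be sketched concretely: assign each grey level a membership value based on its distance to its class mean, measure the overall fuzziness of the resulting two-class partition, and pick the threshold that minimises it. The sketch below is a generic, simplified illustration in Python (the book's own implementations are in Java); the function name and the particular Huang-style membership and entropy formulas are illustrative choices, not the authors' code.

```python
import numpy as np

def fuzzy_threshold(img):
    """Pick the grey level whose two-class partition minimises total
    fuzziness, measured by Shannon's entropy of the membership values.
    Membership of a pixel with grey level g is 1 / (1 + |g - class mean| / C),
    where C is the grey-level range (a Huang-style formulation)."""
    g = np.asarray(img, dtype=float).ravel()
    C = g.max() - g.min()                      # assumes a non-constant image
    best_t, best_fuzz = None, np.inf
    for t in np.unique(g)[:-1]:                # every candidate threshold
        lo_mean = g[g <= t].mean()             # dark-class mean
        hi_mean = g[g > t].mean()              # bright-class mean
        mu = np.where(g <= t,
                      1.0 / (1.0 + np.abs(g - lo_mean) / C),
                      1.0 / (1.0 + np.abs(g - hi_mean) / C))
        mu = np.clip(mu, 1e-12, 1.0 - 1e-12)   # keep log() finite
        fuzz = -np.sum(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu))
        if fuzz < best_fuzz:
            best_t, best_fuzz = t, fuzz
    return best_t

# Two well-separated grey-level populations: the crisp split wins.
print(fuzzy_threshold([10, 12, 11, 10, 200, 205, 199, 202]))  # → 12.0
```

Crisp pixels (membership near 0 or 1) contribute almost nothing to the entropy, so the threshold that separates the two populations cleanly is the minimiser.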
Learn concepts central to visual special effects using the free Blackmagic Design Fusion 8.0 software package. This book also provides foundational background information regarding concepts central to digital image compositing, digital video editing, digital illustration, digital painting, 3D, and digital audio in the first six chapters on new media theory, concepts, and terminology. Building on these foundations, VFX Fundamentals introduces more advanced VFX concepts and pipelines as the chapters progress, covering topics such as flow node compositing, timeline animation, animated polyline masking, bluescreen and greenscreen matte pulling (generation) using Primatte and the Fusion 8 Ultra Keyer, motion tracking, 3D rendering and compositing, auxiliary channels, and particle systems and particle physics dynamics, among other topics.
What You'll Learn:
- See the new media components (raster, vector, audio, video, rendering) needed for VFX
- Discover the concepts behind the VFX content production workflow
- Install and utilize Blackmagic Design Fusion 8 and its visual programming language
- Master the concepts behind resolution, aspect ratio, bit-rate, color depth, layers, alpha, and masking
- Work with 2D VFX concepts such as animated masking, matte pulling (Primatte V) and motion tracking
- Harness 3D VFX concepts such as 3D geometry, materials, lighting, animation and auxiliary channels
- Use advanced VFX concepts such as particle systems animation using real-world physics (forces)
Who This Book Is For: SFX artists, VFX artists, video editors, website developers, filmmakers, 2D and 3D animators, digital signage producers, e-learning content creators, game developers, multimedia producers.
This four-volume set (CCIS 643, 644, 645, 646) constitutes the refereed proceedings of the 16th Asia Simulation Conference and the First Autumn Simulation Multi-Conference, AsiaSim / SCS AutumnSim 2016, held in Beijing, China, in October 2016. The 265 revised full papers presented were carefully reviewed and selected from 651 submissions. The papers in this third volume of the set are organized in topical sections on Cloud technologies in simulation applications; fractional calculus with applications and simulations; modeling and simulation for energy, environment and climate; SBA virtual prototyping engineering technology; simulation and Big Data.
This book constitutes the refereed proceedings of the 21st Annual Conference on Medical Image Understanding and Analysis, MIUA 2017, held in Edinburgh, UK, in July 2017. The 82 revised full papers presented were carefully reviewed and selected from 105 submissions. The papers are organized in topical sections on retinal imaging, ultrasound imaging, cardiovascular imaging, oncology imaging, mammography image analysis, image enhancement and alignment, modeling and segmentation of preclinical, body and histological imaging, feature detection and classification. The chapters 'Model-Based Correction of Segmentation Errors in Digitised Histological Images' and 'Unsupervised Superpixel-Based Segmentation of Histopathological Images with Consensus Clustering' are open access under a CC BY 4.0 license.
This, the 29th issue of the Transactions on Computational Science journal, comprises seven full papers focusing on the area of secure communication. Topics covered include weak radio signals, efficient circuits, multiple antenna sensing techniques, modes of inter-computer communication and fault types, geometric meshes, and big data processing in distributed environments.
Volume 3 of the second edition of the fully revised and updated Digital Signal and Image Processing using MATLAB, following the first two volumes on the "Fundamentals" and on "Advances and Applications: The Deterministic Case", focuses on the stochastic case. It will be of particular benefit to readers who already possess a good knowledge of MATLAB, a command of the fundamental elements of digital signal processing, familiarity with the fundamentals of continuous-spectrum spectral analysis, and some mathematical knowledge of Hilbert spaces. This volume is focused on applications, but it also provides a good presentation of the principles. A number of elements closer in nature to statistics than to signal processing itself are widely discussed, reflecting a current tendency in signal processing to draw on techniques from that field. More than 200 programs and functions are provided in the MATLAB language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.
This book presents established and new approaches to performing calculations of electrostatic interactions at the nanoscale, with a particular focus on molecular biology applications. It is based on the proceedings of the Computational Electrostatics for Biological Applications international meeting, which brought together researchers in computational disciplines to discuss and explore diverse methods for improving electrostatic calculations. Fostering an interdisciplinary approach to the description of complex physical and biological problems, this book encompasses contributions originating in the fields of geometry processing, shape modeling, applied mathematics, and computational biology and chemistry. The main topics covered are theoretical and numerical aspects of the solution of the Poisson-Boltzmann equation, together with surveys and comparisons of geometric approaches to the modeling of molecular surfaces and the related discretization and computational issues. It also includes a number of contributions addressing applications in biology, biophysics and nanotechnology. The book is primarily intended as a reference for researchers in the computational molecular biology and chemistry fields. As such, it also aims to become a key source of information for a wide range of scientists who need to know how modeling and computing at the molecular level may influence the design and interpretation of their experiments.
This book represents the refereed proceedings of the Tenth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, held at the University of New South Wales (Australia) in February 2012. These biennial conferences are major events for Monte Carlo research and the premier event for quasi-Monte Carlo research. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. The reader will find information on the latest developments in these very active areas. The book is an excellent reference for theoreticians and practitioners interested in solving high-dimensional computational problems arising, in particular, in finance, statistics and computer graphics.
Outlier-contaminated data is a fact of life in computer vision. For computer vision applications to perform reliably and accurately in practical settings, the processing of the input data must be conducted in a robust manner. In this context, the maximum consensus robust criterion plays a critical role by allowing the quantity of interest to be estimated from noisy and outlier-prone visual measurements. The maximum consensus problem refers to the problem of optimizing the quantity of interest according to the maximum consensus criterion. This book provides an overview of the algorithms for performing this optimization. The emphasis is on the basic operation or "inner workings" of the algorithms, and on their mathematical characteristics in terms of optimality and efficiency. The applicability of the techniques to common computer vision tasks is also highlighted. By collecting existing techniques in a single volume, this book aims to trigger further developments in this theoretically interesting and practically important area.
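The maximum consensus criterion can be illustrated with the simplest randomised scheme: hypothesise a model from a minimal sample of the data, count the measurements that agree with it to within a tolerance, and keep the hypothesis with the largest consensus (inlier) set. Below is a minimal line-fitting sketch in Python; the function name, the vertical-residual inlier test, and all parameter values are illustrative assumptions, not taken from the book.

```python
import random

def max_consensus_line(points, threshold=0.5, iters=500, seed=0):
    """Estimate a line y = a*x + b by maximising the number of inliers,
    i.e. the points within `threshold` vertical distance of the line.
    Each hypothesis is drawn from a minimal sample of two points."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                       # skip vertical sample pairs
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) <= threshold for x, y in points)
        if inliers > best_inliers:         # keep the largest consensus set
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 8 points on y = 2x + 1 plus 3 gross outliers
data = [(x, 2 * x + 1) for x in range(8)] + [(1, 40), (3, -25), (6, 90)]
(a, b), n = max_consensus_line(data)
print(a, b, n)  # recovers a=2, b=1 with a consensus set of the 8 clean points
```

This randomised loop is only a heuristic; the book's subject is precisely how such heuristics relate to exact and approximate algorithms for the underlying optimisation problem.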
R is a powerful and free software system for data analysis and graphics, with over 5,000 add-on packages available. This book introduces R using SAS and SPSS terms with which you are already familiar. It demonstrates which of the add-on packages are most like SAS and SPSS and compares them to R's built-in functions. It steps through over 30 programs written in all three packages, comparing and contrasting the packages' differing approaches. The programs and practice datasets are available for download. The glossary defines over 50 R terms using SAS/SPSS jargon and again using R jargon. The table of contents and the index allow you to find equivalent R functions by looking up both SAS statements and SPSS commands. When finished, you will be able to import data, manage and transform it, create publication quality graphics, and perform basic statistical analyses. This new edition has updated programming, an expanded index, and even more statistical methods covered in over 25 new sections.
This book provides a timely and unique survey of next-generation social computational methodologies. The text explains the fundamentals of this field, and describes state-of-the-art methods for inferring social status, relationships, preferences, intentions, personalities, needs, and lifestyles from human information in unconstrained visual data. Topics and features: includes perspectives from an international and interdisciplinary selection of pre-eminent authorities; presents balanced coverage of both detailed theoretical analysis and real-world applications; examines social relationships in human-centered media for the development of socially-aware video, location-based, and multimedia applications; reviews techniques for recognizing the social roles played by people in an event, and for classifying human-object interaction activities; discusses the prediction and recognition of human attributes via social media analytics, including social relationships, facial age and beauty, and occupation.
Intelligent multimedia surveillance concerns the analysis of multiple sensing inputs including video and audio streams, radio-frequency identification (RFID), and depth data. These data are processed for the automated detection and tracking of people, vehicles, and other objects. The goal is to locate moving targets, to understand their behavior, and to detect suspicious or abnormal activities for crime prevention. Despite its benefits, there is societal apprehension regarding the use of such technology, so an important challenge in this research area is to balance public safety and privacy. This edited book presents recent findings in the field of intelligent multimedia surveillance emerging from disciplines such as multimedia computing, computer vision, and artificial intelligence. It consists of nine chapters addressing intelligent video surveillance, video analysis of crowds, privacy issues in intelligent multimedia surveillance, RFID technology for localization of objects, object tracking using visual saliency information, estimating multiresolution depth using active stereo vision, and performance evaluation for video surveillance systems. The book will be of value to researchers and practitioners working on related problems in security, multimedia, and artificial intelligence.
This volume contains the proceedings from two closely related workshops: Computational Diffusion MRI (CDMRI'13) and Mathematical Methods from Brain Connectivity (MMBC'13), held under the auspices of the 16th International Conference on Medical Image Computing and Computer Assisted Intervention, which took place in Nagoya, Japan, in September 2013. Inside, readers will find contributions ranging from mathematical foundations and novel methods for validating the inference of large-scale connectivity from neuroimaging data to the statistical analysis of the data, accelerated methods for data acquisition, and the most recent developments in mathematical diffusion modeling. This volume offers a valuable starting point for anyone interested in learning computational diffusion MRI and mathematical methods for brain connectivity, and offers new perspectives and insights on current research challenges for those already working in the field. It will be of interest to researchers and practitioners in computer science, MR physics, and applied mathematics.
Digital Imaging targets anyone with an interest in digital imaging, professional or private, who uses even quite modest equipment such as a PC, digital camera and scanner, a graphics editor such as PAINT, and an inkjet printer. Uniquely, it is intended to fill the gap between the highly technical texts for academics (with access to expensive equipment) and the superficial introductions for amateurs. The four-part treatment spans theory, technology, programs and practice. Theory covers integer arithmetic, additive and subtractive color, greyscales, computational geometry, and a new presentation of discrete Fourier analysis; Technology considers bitmap file structures, scanners, digital cameras, graphic editors, and inkjet printers; Programs develops several processing tools for use in conjunction with a standard Paint graphics editor; Practice discusses 1-bit, greyscale, 4-bit, 8-bit, and 24-bit images. Relevant QBASIC code is supplied on an accompanying CD and algorithms are listed in the appendix. Readers can attain the level of understanding and the practical insights needed to obtain optimal use and satisfaction from even the most basic digital-imaging equipment.
Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the state of the art in this new branch of signal processing, offering a great deal of research and discussion by leading experts in the area. The wide-ranging volume offers an overview of cutting-edge research on the newest tensor processing techniques and their application to different domains related to computer vision and image processing. This comprehensive text will prove to be an invaluable reference and resource for researchers, practitioners and advanced students working in the area of computer vision and image processing.
The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community that is growing year after year. This volume is a continuation of the previously published Springer volume "Artificial Intelligence Techniques for Computer Graphics". Nowadays, intelligent techniques are used more and more in Computer Graphics, not only to optimise the processing time, but also to find more accurate solutions to many Computer Graphics problems than traditional methods can. What are intelligent techniques for Computer Graphics? Mainly, they are techniques based on Artificial Intelligence. So, problem resolution (especially constraint satisfaction) techniques, as well as evolutionary techniques, are used in declarative scene modelling; heuristic search techniques, as well as strategy game techniques, are currently used in scene understanding and in virtual world exploration; multi-agent techniques and evolutionary algorithms are used in behavioural animation; and so on. However, even if in most cases the intelligent techniques used come from Artificial Intelligence, sometimes simple human intelligence can find interesting solutions in cases where traditional Computer Graphics techniques, even combined with Artificial Intelligence ones, cannot propose any satisfactory solution. A good example of such a case is scene understanding, when several parts of the scene are impossible to access.
This book looks at ways of visualizing large datasets, whether large in numbers of cases, large in numbers of variables, or both. All ideas are illustrated with displays from analyses of real datasets, and the importance of interpreting displays effectively is emphasized. Graphics should be drawn to convey information, and the book includes many insightful examples. New approaches are needed to visualize the information in large datasets, and most of the innovations described in this book are developments of standard graphics. The book is accessible to readers with some experience of drawing statistical graphics.
For some time, medicine has been an important driver for the development of data processing and visualization techniques. Improved technology offers the capacity to generate larger and more complex data sets related to imaging and simulation. This, in turn, creates the need for more effective visualization tools for medical practitioners to interpret and utilize data in meaningful ways. The first edition of Visualization in Medicine and Life Sciences (VMLS) emerged from a workshop convened to explore the significant data visualization challenges created by emerging technologies in the life sciences. The workshop and the book addressed questions of whether medical data visualization approaches can be devised or improved to meet these challenges, with the promise of ultimately being adopted by medical experts. Visualization in Medicine and Life Sciences II follows the second international VMLS workshop, held in Bremerhaven, Germany, in July 2009. Internationally renowned experts from the visualization and driving application areas came together for this second workshop. The book presents peer-reviewed research and survey papers which document and discuss the progress made, explore new approaches to data visualization, and assess new challenges and research directions.
This book constitutes the refereed proceedings of the 11th Chinese Conference on Image and Graphics Technologies and Applications, IGTA 2016, held in Beijing, China, in July 2016. The 27 papers presented were carefully reviewed and selected from 69 submissions. They provide a forum for sharing progress in the areas of image processing technology; image analysis and understanding; computer vision and pattern recognition; big data mining, computer graphics and VR; as well as image technology applications.
This BCAM SpringerBriefs volume, based on lectures by the author, is a treatise on the Infinity-Laplace Equation, which has inherited many features from the ordinary Laplace Equation. The Infinity-Laplace Equation has delightful counterparts to the Dirichlet integral, the mean value property, the Brownian motion, Harnack's inequality, and so on. This "fully non-linear" equation has applications to image processing and to mass transfer problems, and it provides optimal Lipschitz extensions of boundary values.
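For readers unfamiliar with the equation itself: in coordinates (notation assumed here, not quoted from the book), the infinity-Laplacian is the degenerate second-order operator

```latex
\Delta_\infty u \;=\; \sum_{i,j=1}^{n}
  \frac{\partial u}{\partial x_i}\,
  \frac{\partial u}{\partial x_j}\,
  \frac{\partial^2 u}{\partial x_i \,\partial x_j} \;=\; 0,
```

equivalently $\Delta_\infty u = \langle D^2 u \, \nabla u, \nabla u \rangle$; viscosity solutions of this equation yield the optimal Lipschitz extensions mentioned above.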
This book describes the design, development, and testing of a novel digital watermarking technique for color images using Magic Square and Ridgelet transforms. The novel feature of the method is that it generates and uses multiple copies of the digital watermark. The book describes how the method was tested for embedding digital watermarks into color cover images, resulting in very high PSNR values and yielding results comparable with existing watermarking techniques. To reach this new method, eight different techniques were designed, developed, and tested. First, the authors test two digital watermarking techniques based on encryption: Image Watermark Using Complete Complementary Code Technique (CCCT) and Image Watermarking Using CCC-Fast Walsh Hadamard Transform Technique (CCC-FWHTT). Next, four digital watermarking techniques based on curvelet transforms are discussed: Image Watermarking Using Curvelet Transform (WCT), Watermark Wavelets in Curvelets of Cover Image (WWCT), Resized Watermark into Curvelets of Cover Image (RWCT), and Resized Watermark Wavelets into Curvelets of Cover Image (RWWCT). Then, two final techniques are presented: Image Watermarking Based on Magic Square (MST) and Image Watermarking Based on Magic Square and Ridgelet Transform (MSRTT). Future research directions are explored in the final chapter. Designed for professionals and researchers in computer graphics and imaging, Digital Watermarking Techniques in Curvelet and Ridgelet Domain is also a useful tool for advanced-level students.
Foundations of Digital Art and Design, Second Edition, fuses design fundamentals and software training into one cohesive approach! All students of digital design and production, whether learning in a classroom or on their own, need to understand the basic principles of design. These principles are often excluded from books that teach software. Foundations of Digital Art and Design reinvigorates software training by integrating design exercises into tutorials that fuse design fundamentals and core Adobe Creative Cloud skills. The result is a comprehensive design learning experience organized into five sections that focus on vector art, photography, image manipulation, typography, and effective work habits for digital artists. Design topics and principles include: Bits, Dots, Lines, Shapes, Unity, Rule of Thirds, Zone System, Color Models, Collage, Appropriation, Gestalt, The Bauhaus Basic Course Approach, Continuity, Automation, and Revision.
This book:
- Teaches art and design principles with references to contemporary digital art alongside digital tools and processes in Adobe Creative Cloud
- Addresses the growing trend of compressing design fundamentals and design software into the same course in universities and design colleges
- Times each lesson to be used in 50- to 90-minute class sessions, with additional practice materials available online
- Includes free video screencasts that demonstrate key concepts in every chapter
Download work files and bonus chapters, view screencasts, connect with the author online, and more; see the Introduction to the book for details. "This ambitious book teaches visual thinking and software skills together. The text leads readers step-by-step through the process of creating dynamic images using a range of powerful applications. The engaging, experimental exercises take this project well beyond the typical software guide." - ELLEN LUPTON, co-author of Graphic Design: The New Basics
Modeling data from visual and linguistic modalities together creates opportunities for better understanding of both, and supports many useful applications. Examples of dual visual-linguistic data include images with keywords, video with narrative, and figures in documents. We consider two key task-driven themes: translating from one modality to another (e.g., inferring annotations for images) and understanding the data using all modalities, where one modality can help disambiguate information in another. The multiple modalities can either be essentially semantically redundant (e.g., keywords provided by a person looking at the image) or largely complementary (e.g., metadata such as the camera used). Redundancy and complementarity are two endpoints of a scale, and we observe that good performance on translation requires some redundancy, and that joint inference is most useful where some information is complementary. Computational methods discussed are broadly organized into ones for simple keywords, ones going beyond keywords toward natural language, and ones considering sequential aspects of natural language. Methods for keywords are further organized based on localization of semantics, going from words about the scene taken as a whole, to words that apply to specific parts of the scene, to relationships between parts. Methods going beyond keywords are organized by the linguistic roles that are learned, exploited, or generated. These include proper nouns, adjectives, spatial and comparative prepositions, and verbs. More recent developments in dealing with sequential structure include automated captioning of scenes and video, alignment of video and text, and automated answering of questions about scenes depicted in images.
Because circular objects are projected to ellipses in images, ellipse fitting is a first step for 3-D analysis of circular objects in computer vision applications. For this reason, the study of ellipse fitting began as soon as computers came into use for image analysis in the 1970s, but it is only recently that optimal computation techniques based on the statistical properties of noise were established. These include renormalization (1993), which was then improved as FNS (2000) and HEIV (2000). Later, further improvements, called hyperaccurate correction (2006), HyperLS (2009), and hyper-renormalization (2012), were presented. Today, these are regarded as the most accurate fitting methods among all known techniques. This book describes these algorithms as well as implementation details and applications to 3-D scene analysis. We also present general mathematical theories of statistical optimization underlying all ellipse fitting algorithms, including rigorous covariance and bias analyses and the theoretical accuracy limit. The results can be directly applied to other computer vision tasks including computing fundamental matrices and homographies between images. This book can serve not simply as a reference on ellipse fitting algorithms for researchers, but also as learning material for beginners who want to start computer vision research. The sample program codes are downloadable from the website: https://sites.google.com/a/morganclaypool.com/ellipse-fitting-for-computer-vision-implementation-and-applications.
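As a point of reference for the optimal methods such books cover, the crudest baseline, plain algebraic least squares, fits the conic ax^2 + bxy + cy^2 + dx + ey + f = 0 by minimising the algebraic residual over unit-norm coefficient vectors, which an SVD solves directly. The Python sketch below is this naive baseline only (function name and test data are illustrative), not the hyperaccurate or hyper-renormalization estimators described above.

```python
import numpy as np

def fit_conic(pts):
    """Algebraic least-squares conic fit: find theta minimising ||D @ theta||
    subject to ||theta|| = 1, where each row of the design matrix D is
    [x^2, xy, y^2, x, y, 1] for one observed point (x, y)."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The minimiser is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

# Noise-free points on the circle x^2 + y^2 = 4 (a special ellipse).
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])
theta = fit_conic(pts)
theta = theta / theta[0]     # normalise so the x^2 coefficient is 1
print(np.round(theta, 6))    # close to [1, 0, 1, 0, 0, -4], i.e. x^2 + y^2 - 4 = 0
```

On noise-free data this recovers the conic exactly; under noise the estimator is biased, and that bias is precisely what renormalization-family methods are designed to remove.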