Video Object Extraction and Representation: Theory and Applications is an essential reference for electrical engineers working in video; computer scientists researching or building multimedia databases; video system designers; students of video processing; video technicians; and designers working in the graphic arts. In the coming years, the explosion of computer technology will enable a new form of digital media. Along with broadband Internet access and MPEG standards, this new medium requires a computational infrastructure to allow users to grab and manipulate content. The book reviews relevant technologies and standards for content-based processing and their interrelations. Within this overview, the book focuses on two problems at the heart of the algorithmic/computational infrastructure: video object extraction, or how to automatically package raw visual information by content; and video object representation, or how to automatically index and catalogue extracted content for browsing and retrieval. The book analyzes the designs of two novel, working systems for content-based extraction and representation in support of the MPEG-4 and MPEG-7 video standards, respectively. Features of the book include: an overview of MPEG standards; a working system for automatic video object segmentation; a working system for video object query by shape; novel technology for a wide range of recognition problems; and an overview of neural network and vision technologies. Video Object Extraction and Representation: Theory and Applications will be of interest to research scientists and practitioners working in fields related to the topic. It may also be used as an advanced-level graduate text.
Super-Resolution Imaging serves as an essential reference for both academicians and practicing engineers. It can be used both as a text for advanced courses in imaging and as a desk reference for those working in multimedia, electrical engineering, computer science, and mathematics. The first book to cover the new research area of super-resolution imaging, this text includes work on the following groundbreaking topics: Image zooming based on wavelets and generalized interpolation; Super-resolution from sub-pixel shifts; Use of blur as a cue; Use of warping in super-resolution; Resolution enhancement using multiple apertures; Super-resolution from motion data; Super-resolution from compressed video; Limits in super-resolution imaging. Written by the leading experts in the field, Super-Resolution Imaging presents a comprehensive analysis of current technology, along with new research findings and directions for future work.
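As a toy illustration of the "super-resolution from sub-pixel shifts" idea mentioned above, the sketch below performs a naive shift-and-add reconstruction under strong simplifying assumptions (exactly known shifts on a fraction-of-a-pixel grid, no blur or noise model). The function and variable names are ours, not drawn from the book.

```python
import numpy as np

def shift_and_add(lr_frames, shifts, scale):
    """Toy super-resolution: place each low-res pixel onto a finer grid
    at its known sub-pixel offset, then average overlapping contributions."""
    h, w = lr_frames[0].shape
    H, W = h * scale, w * scale
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # dy, dx are offsets measured in high-resolution pixels (0 <= d < scale)
        ys = np.arange(h) * scale + dy
        xs = np.arange(w) * scale + dx
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1          # leave unobserved grid cells at zero
    return acc / cnt

# Usage with synthetic data: a high-res ramp observed at four quarter-pixel shifts
hr = np.add.outer(np.arange(8), np.arange(8)).astype(float)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
lr = [hr[dy::2, dx::2] for dy, dx in shifts]     # 2x decimation at each offset
print(shift_and_add(lr, shifts, scale=2))        # recovers the 8x8 ramp exactly
```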
The recent emergence of Local Binary Patterns (LBP) has led to significant progress in applying texture methods to various computer vision problems and applications. The focus of this research has broadened from 2D textures to 3D textures and spatiotemporal (dynamic) textures. Also, where texture was once utilized for applications such as remote sensing, industrial inspection and biomedical image analysis, the introduction of LBP-based approaches has provided outstanding results in problems relating to face and activity analysis, with future scope for face and facial expression recognition, biometrics, visual surveillance and video analysis. "Computer Vision Using Local Binary Patterns" provides a detailed description of the LBP methods and their variants in both the spatial and spatiotemporal domains. This comprehensive reference also provides an excellent overview of how texture methods can be utilized for solving different kinds of computer vision and image analysis problems. Source code for the basic LBP algorithms, demonstrations, some databases and a comprehensive LBP bibliography can be found on an accompanying web site. Topics include: local binary patterns and their variants in spatial and spatiotemporal domains; texture classification and segmentation; description of interest regions; applications in image retrieval and 3D recognition; recognition and segmentation of dynamic textures; background subtraction; recognition of actions; face analysis using still images and image sequences; visual speech recognition; and LBP in various applications. Written by pioneers of LBP, this book is an essential resource for researchers, professional engineers and graduate students in computer vision, image analysis and pattern recognition. The book will also be of interest to all those who work with specific applications of machine vision.
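As a concrete illustration of the basic operator the blurb refers to, here is a minimal NumPy sketch of the classic 3x3 (8-neighbour) LBP code and the histogram descriptor built from it. It is our own simplified version, not the reference implementation distributed on the book's companion site, and the naming is ours.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour LBP: threshold each neighbour against the centre
    pixel and pack the 8 resulting bits into a code in [0, 255]."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                         # centre pixels (interior only)
    # neighbour offsets, enumerated clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        codes += (n >= c).astype(int) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalised LBP histogram used as a texture descriptor."""
    codes = lbp_3x3(img)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# Usage: compare two random textures by their LBP histograms
rng = np.random.default_rng(0)
h1 = lbp_histogram(rng.integers(0, 256, size=(64, 64)))
h2 = lbp_histogram(rng.integers(0, 256, size=(64, 64)))
print("histogram intersection:", np.minimum(h1, h2).sum())
```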
ICMCCA 2012 is the first International Conference on Multimedia Processing, Communication and Computing Applications, and the theme of the conference is 'Multimedia Processing and its Applications'. Multimedia processing has been an active research area contributing to many frontiers of today's science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology. The prime objective of the book is to familiarize readers with the latest scientific developments taking place in the various fields of multimedia processing, which is widely used in many disciplines such as Medical Diagnosis, Digital Forensics, Object Recognition, Image and Video Analysis, Robotics, the Military, the Automotive Industry, Surveillance and Security, Quality Inspection, etc. The book will help the research community gain insight into the overlapping work being carried out across the globe at many medical hospitals and institutions, defence labs, forensic labs, academic institutions, IT companies, and security and surveillance organisations. It also discusses the latest state-of-the-art research problems and techniques, and helps to encourage, motivate and introduce budding researchers to the larger domain of multimedia.
Basics of Game Design is for anyone wanting to become a professional game designer. Focusing on creating the game mechanics for data-driven games, it covers role-playing, real-time strategy, first-person shooter, simulation, and other games. Written by a 25-year veteran of the game industry, the guide offers detailed explanations of how to design the data sets used to resolve game play for moving, combat, solving puzzles, interacting with NPCs, managing inventory, and much more. Advice on developing stories for games, building maps and levels, and designing the graphical user interface is also included.
Nature-inspired algorithms such as cuckoo search and the firefly algorithm have become popular and widely used in many applications in recent years. These algorithms are flexible, efficient and easy to implement. New progress has been made in the last few years, and it is timely to summarize the latest developments in cuckoo search and the firefly algorithm and their diverse applications. This book reviews both theoretical studies and applications with detailed algorithm analysis, implementation and case studies so that readers can benefit most from it. Application topics are contributed by many leading experts in the field. Topics include cuckoo search, the firefly algorithm, algorithm analysis, feature selection, image processing, the travelling salesman problem, neural networks, GPU optimization, scheduling, queuing, multi-objective manufacturing optimization, semantic web services, shape optimization, and others. This book can serve as an ideal reference for both graduates and researchers in computer science, evolutionary computing, machine learning, computational intelligence, and optimization, as well as engineers in business intelligence, knowledge management and information technology.
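To make the flavour of these metaheuristics concrete, the following is a minimal firefly algorithm for continuous minimisation. The parameter names (alpha, beta0, gamma) follow common usage in the literature, but the specific settings and the sphere test function are illustrative choices of ours, not taken from the book.

```python
import numpy as np

def firefly_minimise(f, dim, n_fireflies=20, n_iters=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-5.0, 5.0),
                     seed=0):
    """Minimal firefly algorithm for continuous minimisation.
    Brighter fireflies (lower objective value) attract dimmer ones."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_fireflies, dim))
    fitness = np.apply_along_axis(f, 1, x)
    for _ in range(n_iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fitness[j] < fitness[i]:              # j is brighter than i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    fitness[i] = f(x[i])
        alpha *= 0.97                                    # gradually reduce randomisation
    best = np.argmin(fitness)
    return x[best], fitness[best]

# Usage: minimise the sphere function; the optimum is at the origin
sphere = lambda v: float(np.sum(v ** 2))
best_x, best_f = firefly_minimise(sphere, dim=5)
print(best_x, best_f)
```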
Quick sketching is the best technique an animator can use to stay finely tuned and to keep the creative juices flowing: to keep the sense of observation heightened and hand-eye coordination sharp, an animator needs to draw and sketch constantly. Quick Sketching with Ron Husband offers instruction in quick sketching and all its techniques. From observing positive and negative space and learning to recognize simple shapes in complex forms to action analysis and using the line of action, this Disney legend teaches you how to sketch using all these components, and how to do it in a matter of seconds. On top of instruction and advice, you'll also see Ron's portfolio of select art representing his growth as an artist throughout the years. Watch his drawings as he grows from a young, talented artist to a true Disney animator. Follow him as he goes around the world and sketches flamenco dancers, football players, bakers, joggers, lions, tigers, anyone, and anything. As if instruction and inspiration in one place weren't enough, you'll find a sketchbook included, so you can flip between Ron's techniques and your own work on perfecting basic shapes. Or take the book on the road: read Ron's advice, sketch away, and capture the world around you.
'Subdivision' is a way of representing smooth shapes in a computer. A curve or surface (both of which contain an infinite number of points) is described in terms of two objects. One object is a sequence of vertices, which we visualise as a polygon, for curves, or a network of vertices, which we visualise by drawing the edges or faces of the network, for surfaces. The other object is a set of rules for making denser sequences or networks. When applied repeatedly, the denser and denser sequences are claimed to converge to a limit, which is the curve or surface that we want to represent. This book focusses on curves, because the theory for curves is complete enough that a book claiming that our understanding is complete is exactly what is needed to stimulate research proving that claim wrong, and because there are already a number of good books on subdivision surfaces. The way in which the limit curve relates to the polygon, and a lot of interesting properties of the limit curve, depend on the set of rules, and this book is about how one can deduce those properties from the set of rules, and how one can then use that understanding to construct rules which give the properties that one wants.
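The polygon-plus-rules idea can be made tangible with one well-known set of rules: Chaikin's corner-cutting scheme, sketched below for a closed control polygon. Repeated application converges to a quadratic B-spline curve. This is our illustration of the general framework, not an algorithm reproduced from the book.

```python
import numpy as np

def chaikin_step(points):
    """One round of Chaikin corner cutting on a closed control polygon.
    Every edge (P, Q) is replaced by the two points 3/4*P + 1/4*Q and
    1/4*P + 3/4*Q, so the vertex count doubles at each step."""
    p = np.asarray(points, dtype=float)
    q = np.roll(p, -1, axis=0)                 # next vertex (wrapping around)
    cut = np.empty((2 * len(p), p.shape[1]))
    cut[0::2] = 0.75 * p + 0.25 * q
    cut[1::2] = 0.25 * p + 0.75 * q
    return cut

def subdivide(points, levels):
    """Apply the refinement rules repeatedly; the polygons converge to a
    quadratic B-spline curve as the number of levels grows."""
    for _ in range(levels):
        points = chaikin_step(points)
    return points

# Usage: refine a square three times (4 -> 32 vertices)
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(subdivide(square, levels=3).shape)       # (32, 2)
```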
In recent years there has been an increasing interest in Second Generation Image and Video Coding Techniques. These techniques introduce new concepts from image analysis that greatly improve the performance of the coding schemes for very high compression. This interest has been further emphasized by the future MPEG-4 standard. Second generation image and video coding techniques are the ensemble of approaches proposing new and more efficient image representations than the conventional canonical form. As a consequence, the human visual system becomes a fundamental part of the encoding/decoding chain. More insight into the distinction between the first and second generations can be gained by noticing that image and video coding is basically carried out in two steps. First, image data are converted into a sequence of messages and, second, code words are assigned to the messages. Methods of the first generation put the emphasis on the second step, whereas methods of the second generation put it on the first step and use available results for the second step. As a result of including the human visual system, the second generation can also be seen as an approach that views the image as composed of different entities called objects. This implies that the image or sequence of images must first be analyzed and/or segmented in order to find the entities. It is in this context that this book selects three main approaches as second generation video coding techniques: segmentation-based schemes, model-based schemes and fractal-based schemes. Video Coding: The Second Generation Approach is an important introduction to the new coding techniques for video. As such, all researchers, students and practitioners working in image processing will find this book of interest.
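The two coding steps described above can be illustrated with a deliberately simple first-generation example: run-length analysis turns a binary scanline into a sequence of messages, and a fixed-length code then assigns bit patterns to those messages (second-generation methods replace the first step with object- or segmentation-based analysis). The sketch and its names are ours, chosen only to make the distinction tangible.

```python
from itertools import groupby

def to_messages(scanline):
    """Step 1 (analysis): convert a binary scanline into run-length
    messages of the form (symbol, run_length)."""
    return [(sym, len(list(run))) for sym, run in groupby(scanline)]

def to_codewords(messages, max_run=8):
    """Step 2 (code assignment): map each message to a fixed-length
    codeword -- 1 bit for the symbol, 3 bits for the run length."""
    bits = []
    for sym, length in messages:
        while length > 0:
            chunk = min(length, max_run)
            bits.append(f"{sym}{chunk - 1:03b}")
            length -= chunk
    return bits

# Usage
line = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
msgs = to_messages(line)          # [(0, 4), (1, 2), (0, 11), (1, 1)]
print(msgs)
print(to_codewords(msgs))         # ['0011', '1001', '0111', '0010', '1000']
```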
Bringing together key researchers in disciplines ranging from visualization and image processing to applications in structural mechanics, fluid dynamics, elastography, and numerical mathematics, the workshop that generated this edited volume was the third in the successful Dagstuhl series. Its aim, reflected in the quality and relevance of the papers presented, was to foster collaboration and fresh lines of inquiry in the analysis and visualization of tensor fields, which offer a concise model for numerous physical phenomena. Despite their utility, there remains a dearth of methods for studying all but the simplest ones, a shortage the workshops aim to address. Documenting the latest progress and open research questions in tensor field analysis, the chapters reflect the excitement and inspiration generated by this latest Dagstuhl workshop, held in July 2009. The topics they address range from applications of the analysis of tensor fields to purer research into their mathematical and analytical properties. They show how cooperation and the sharing of ideas and data between those engaged in pure and applied research can open new vistas in the study of tensor fields. "
Single-frame filmmaking has been around as long as film itself. It is the ancestor of modern-day special effects and animation. Despite its age-old practice, single-frame filmmaking and stop-motion animation continue to influence media and culture with their magic. Current advances in technology and classic stop motion techniques, such as pixilation, time-lapse photography and downshooting, have combined to form exciting new approaches. Tom Gasek's Frame-By-Frame Stop Motion offers hands-on experience and various tricks, tips, and exercises to help strengthen skills and produce effective results. Interviews with experts in the field not only offer inspiration but also help readers learn how to apply skills to new applications. The companion website offers further instruction, recommended films, tools and resources for both the novice and the expert. Key features: interviews with industry experts that offer inspiration and insight as well as detailed explanations of the inner workings of non-traditional stop motion techniques, processes, and workflows; professional stop motion techniques that have been taught and refined in the classroom and applied to leading stop motion films exhibited at South by Southwest, Cannes, and more; exploration of stop motion opportunities beyond model rigs and puppetry, re-visualizing stop motion character movements, building downshooter rigs, and configuring digital workflows with After Effects tutorials while creating dynamic, creative and inspired stop motion films; new coverage of smartphones and their application in stop motion; and coverage of motion control, Dragonframe, the evolution of time-lapse, expanded light painting, DSLR cameras, and more.
Accurate Visual Metrology from Single and Multiple Uncalibrated Images presents novel techniques for constructing three-dimensional models from two-dimensional images using virtual reality tools. Antonio Criminisi develops the mathematical theory of computing world measurements from single images, and builds up a hierarchy of novel, flexible techniques to make measurements and reconstruct three-dimensional scenes from uncalibrated images, paying particular attention to the accuracy of the reconstruction.
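For the special case of measurements on a single world plane, the flavour of single-view metrology can be conveyed in a few lines: four image points with known world coordinates fix a plane-to-plane homography (estimated here with a direct linear transform), after which any other image point on that plane can be mapped to metric coordinates. This planar example is our own simplification; the book's machinery (vanishing points and lines, measurements off the reference plane, and accuracy analysis) goes well beyond it.

```python
import numpy as np

def homography_dlt(img_pts, world_pts):
    """Direct linear transform: estimate the 3x3 homography H with
    world ~ H * image from four (or more) point correspondences."""
    rows = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        rows.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        rows.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def to_world(H, pt):
    """Map an image point onto the world plane (homogeneous division)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

# Usage: an A4 sheet (0.210 m x 0.297 m) seen under perspective.
img_corners = [(100, 420), (520, 400), (560, 90), (80, 60)]       # pixels
world_corners = [(0, 0), (0.210, 0), (0.210, 0.297), (0, 0.297)]  # metres
H = homography_dlt(img_corners, world_corners)

p1, p2 = to_world(H, (300, 240)), to_world(H, (430, 250))
print("distance on the sheet (m):", np.linalg.norm(p1 - p2))
```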
Essential background reading for engineers and scientists working in such fields as communications, control, signal and image processing, radar and sonar, radio astronomy, seismology, remote sensing, and instrumentation. The book can be used as a textbook for a single course, for a combination of an introductory and an advanced course, or even for two separate courses, one in signal detection and the other in estimation.
Learn how and when to apply the latest phase and phase-difference modulation (PDM) techniques with this valuable guide for systems engineers and researchers. It helps you cut design time and fine-tune system performance.
One of the challenges facing professionals working in computer animation is keeping abreast of the latest developments and future trends - some of which are determined by industry, where the state of the art is continuously being re-defined by the latest computer-generated film special effects, while others arise from research projects whose results are quickly taken on board by programmers and animators working in industry. This handbook will be an invaluable toolkit for programmers, technical directors and professionals working in computer animation. A wide range of topics is covered, including computer games, evolutionary algorithms, shooting and live action, digital effects, cubic curves and surfaces, subdivision surfaces, and rendering and shading. Written by a team of experienced practitioners, each chapter provides a clear and precise overview of each area, reflecting the dynamic and fast-moving field of computer animation. This is a complete and up-to-date reference book on the state-of-the-art techniques used in computer animation.
The problem of structure and motion recovery from image sequences is an important theme in computer vision. Considerable progress has been made in this field during the past two decades, resulting in successful applications in robot navigation, augmented reality, industrial inspection, medical image analysis, and digital entertainment, among other areas. However, many of these methods work only for rigid objects and static scenes. The study of non-rigid structure from motion is not only of academic significance, but also has important practical applications in real-world, non-rigid or dynamic scenarios, such as human facial expressions and moving vehicles. This practical guide/reference provides a comprehensive overview of Euclidean structure and motion recovery, with a specific focus on factorization-based algorithms. The book discusses the latest research in this field, including the extension of the factorization algorithm to recover the structure of non-rigid objects, and presents some new algorithms developed by the authors. Readers require no significant knowledge of computer vision, although some background in projective geometry and matrix computation would be beneficial. Topics and features: presents the first systematic study of structure and motion recovery of both rigid and non-rigid objects from image sequences; discusses in depth the theory, techniques, and applications of rigid and non-rigid factorization methods in three-dimensional computer vision; examines numerous factorization algorithms, covering affine, perspective and quasi-perspective projection models; provides appendices describing the mathematical principles behind projective geometry, matrix decomposition, least squares, and nonlinear estimation techniques; includes chapter-ending review questions, and a glossary of terms used in the book. This unique text offers practical guidance on real applications and implementations of 3D modeling systems for practitioners in computer vision and pattern recognition, as well as serving as an invaluable source of new algorithms and methodologies for structure and motion recovery for graduate students and researchers.
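To show what a factorization-based method looks like at its core, the sketch below implements the rank-3 (affine/orthographic) factorization in its simplest form: centre the measurement matrix of tracked points, take an SVD, and split it into motion and shape factors. It omits the metric upgrade and the non-rigid and quasi-perspective extensions discussed in the book; the synthetic data and names are ours.

```python
import numpy as np

def affine_factorization(W):
    """Tomasi-Kanade style factorization.
    W is a 2F x P measurement matrix stacking the x- and y-coordinates of
    P points tracked over F frames.  Returns motion (2F x 3) and shape
    (3 x P) factors such that W_centred ~ motion @ shape, up to an affine
    ambiguity (a metric upgrade would resolve it)."""
    t = W.mean(axis=1, keepdims=True)      # per-row translation (centroid)
    Wc = W - t
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    motion = U[:, :3] * np.sqrt(s[:3])     # rank-3 truncation
    shape = np.sqrt(s[:3])[:, None] * Vt[:3]
    return motion, shape, t

# Usage with synthetic orthographic projections of a random rigid point cloud
rng = np.random.default_rng(1)
X = rng.normal(size=(3, 30))                          # 30 3-D points
frames = []
for _ in range(10):                                   # 10 views
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # random orthogonal matrix
    frames.append(R[:2] @ X)                          # orthographic projection
W = np.vstack(frames)                                 # 20 x 30
motion, shape, t = affine_factorization(W)
print("residual:", np.linalg.norm(W - t - motion @ shape))   # ~0 (rank 3)
```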
This book aims to capture recent advances in motion compensation for efficient video compression. It investigates linearly combined motion-compensated signals and generalizes the well-known superposition used for bidirectional prediction in B-pictures. The number of superimposed signals and the selection of reference pictures are important aspects of the discussion. The application-oriented part of the book applies this concept to the well-known ITU-T Recommendation H.263 and continues with the improvements obtained by superimposed motion-compensated signals for the emerging ITU-T Recommendation H.264 and ISO/IEC MPEG-4 (Part 10). In addition, it discusses a new approach to wavelet-based video coding, a technology currently being investigated by MPEG to develop a new video compression standard for the mid-term future.
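A minimal sketch of the central idea, linearly combining several motion-compensated signals, is given below: each reference frame is displaced by its motion vector and the displaced signals are averaged (two references with weights of one half recover classic bidirectional prediction for B-pictures). The whole-frame integer-translation model and uniform weights are our simplifications for illustration, not the book's formulation.

```python
import numpy as np

def displace(frame, mv):
    """Motion-compensate a whole frame by an integer displacement (dy, dx),
    replicating border pixels where the motion points outside the frame."""
    dy, dx = mv
    padded = np.pad(frame, ((abs(dy), abs(dy)), (abs(dx), abs(dx))), mode="edge")
    h, w = frame.shape
    return padded[abs(dy) + dy: abs(dy) + dy + h,
                  abs(dx) + dx: abs(dx) + dx + w]

def superimposed_prediction(references, motion_vectors, weights=None):
    """Linear combination of N motion-compensated reference signals.
    With two references and weights (0.5, 0.5) this reduces to the
    classic bidirectional prediction used for B-pictures."""
    if weights is None:
        weights = [1.0 / len(references)] * len(references)
    pred = np.zeros_like(references[0], dtype=float)
    for ref, mv, w in zip(references, motion_vectors, weights):
        pred += w * displace(ref, mv)
    return pred

# Usage: predict an intermediate frame from a past and a future reference
past = np.tile(np.arange(16.0), (16, 1))
future = np.roll(past, 2, axis=1)                 # scene shifted right by 2
pred = superimposed_prediction([past, future], [(0, -1), (0, 1)])
print(pred.shape)
```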
This book presents the main concepts in handling digital images of mixed content, traditionally referenced as mixed raster content (MRC), in two main parts. The first includes introductory chapters covering the scientific and technical background aspects, whereas the second presents a set of research and development approaches to tackle key issues in MRC segmentation, compression and transmission. The book starts with a review of color theory and the mechanism of color vision in humans. In turn, the second chapter reviews data coding and compression methods so as to set the background and demonstrate the complexity involved in dealing with MRC. Chapter three addresses the segmentation of images through an extensive literature review, which highlights the various approaches used to tackle MRC segmentation. The second part of the book focuses on the segmentation of color images for optimized compression, including multi-layered decomposition and representation of MRC and the processes that can be employed to optimize the coding rates of those different layers. Rounding out the coverage, the final chapter examines the segmentation of color images for optimized transmission.
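A toy illustration of the multi-layered MRC idea: a greyscale page is split into a binary mask layer (text/graphics), a foreground layer, and a background layer, each of which could then be compressed with a coder suited to its statistics. The crude global-mean threshold used here is a deliberate stand-in for the segmentation approaches surveyed in the book.

```python
import numpy as np

def mrc_decompose(page):
    """Split a greyscale page into the three classic MRC layers:
    a binary mask (True = text/graphics), a foreground layer carrying the
    values of the marked pixels, and a smooth background layer."""
    page = np.asarray(page, dtype=float)
    mask = page < page.mean()                      # dark pixels -> text/graphics
    fg_value = page[mask].mean() if mask.any() else 0.0
    bg_value = page[~mask].mean() if (~mask).any() else 255.0
    foreground = np.where(mask, page, fg_value)    # filled where mask is off
    background = np.where(mask, bg_value, page)    # filled where mask is on
    return mask, foreground, background

def mrc_recompose(mask, foreground, background):
    """Reassemble the page: take the foreground where the mask is set."""
    return np.where(mask, foreground, background)

# Usage: a light page with a dark "stroke"
page = np.full((8, 8), 230.0)
page[2:6, 3] = 20.0
mask, fg, bg = mrc_decompose(page)
print(np.allclose(mrc_recompose(mask, fg, bg), page))   # True
```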
Image Technology Design: A Perceptual Approach is an essential reference for both academic and professional researchers in the fields of image technology, image processing and coding, image display, and image quality. It bridges the gap between academic research on visual perception and image quality and applications of such research in the design of imaging systems.
Any task that involves decision-making can benefit from soft computing techniques which allow premature decisions to be deferred. The processing and analysis of images is no exception to this rule. In the classical image analysis paradigm, the first step is nearly always some sort of segmentation process in which the image is divided into (hopefully, meaningful) parts. It was pointed out nearly 30 years ago by Prewitt [1] that the decisions involved in image segmentation could be postponed by regarding the image parts as fuzzy, rather than crisp, subsets of the image. It was also realized very early that many basic properties of and operations on image subsets could be extended to fuzzy subsets; for example, the classic paper on fuzzy sets by Zadeh [2] discussed the "set algebra" of fuzzy sets (using sup for union and inf for intersection), and extended the definition of convexity to fuzzy sets. These and similar ideas allowed many of the methods of image analysis to be generalized to fuzzy image parts. For a recent review on the geometric description of fuzzy sets see, e.g., [3]. Fuzzy methods are also valuable in image processing and coding, where learning processes can be important in choosing the parameters of filters, quantizers, etc.
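The set algebra mentioned above (sup for union, inf for intersection) translates directly into code once an image region is represented by a membership map in [0, 1]; the sketch below also adds the standard complement and a soft "brightness" membership as an example of a fuzzy image subset. The sigmoid membership function is our illustrative choice.

```python
import numpy as np

def bright_membership(img, midpoint=128.0, width=32.0):
    """A fuzzy 'bright region' of an 8-bit image: a sigmoid membership
    in [0, 1] instead of a crisp threshold (illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(img, float) - midpoint) / width))

# Zadeh's set algebra on membership maps
fuzzy_union = np.maximum          # sup (pointwise max)
fuzzy_intersection = np.minimum   # inf (pointwise min)
fuzzy_complement = lambda mu: 1.0 - mu

# Usage: combine a "bright" subset with a graded "left half" subset
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 8))
bright = bright_membership(img)
left = np.tile(np.linspace(1.0, 0.0, 8), (4, 1))            # graded 'left half'
print(fuzzy_intersection(bright, left).round(2))             # bright AND left
print(fuzzy_union(fuzzy_complement(bright), left).round(2))  # dark OR left
```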
This book presents advances in biomedical imaging analysis and processing techniques using time-dependent medical image datasets for computer-aided diagnosis. The analysis of time-series images is one of the most widely occurring problems in science, engineering, and business. In recent years this problem has gained importance due to the increasing availability of more sensitive sensors in science and engineering, and due to the widespread use of computers in corporations, which has increased the amount of time-series data collected by many orders of magnitude. An important feature of this book is the exploration of different approaches to handling and identifying time-dependent biomedical images. Biomedical imaging analysis and processing techniques deal with the interaction between all forms of radiation and biological molecules, cells or tissues, to visualize small particles and opaque objects, and to achieve the recognition of biomedical patterns. These are topics of great importance to biomedical science, biology, and medicine. Biomedical imaging analysis techniques can be applied in many different areas to solve existing problems. The various requirements arising from the process of resolving practical problems motivate and expedite the development of biomedical imaging analysis. This is a major reason for the fast growth of the discipline.
The Twelfth International Workshop on Maximum Entropy and Bayesian Methods in Sciences and Engineering (MaxEnt 92) was held in Paris, France, at the Centre National de la Recherche Scientifique (CNRS), July 19-24, 1992. It is important to note that, since the workshop's creation in 1980 by some of the researchers of the physics department at the University of Wyoming in Laramie, this was only the second time that it took place in Europe; the first was in 1988 in Cambridge. The two distinguishing features of MaxEnt workshops are their spontaneous and informal character, which gives participants the opportunity to discuss easily and to build very fruitful scientific and personal relationships with one another. This year's organizers had set two main objectives: i) to have more participants from the European countries, and ii) to give special attention to maximum entropy and Bayesian methods in signal and image processing. We are happy to report that we achieved these objectives: i) we had about 100 participants, with more than 50 per cent from European countries, and ii) we received many papers on signal and image processing subjects and could dedicate a full day of the workshop to image modelling, restoration and reconstruction problems.
A key element of any modern video codec is the efficient exploitation of temporal redundancy via motion-compensated prediction. In this book, a novel paradigm of representing and employing motion information in a video compression system is described that has several advantages over existing approaches. Traditionally, motion is estimated, modelled, and coded as a vector field at the target frame it predicts. While this "prediction-centric" approach is convenient, the fact that the motion is "attached" to a specific target frame implies that it cannot easily be re-purposed to predict or synthesize other frames, which severely hampers temporal scalability. In light of this, the present book explores the possibility of anchoring motion at reference frames instead. Key to the success of the proposed "reference-based" anchoring schemes is high quality motion inference, which is enabled by the use of a more "physical" motion representation than the traditionally employed "block" motion fields. The resulting compression system can support computationally efficient, high-quality temporal motion inference, which requires half as many coded motion fields as conventional codecs. Furthermore, "features" beyond compressibility - including high scalability, accessibility, and "intrinsic" framerate upsampling - can be seamlessly supported. These features are becoming ever more relevant as the way video is consumed continues shifting from the traditional broadcast scenario to interactive browsing of video content over heterogeneous networks. This book is of interest to researchers and professionals working in multimedia signal processing, in particular those who are interested in next-generation video compression. Two comprehensive background chapters on scalable video compression and temporal frame interpolation make the book accessible for students and newcomers to the field.
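For readers new to the area, the conventional "prediction-centric" baseline that the book contrasts against can be sketched in a few lines: for each block of the target frame, search the reference frame for the best-matching block and keep the displacement. The exhaustive search, block size, and SAD criterion are textbook defaults, not the reference-anchored representation proposed in the book.

```python
import numpy as np

def block_matching(target, reference, block=8, search=4):
    """Estimate one motion vector per target block by exhaustive search:
    minimise the sum of absolute differences (SAD) over displacements
    within +/- search pixels.  Motion is anchored at the target frame."""
    h, w = target.shape
    mv = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            tgt = target[y:y + block, x:x + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ry, rx = y + dy, x + dx
                    if 0 <= ry <= h - block and 0 <= rx <= w - block:
                        ref = reference[ry:ry + block, rx:rx + block]
                        sad = np.abs(tgt - ref).sum()
                        if sad < best_sad:
                            best, best_sad = (dy, dx), sad
            mv[by, bx] = best
    return mv

# Usage: a frame shifted right by 3 pixels should yield mostly (0, -3) vectors
rng = np.random.default_rng(0)
reference = rng.random((32, 32))
target = np.roll(reference, 3, axis=1)
print(block_matching(target, reference)[1:-1, 1:-1, 1])   # mostly -3
```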
Nowadays, highly detailed animations of live-actor performances are increasingly easy to acquire, and 3D video has attracted considerable attention in visual media production. In this book, we address the problem of extracting or acquiring, and then reusing, non-rigid parametrization for video-based animations. At first sight, a crucial challenge is to reproduce plausible boneless deformations while preserving the global and local captured properties of dynamic surfaces with a limited number of controllable, flexible and reusable parameters. To solve this challenge, we rely directly on a skin-detached dimension reduction based on the well-known cage-based paradigm. First, we achieve Scalable Inverse Cage-based Modeling by transposing the inverse kinematics paradigm onto surfaces; thus, we introduce a cage inversion process with user-specified screen-space constraints. Secondly, we convert non-rigid animated surfaces into a sequence of optimal cage parameters via Cage-based Animation Conversion. Building upon this reskinning procedure, we also develop a well-formed Animation Cartoonization algorithm for multi-view data in terms of cage-based surface exaggeration and video-based appearance stylization. Thirdly, motivated by the relaxation of prior knowledge on the data, we propose a promising unsupervised approach to perform Iterative Cage-based Geometric Registration. This novel registration scheme deals with reconstructed target point clouds obtained from multi-view video recordings, in conjunction with a static and wrinkled template mesh. Above all, we demonstrate the strength of cage-based subspaces for reparametrizing highly non-rigid dynamic surfaces, without the need for secondary deformations. To the best of our knowledge, this book opens the field of Cage-based Performance Capture.
"Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Science and Engineering, Lanzhou University, China. |