Nowadays, highly detailed animations of live-actor performances are increasingly easy to acquire, and 3D video has attracted considerable attention in visual media production. In this book, we address the problem of acquiring and reusing non-rigid parametrizations for video-based animations. At first sight, a crucial challenge is to reproduce plausible boneless deformations while preserving the global and local captured properties of dynamic surfaces with a limited number of controllable, flexible and reusable parameters. To meet this challenge, we rely directly on a skin-detached dimension reduction offered by the well-known cage-based paradigm. First, we achieve Scalable Inverse Cage-based Modeling by transposing the inverse kinematics paradigm onto surfaces; to this end, we introduce a cage inversion process with user-specified screen-space constraints. Secondly, we convert non-rigid animated surfaces into a sequence of optimal cage parameters via Cage-based Animation Conversion. Building upon this reskinning procedure, we also develop a well-formed Animation Cartoonization algorithm for multi-view data, in terms of cage-based surface exaggeration and video-based appearance stylization. Thirdly, motivated by relaxing prior knowledge of the data, we propose a promising unsupervised approach to Iterative Cage-based Geometric Registration. This novel registration scheme deals with reconstructed target point clouds obtained from multi-view video recordings, in conjunction with a static, wrinkled template mesh. Above all, we demonstrate the strength of cage-based subspaces for reparametrizing highly non-rigid dynamic surfaces without the need for secondary deformations. To the best of our knowledge, this book opens the field of Cage-based Performance Capture.
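The cage-based paradigm the book relies on can be previewed in a few lines: each surface point is bound to a coarse cage through fixed generalized barycentric weights, so moving the handful of cage vertices deforms the whole embedded surface. The sketch below is a minimal illustration with hypothetical names and uniform weights for a point at the cage centre, not the book's actual coordinate scheme:

```python
import numpy as np

def deform(weights, cage):
    """weights: (n_points, n_cage) barycentric weights, rows summing to 1.
    cage: (n_cage, dim) current cage vertex positions.
    Each deformed point is a fixed weighted combination of cage vertices."""
    return weights @ cage

# A unit-square cage with a single embedded point at its centre.
cage_rest = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
weights = np.array([[0.25, 0.25, 0.25, 0.25]])

# Translating the whole cage by (2, 0) carries the embedded point along.
cage_posed = cage_rest + np.array([2., 0.])
print(deform(weights, cage_posed))  # [[2.5 0.5]]
```

In practice the weights come from generalized barycentric coordinates (e.g. mean value or harmonic coordinates), so arbitrary cage edits, not just translations, deform the surface smoothly.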
Appendices: A Mathematical Results (A.1 Singularities of the Displacement Error Covariance Matrix; A.2 A Class of Matrices and their Eigenvalues; A.3 Inverse of the Power Spectral Density Matrix; A.4 Power Spectral Density of a Frame); Glossary; References; Index. Preface: This book aims to capture recent advances in motion compensation for efficient video compression. It investigates linearly combined motion-compensated signals and generalizes the well-known superposition for bidirectional prediction in B-pictures. The number of superimposed signals and the selection of reference pictures are important aspects of the discussion. The application-oriented part of the book applies this concept to the well-known ITU-T Recommendation H.263 and continues with the improvements achieved by superimposed motion-compensated signals for the emerging ITU-T Recommendation H.264 and ISO/IEC MPEG-4 (Part 10). In addition, it discusses a new approach to wavelet-based video coding. This technology is currently being investigated by MPEG to develop a new video compression standard for the mid-term future.
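The superposition the preface refers to can be illustrated with a toy example: a B-picture block is predicted as an average of motion-compensated blocks drawn from several reference pictures. The function names and plain averaging weights below are hypothetical, a sketch of the principle rather than the codec machinery the book analyzes:

```python
import numpy as np

def mc_block(ref, y, x, dy, dx, size):
    """Fetch a size x size block from reference picture ref,
    displaced by the motion vector (dy, dx)."""
    return ref[y + dy:y + dy + size, x + dx:x + dx + size]

def superimposed_prediction(refs, mvs, y, x, size):
    """Average the motion-compensated blocks from several references."""
    blocks = [mc_block(r, y, x, dy, dx, size) for r, (dy, dx) in zip(refs, mvs)]
    return sum(blocks) / len(blocks)

# Two toy reference pictures; the second is a brightened copy of the first.
ref0 = np.arange(64, dtype=float).reshape(8, 8)
ref1 = ref0 + 2.0
pred = superimposed_prediction([ref0, ref1], [(0, 1), (0, -1)], 2, 2, 4)
print(pred[0, 0])  # 19.0
```

Real codecs additionally optimize the weights and motion vectors per block; averaging two displaced predictions already reduces the prediction error variance, which is the effect the book quantifies.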
Image Technology Design: A Perceptual Approach is an essential
reference for both academic and professional researchers in the
fields of image technology, image processing and coding, image
display, and image quality. It bridges the gap between academic
research on visual perception and image quality and applications of
such research in the design of imaging systems.
"Applications of Pulse-Coupled Neural Networks" explores the field of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these areas. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Science and Engineering, Lanzhou University, China.
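As a rough illustration of the model family the book covers, a heavily simplified PCNN iteration can be written in a few lines: each pixel neuron fires when its stimulus-driven activity exceeds a decaying dynamic threshold, firing raises that threshold sharply, and pulses from neighbours encourage linked pixels to fire in the same epoch, so homogeneous regions pulse together. The parameters and simplifications below are illustrative only, not the formulations used in the book:

```python
import numpy as np

def pcnn(image, steps=10, beta=0.2, decay=0.7, vtheta=20.0):
    """Record, per pixel, the epoch of the first pulse."""
    S = image.astype(float) / image.max()      # normalised stimulus
    Y = np.zeros_like(S)                       # pulses from the last epoch
    theta = np.ones_like(S)                    # dynamic threshold
    fire_time = np.full(S.shape, -1)
    for t in range(steps):
        # linking input: pulses in the 4-neighbourhood (wrap-around borders)
        L = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0) +
             np.roll(Y, 1, 1) + np.roll(Y, -1, 1))
        U = S * (1.0 + beta * L)               # internal activity
        Y = (U > theta).astype(float)          # pulse where activity wins
        fire_time[(Y > 0) & (fire_time < 0)] = t
        theta = decay * theta + vtheta * Y     # decay, then jump after firing
    return fire_time

# Two homogeneous regions: the brighter region pulses earlier.
img = np.vstack([np.full((2, 4), 255), np.full((2, 4), 51)]).astype(np.uint8)
print(pcnn(img))
```

The map of first-firing times acts as a coarse segmentation: pixels with similar intensity fire in the same epoch, which is the property PCNN-based filtering and segmentation methods exploit.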
The Twelfth International Workshop on Maximum Entropy and Bayesian Methods in Sciences and Engineering (MaxEnt 92) was held in Paris, France, at the Centre National de la Recherche Scientifique (CNRS), July 19-24, 1992. It is important to note that, since its creation in 1980 by researchers of the physics department at the University of Wyoming in Laramie, this was only the second time that the workshop took place in Europe; the first was in 1988 in Cambridge. The two specificities of MaxEnt workshops are their spontaneous and informal character, which gives participants the opportunity to discuss easily and to build fruitful scientific relationships and friendships with each other. This year's organizers had set two main objectives: i) to have more participants from European countries, and ii) to give special attention to maximum entropy and Bayesian methods in signal and image processing. We are happy to see that we achieved these objectives: i) we had about 100 participants, more than 50 per cent of them from European countries; ii) we received many papers on signal and image processing subjects and could dedicate a full day of the workshop to image modelling, restoration and reconstruction problems.
Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition away from understanding single images to analyzing image sequences, or video understanding. Video understanding deals with understanding of video sequences, e.g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic-looking images and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting are derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis.
The fields of image analysis, computer vision, and artificial intelligence all make use of descriptions of shape in grey-level images. Most existing algorithms for the automatic recognition and classification of particular shapes have been developed for specific purposes, with the result that these methods are often restricted in their application. The use of advanced and theoretically well-founded mathematical methods should lead to the construction of robust shape descriptors having more general application. Shape description can be regarded as a meeting point of vision research, mathematics, computing science, and the application fields of image analysis, computer vision, and artificial intelligence. The NATO Advanced Research Workshop "Shape in Picture" was organised with a twofold objective: first, it should provide all participants with an overview of relevant developments in these different disciplines; second, it should stimulate researchers to exchange original results and ideas across the boundaries of these disciplines. This book comprises a widely drawn selection of papers presented at the workshop, and many contributions have been revised to reflect further progress in the field. The focus of this collection is on mathematical approaches to the construction of shape descriptions from grey-level images. The book is divided into five parts, each devoted to a different discipline. Each part contains papers that have tutorial sections; these are intended to assist the reader in becoming acquainted with the variety of approaches to the problem.
This textbook takes a case study approach to media and audience analytics. Realizing the best way to understand analytics in the digital age is to practice it, the authors have created a collection of cases using data sets that present real and hypothetical scenarios for students to work through. Media Analytics introduces the key principles of media economics and management. It outlines how to interpret and present results, the principles of data visualization and storytelling and the basics of research design and sampling. Although shifting technology makes measurement and analytics a dynamic space, this book takes an evergreen, conceptual approach, reminding students to focus on the principles and foundations that will remain constant. Aimed at upper-level students in the fast-growing area of media analytics in a cross-platform world, students using this text will learn how to find the stories in the data and to present those stories in an engaging way to others. Instructor and Student Resources include an Instructor's Manual, discussion questions, short exercises and links to additional resources. They are available online at www.routledge.com/cw/hollifield.
Managing and Mining Graph Data is a comprehensive survey book in
graph management and mining. It contains extensive surveys on a
variety of important graph topics such as graph languages,
indexing, clustering, data generation, pattern mining,
classification, keyword search, pattern matching, and privacy. It
also studies a number of domain-specific scenarios such as stream
mining, web graphs, social networks, chemical and biological data.
The chapters are written by well known researchers in the field,
and provide a broad perspective of the area. This is the first
comprehensive survey book in the emerging topic of graph data
processing.
This book provides a solid and uniform derivation of the various properties Bézier and B-spline representations have, and shows the beauty of the underlying rich mathematical structure. The book focuses on the core concepts of Computer Aided Geometric Design with the intention to give a clear and illustrative presentation of the basic principles, as well as a treatment of advanced material including multivariate splines, some subdivision techniques and constructions of free-form surfaces with arbitrary smoothness. The text is beautifully illustrated with many excellent figures to emphasize the geometric constructive approach of this book.
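The constructive flavour of this material can be conveyed by the de Casteljau algorithm, the classical recursive-interpolation scheme underlying Bézier representations. This is a minimal sketch of that standard algorithm, not code from the book:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated
    linear interpolation of its control points."""
    pts = [tuple(float(c) for c in p) for p in control_points]
    while len(pts) > 1:
        # replace each adjacent pair by its interpolation at t
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A quadratic Bezier with control points (0,0), (1,2), (2,0)
# passes through (1, 1) at t = 0.5.
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # (1.0, 1.0)
```

The same repeated-interpolation idea generalizes to B-splines (de Boor's algorithm) and to the subdivision constructions treated later in the book.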
The field of Intelligent Systems has expanded enormously during the last two decades, with many theoretical and practical results already available which are the outcome of the synergetic merging of classical fields such as system theory, artificial intelligence, information theory, soft computing, operations research, linguistic theory and others. This book presents a collection of timely contributions that cover a wide, well-selected range of topics within the field. The book contains forty-seven contributions with an emphasis on computational and processing issues. The book is structured in four parts, as follows: Part I: Computer-aided intelligent systems and tools; Part II: Information extraction from texts, natural language interfaces and intelligent retrieval systems; Part III: Image processing and video-based systems; Part IV: Applications. Particular topics treated include: planning; problem solving; information extraction from texts; natural language interfaces; audio retrieval systems; multi-agent systems; image compression; image segmentation; and human face recognition. Applications include: peri-urban road network extraction; analysis of structures; climatic sensor signal analysis; aortic pressure assessment; hospital laboratory planning; fatigue analysis using electromyographic signals; and forecasting in power systems. The book can serve as a reference pool of knowledge that may inspire and motivate researchers and practitioners towards further developments and modern-day applications. Teachers and students in related postgraduate and research programs can thereby save considerable time in searching the scattered literature in the field.
Go behind the scenes to learn how the business of producing the dazzling visual effects we see in movies and on TV works. With decades of combined VFX production and supervisory experience in Hollywood, the authors share their experience with you, illuminating standard industry practices and offering tips on: * preproduction planning * scheduling * budgeting * evaluating vendors and the bidding process * effective data management * working on-set, off-set, or overseas * dealing with changes in post-production * legal issues (contracts, insurance, business ethics), and more. Also included are interviews with established, successful Hollywood VFX Producers about their career paths and how they got to where they are now. From pre-production to final delivery, this is your complete guide to visual effects production, providing insight on VFX budgeting and scheduling (with actual forms for your own use) and common production techniques such as motion control, miniatures, and pre-visualization. Also included is a companion website (www.focalpress.com/cw/finance-9780240812632) with forms and documents for you to incorporate into your own VFX production workflows.
A continuation of 1994's groundbreaking Cartoons, Giannalberto Bendazzi's Animation: A World History is the largest, deepest, most comprehensive text of its kind, based on the idea that animation is an art form that deserves its own place in scholarship. Bendazzi delves beyond just Disney, offering readers glimpses into the animation of Russia, Africa, Latin America, and other often-neglected areas and introducing over fifty previously undiscovered artists. Full of first-hand, never-before-investigated, and elsewhere unavailable information, Animation: A World History encompasses the history of animation production on every continent over the span of three centuries. Volume I traces the roots and predecessors of modern animation, the history behind Emile Cohl's Fantasmagorie, and twenty years of silent animated films. Encompassing the formative years of the art form through its Golden Age, this book accounts for animation history through 1950 and covers everything from well-known classics like Steamboat Willie to animation in Egypt and Nazi Germany. With a wealth of new research, hundreds of photographs and film stills, and an easy-to-navigate organization, this book is essential reading for all serious students of animation history. Key features: over 200 high-quality head shots and film stills to add visual reference to your research; detailed information on hundreds of never-before-researched animators and films; coverage of animation from more than 90 countries and every major region of the world; and chronological and geographical organization for quick access to the information you're looking for.
Effective Polynomial Computation is an introduction to the algorithms of computer algebra. It discusses the basic algorithms for manipulating polynomials including factoring polynomials. These algorithms are discussed from both a theoretical and practical perspective. Those cases where theoretically optimal algorithms are inappropriate are discussed and the practical alternatives are explained. Effective Polynomial Computation provides much of the mathematical motivation of the algorithms discussed to help the reader appreciate the mathematical mechanisms underlying the algorithms, and so that the algorithms will not appear to be constructed out of whole cloth. Preparatory to the discussion of algorithms for polynomials, the first third of this book discusses related issues in elementary number theory. These results are either used in later algorithms (e.g. the discussion of lattices and Diophantine approximation), or analogs of the number theoretic algorithms are used for polynomial problems (e.g. Euclidean algorithm and p-adic numbers). Among the unique features of Effective Polynomial Computation is the detailed material on greatest common divisor and factoring algorithms for sparse multivariate polynomials. In addition, both deterministic and probabilistic algorithms for irreducibility testing of polynomials are discussed.
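As a taste of the subject matter, here is a minimal sketch of the Euclidean algorithm over univariate polynomials with floating-point coefficients (represented as coefficient lists, highest degree first); the book's GCD algorithms for sparse multivariate polynomials are of course far more sophisticated, and the names and tolerance below are illustrative:

```python
def poly_divmod(a, b):
    """Divide polynomial a by b; coefficients highest degree first."""
    a = [float(x) for x in a]
    b = [float(x) for x in b]
    q = [0.0] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b):
        c = a[0] / b[0]                      # leading-coefficient quotient
        q[len(q) - (len(a) - len(b)) - 1] = c
        # subtract c * b shifted to the leading position; the top term cancels
        a = [x - c * y
             for x, y in zip(a, b + [0.0] * (len(a) - len(b)))][1:]
    return q, (a or [0.0])

def poly_gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while any(abs(x) > 1e-9 for x in b):
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [x / a[0] for x in a]             # normalise to a monic result

# gcd(x^2 - 1, x - 1) = x - 1
print(poly_gcd([1, 0, -1], [1, -1]))  # [1.0, -1.0]
```

Over the rationals this naive remainder sequence suffers severe coefficient growth, which is precisely the kind of practical obstacle that motivates the modular and sparse techniques the book develops.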
Multimedia Cartography provides a contemporary overview of the issues related to multimedia cartography and the design and production elements that are unique to this area of mapping. The book has been written for professional cartographers interested in moving into multimedia mapping; for cartographers already involved in producing multimedia titles who wish to discover the approaches that other practitioners in multimedia cartography have taken; and for students and academics in the mapping sciences and related geographical fields wishing to update their knowledge of current issues related to cartographic design and production. It provides a new approach to cartography, one based on the exploitation of the many rich media components and the avant-garde approach that multimedia offers.
A resource like no other: the first comprehensive guide to phase unwrapping. Phase unwrapping is a mathematical problem-solving technique increasingly used in synthetic aperture radar (SAR) interferometry, optical interferometry, adaptive optics, and medical imaging. In Two-Dimensional Phase Unwrapping, two internationally recognized experts sort through the multitude of ideas and algorithms cluttering current research, explain clearly how to solve phase unwrapping problems, and provide practicable algorithms that can be applied to problems encountered in diverse disciplines. The book is complete with case studies and examples, as well as hundreds of images and figures illustrating the concepts.
Two-Dimensional Phase Unwrapping skillfully integrates concepts, algorithms, software, and examples into a powerful benchmark against which new ideas and algorithms for phase unwrapping can be tested. This unique introduction to a dynamic, rapidly evolving field is essential for professionals and graduate students in SAR interferometry, optical interferometry, adaptive optics, and magnetic resonance imaging (MRI).
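The core difficulty the book addresses can be previewed in one dimension, where Itoh's classical method suffices: whenever the wrapped phase jumps by more than pi between neighbouring samples, add the multiple of 2*pi that restores continuity. The sketch below is illustrative only; the two-dimensional problem treated in the book is far harder, because such local corrections can conflict along different paths:

```python
import math

def unwrap_1d(phases):
    """Itoh's method: force each successive difference into (-pi, pi]."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # wrap the jump
        out.append(out[-1] + d)
    return out

# A linearly increasing true phase, observed only modulo 2*pi.
true_phase = [0.5 * k for k in range(15)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap_1d(wrapped)
print(max(abs(r - t) for r, t in zip(recovered, true_phase)))  # ~0
```

The method assumes the true phase changes by less than pi per sample; noise or undersampling violates that assumption, which is where the residue-based and quality-guided 2-D algorithms surveyed in the book come in.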
Looking to become more efficient using Unity? How to Cheat in Unity 5 takes a no-nonsense approach to help you achieve fast and effective results with Unity 5. Geared towards the intermediate user, HTC in Unity 5 provides content beyond what an introductory book offers, and allows you to work more quickly and powerfully in Unity. Packed with easy-to-follow methods to get the most from Unity, this book explores time-saving features for interface customization and scene management, along with productivity-enhancing ways to work with rendering and optimization. In addition, this book features a companion website at www.alanthorn.net, where you can download the book's companion files and also watch bonus tutorial video content. Learn bite-sized tips and tricks for effective Unity workflows; become a more powerful Unity user through interface customization; enhance your productivity with rendering tricks and better scene organization; better understand Unity asset and import workflows; and learn techniques to save you time and money during development.
An up-to-date, comprehensive review of surveillance and reconnaissance (S&R) imaging system modelling and performance prediction. This resource helps the reader predict the information potential of new surveillance system designs, compare and select from alternative measures of information extraction, relate the performance of tactical acquisition sensors and surveillance sensors, and understand the relative importance of each element of the image chain on S&R system performance. It provides system descriptions and characteristics, S&R modelling history, and performance modelling details. With an emphasis on validated prediction of human observer performance, this book addresses the specific design and analysis techniques used with today's S&R imaging systems. It offers in-depth discussions on everything from the conceptual performance prediction model, linear shift invariant systems, and measurement variables used for S&R information extraction to predictor variables, target and environmental considerations, CRT and flat panel display selection, and models for image processing. Conversion methods between alternative modelling approaches are examined to help the reader perform system comparisons.
This textbook covers the theoretical backgrounds and practical aspects of image, video and audio feature expression, e.g., color, texture, edge, shape, salient point and area, motion, 3D structure, audio/sound in time, frequency and cepstral domains, structure and melody. Up-to-date algorithms for estimation, search, classification and compact expression of feature data are described in detail. Concepts of signal decomposition (such as segmentation, source tracking and separation), as well as composition, mixing, effects, and rendering, are discussed. Numerous figures and examples help to illustrate the aspects covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by problem-solving exercises. The book is also a self-contained introduction both for researchers and developers of multimedia content analysis systems in industry.
Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
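One widely used notion of similarity for comparing time-varying data streams, such as motion or audio features, is dynamic time warping (DTW). The minimal sketch below illustrates that general idea, not the monograph's specific algorithms:

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Minimum-cost alignment of two sequences under monotone warping."""
    inf = float('inf')
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = dist(a[i - 1], b[j - 1]) + min(
                cost[i - 1][j],      # advance in a only
                cost[i][j - 1],      # advance in b only
                cost[i - 1][j - 1])  # advance in both
    return cost[n][m]

# A time-stretched copy aligns perfectly: DTW distance 0.
print(dtw([1, 2, 3], [1, 1, 2, 2, 3]))  # 0.0
```

Because the alignment tolerates local speed variations, two performances of the same motion at different tempos compare as similar, which is exactly the invariance retrieval systems need.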
This book presents various video processing methodologies that are useful for distance education. The motivation is to devise new multimedia technologies that are suitable for better representation of instructional videos by exploiting the temporal redundancies present in the original video. This addresses many of the issues related to the memory and bandwidth limitations of lecture videos. The various methods described in the book focus on a key-frame based approach, which is used to time-shrink, repackage and retarget instructional videos. All the methods require a preprocessing step of shot detection and recognition, which is presented in a separate chapter. Frames that are well written and distinct are selected as key-frames. A super-resolution based image enhancement scheme is suggested for refining the key-frames for better legibility. These key-frames, together with the audio and metadata for the mutual linkage among the various media components, form a repackaged lecture video which, on programmed playback, renders an estimate of the original video in a substantially compressed form. The book also presents a legibility-retentive retargeting of this instructional media to mobile devices with limited display size. All these technologies contribute to enhancing the outreach of distance education programs. Distance education is now a big business with an annual turnover of 10-12 billion dollars, and we expect this to increase rapidly. Use of the proposed technology will help deliver educational videos to those with limited network bandwidth, and to those on the move, by delivering content effectively to mobile handsets (including PDAs). Thus, technology developers, practitioners, and content providers will find the material very useful.
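The key-frame selection step can be caricatured in a few lines: after shot detection, retain a frame whenever it differs sufficiently from the last retained key-frame. The function name and threshold below are hypothetical; real systems, like the one in the book, also assess how well written and legible a candidate frame is:

```python
import numpy as np

def select_key_frames(frames, threshold=10.0):
    """Keep frame 0, then every frame whose mean absolute pixel
    difference from the last kept key-frame exceeds the threshold."""
    keys = [0]
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float)
                              - frames[keys[-1]].astype(float)))
        if diff > threshold:
            keys.append(i)
    return keys

# Three board states: blank, half written, fully written.
blank = np.zeros((4, 4), dtype=np.uint8)
half = blank.copy()
half[:2] = 200
full = np.full((4, 4), 200, dtype=np.uint8)
frames = [blank, blank, half, half, full]
print(select_key_frames(frames))  # [0, 2, 4]
```

Storing only the selected key-frames plus the audio track, instead of every frame, is what yields the large compression the book reports for lecture content, where the visual channel changes slowly.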
Get a broad overview of the different modalities of immersive video technologies, from omnidirectional video to light fields and volumetric video, from a multimedia processing perspective. From capture to representation, coding, and display, video technologies have been evolving significantly and in many different directions over the last few decades, with the ultimate goal of providing a truly immersive experience to users. After setting up a common background for these technologies, based on the theoretical concept of the plenoptic function, Immersive Video Technologies offers a comprehensive overview of the leading technologies enabling visual immersion, including omnidirectional (360-degree) video, light fields, and volumetric video. Following the critical components of the typical content production and delivery pipeline, the book presents acquisition, representation, coding, rendering, and quality assessment approaches for each immersive video modality. The text also reviews current standardization efforts and explores new research directions. With this book the reader will a) gain a broad understanding of immersive video technologies that use three different modalities: omnidirectional video, light fields, and volumetric video; b) learn about the most recent scientific results in the field, including recent learning-based methodologies; and c) understand the challenges and perspectives for immersive video technologies.