"Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Science and Engineering, Lanzhou University, China.
The fields of image analysis, computer vision, and artificial intelligence all make use of descriptions of shape in grey-level images. Most existing algorithms for the automatic recognition and classification of particular shapes have been developed for specific purposes, with the result that these methods are often restricted in their application. The use of advanced and theoretically well-founded mathematical methods should lead to the construction of robust shape descriptors having more general application. Shape description can be regarded as a meeting point of vision research, mathematics, computing science, and the application fields of image analysis, computer vision, and artificial intelligence. The NATO Advanced Research Workshop "Shape in Picture" was organised with a twofold objective: first, it should provide all participants with an overview of relevant developments in these different disciplines; second, it should stimulate researchers to exchange original results and ideas across the boundaries of these disciplines. This book comprises a widely drawn selection of papers presented at the workshop, and many contributions have been revised to reflect further progress in the field. The focus of this collection is on mathematical approaches to the construction of shape descriptions from grey-level images. The book is divided into five parts, each devoted to a different discipline. Each part contains papers that have tutorial sections; these are intended to assist the reader in becoming acquainted with the variety of approaches to the problem.
This book provides a solid and uniform derivation of the various properties Bézier and B-spline representations have, and shows the beauty of the underlying rich mathematical structure. The book focuses on the core concepts of Computer Aided Geometric Design with the intention to give a clear and illustrative presentation of the basic principles, as well as a treatment of advanced material including multivariate splines, some subdivision techniques and constructions of free-form surfaces with arbitrary smoothness. The text is beautifully illustrated with many excellent figures to emphasize the geometric constructive approach of this book.
The launch of Microsoft's Kinect, the first high-resolution depth-sensing camera for the consumer market, generated considerable excitement not only among computer gamers, but also within the global community of computer vision researchers. The potential of consumer depth cameras extends well beyond entertainment and gaming, to real-world commercial applications such as virtual fitting rooms, training for athletes, and assistance for the elderly. This authoritative text/reference reviews the scope and impact of this rapidly growing field, describing the most promising Kinect-based research activities, discussing significant current challenges, and showcasing exciting applications. Topics and features: presents contributions from an international selection of preeminent authorities in their fields, from both academic and corporate research; addresses the classic problem of multi-view geometry of how to correlate images from different viewpoints to simultaneously estimate camera poses and world points; examines human pose estimation using video-rate depth images for gaming, motion capture, 3D human body scans, and hand pose recognition for sign language parsing; provides a review of approaches to various recognition problems, including category and instance learning of objects, and human activity recognition; with a Foreword by Dr. Jamie Shotton of Microsoft Research, Cambridge, UK. This broad-ranging overview is a must-read for researchers and graduate students of computer vision and robotics wishing to learn more about the state of the art of this increasingly hot topic.
Packed with more than 350 techniques, this book delivers what you need to know, on the spot. Its concise presentation of professional techniques is suited to experienced artists, whether you are:
* Migrating from another visual effects application
* Upgrading to Houdini 9
* Seeking a handy reference to raise your proficiency with Houdini
Houdini On the Spot presents immediate solutions in an accessible format. It clearly illustrates the essential methods that pros use to get the job done efficiently and creatively. Screenshots and step-by-step instructions show you how to:
* Navigate and manipulate the version 9 interface
* Create procedural models that can be modified quickly and efficiently with Surface Operators (SOPs)
* Use Particle Operators (POPs) to build complex simulations with speed and precision
* Minimize the number of operators in your simulations with Dynamics Operators (DOPs)
* Extend Houdini with customized tools that include data or scripts with Houdini Digital Assets (HDAs)
* Master the version 9 rendering options, including Physically Based Rendering (PBR), volume rendering and motion blur
* Quickly modify timing, geometry, space and rotational values of your animations with Channel Operators (CHOPs)
* Create and manipulate elements with Composite Operators (COPs), Houdini's full-blown compositor toolset
* Make your own SOPs, COPs, POPs, CHOPs, and shaders with the Vector Expressions (VEX) shading language
* Configure the Houdini interface with customized environments and hotkeys
* Mine the treasures of the dozens of standalone applications that are bundled with Houdini
The field of Intelligent Systems has expanded enormously during the last two decades with many theoretical and practical results already available, which are the outcome of the synergetic merging of classical fields such as system theory, artificial intelligence, information theory, soft computing, operations research, linguistic theory and others. This book presents a collection of timely contributions that cover a wide, well-selected range of topics within the field. The book contains forty-seven contributions with an emphasis on computational and processing issues. The book is structured in four parts, as follows: Part I: Computer-aided intelligent systems and tools; Part II: Information extraction from texts, natural language interfaces and intelligent retrieval systems; Part III: Image processing and video-based systems; Part IV: Applications. Particular topics treated include: planning; problem solving; information extraction from texts; natural language interfaces; audio retrieval systems; multi-agent systems; image compression; image segmentation; and human face recognition. Applications include: peri-urban road network extraction; analysis of structures; climatic sensor signal analysis; aortic pressure assessment; hospital laboratory planning; fatigue analysis using electromyographic signals; forecasting in power systems. The book can serve as a reference pool of knowledge that may inspire and motivate researchers and practitioners for further developments and modern-day applications. The teacher and student in related postgraduate and research programs can thereby save considerable time in searching the scattered literature in the field.
Multimedia Cartography provides a contemporary overview of the issues related to multimedia cartography and the design and production elements that are unique to this area of mapping. The book has been written for professional cartographers interested in moving into multimedia mapping, for cartographers already involved in producing multimedia titles who wish to discover the approaches that other practitioners in multimedia cartography have taken, and for students and academics in the mapping sciences and related geographical fields wishing to update their knowledge about current issues related to cartographic design and production. It provides a new approach to cartography, one based on the exploitation of the many rich media components and the avant-garde approach that multimedia offers.
Effective Polynomial Computation is an introduction to the algorithms of computer algebra. It discusses the basic algorithms for manipulating polynomials including factoring polynomials. These algorithms are discussed from both a theoretical and practical perspective. Those cases where theoretically optimal algorithms are inappropriate are discussed and the practical alternatives are explained. Effective Polynomial Computation provides much of the mathematical motivation of the algorithms discussed to help the reader appreciate the mathematical mechanisms underlying the algorithms, and so that the algorithms will not appear to be constructed out of whole cloth. Preparatory to the discussion of algorithms for polynomials, the first third of this book discusses related issues in elementary number theory. These results are either used in later algorithms (e.g. the discussion of lattices and Diophantine approximation), or analogs of the number theoretic algorithms are used for polynomial problems (e.g. Euclidean algorithm and p-adic numbers). Among the unique features of Effective Polynomial Computation is the detailed material on greatest common divisor and factoring algorithms for sparse multivariate polynomials. In addition, both deterministic and probabilistic algorithms for irreducibility testing of polynomials are discussed.
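As a hedged illustration of the Euclidean algorithm for polynomials that the blurb above mentions, here is a minimal numerical sketch; the coefficient-list convention (highest degree first), the function names, and the tolerance `eps` are my own choices, not taken from the book:

```python
def poly_divmod(num, den):
    """Divide polynomials given as coefficient lists, highest degree first.
    Returns (quotient, remainder)."""
    num = num[:]
    quot = []
    while len(num) >= len(den):
        # Leading-coefficient ratio gives the next quotient term.
        coef = num[0] / den[0]
        quot.append(coef)
        # Subtract coef * den, aligned with num's leading term.
        for i in range(len(den)):
            num[i] -= coef * den[i]
        num.pop(0)  # leading term is now (numerically) zero
    return quot, num

def poly_gcd(a, b, eps=1e-9):
    """Euclidean algorithm on polynomials: gcd(a, b) = gcd(b, a mod b)."""
    while any(abs(c) > eps for c in b):
        _, r = poly_divmod(a, b)
        # Strip (numerically) zero leading coefficients of the remainder.
        while r and abs(r[0]) <= eps:
            r.pop(0)
        a, b = b, r
    # Normalize the result to a monic polynomial.
    return [c / a[0] for c in a]

# Example: gcd(x^2 - 1, x^2 + 2x + 1) should be the monic x + 1.
print(poly_gcd([1, 0, -1], [1, 2, 1]))
```

Note that floating-point coefficient growth is exactly the kind of practical difficulty that motivates the sparse and modular GCD algorithms the book discusses; this sketch is the textbook baseline, not a practical computer algebra routine.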
Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
A resource like no other: the first comprehensive guide to phase unwrapping. Phase unwrapping is a mathematical problem-solving technique increasingly used in synthetic aperture radar (SAR) interferometry, optical interferometry, adaptive optics, and medical imaging. In Two-Dimensional Phase Unwrapping, two internationally recognized experts sort through the multitude of ideas and algorithms cluttering current research, explain clearly how to solve phase unwrapping problems, and provide practicable algorithms that can be applied to problems encountered in diverse disciplines. The book is complete with case studies and examples, as well as hundreds of images and figures illustrating the concepts.
Two-Dimensional Phase Unwrapping skillfully integrates concepts, algorithms, software, and examples into a powerful benchmark against which new ideas and algorithms for phase unwrapping can be tested. This unique introduction to a dynamic, rapidly evolving field is essential for professionals and graduate students in SAR interferometry, optical interferometry, adaptive optics, and magnetic resonance imaging (MRI).
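The book treats the hard two-dimensional problem; purely as an illustration of the underlying idea, the one-dimensional case (often attributed to Itoh) can be sketched as follows. The function name and the jump threshold of pi are illustrative assumptions, not code from the book:

```python
import math

def unwrap_1d(phases):
    """1D phase unwrapping: wherever the jump between successive wrapped
    samples exceeds pi in magnitude, add or subtract a multiple of 2*pi
    so that successive differences stay within (-pi, pi]."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2 * math.pi
        elif d < -math.pi:
            offset += 2 * math.pi
        out.append(cur + offset)
    return out

# A linear phase ramp, wrapped into (-pi, pi], is recovered exactly.
wrapped = [math.atan2(math.sin(t), math.cos(t)) for t in range(8)]
print(unwrap_1d(wrapped))
```

In two dimensions this simple path-following idea breaks down in the presence of noise and residues, which is precisely why the branch-cut, quality-guided, and minimum-norm algorithms the book covers are needed.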
An up-to-date, comprehensive review of surveillance and reconnaissance (S&R) imaging system modelling and performance prediction. This resource helps the reader predict the information potential of new surveillance system designs, compare and select from alternative measures of information extraction, relate the performance of tactical acquisition sensors and surveillance sensors, and understand the relative importance of each element of the image chain on S&R system performance. It provides system descriptions and characteristics, S&R modelling history, and performance modelling details. With an emphasis on validated prediction of human observer performance, this book addresses the specific design and analysis techniques used with today's S&R imaging systems. It offers in-depth discussions on everything from the conceptual performance prediction model, linear shift invariant systems, and measurement variables used for S&R information extraction to predictor variables, target and environmental considerations, CRT and flat panel display selection, and models for image processing. Conversion methods between alternative modelling approaches are examined to help the reader perform system comparisons.
The book includes insights that reflect the advances in the field of the Internet of Things from upcoming researchers and leading academicians across the globe. It contains the high-quality peer-reviewed papers of the International Conference on Internet of Things for Technological Development (IoT4TD 2017), held at Kadi Sarva Vishvavidyalaya, Gandhinagar, Gujarat, India during April 1-2, 2017. The book covers a variety of topics such as the Internet of Things, intelligent image processing, networks and mobile communications, and big data and cloud computing. The book helps prospective readers from the computer industry and academia to derive the advances of next-generation communication and computational technology and shape them into real-life applications.
This book presents various video processing methodologies that are useful for distance education. The motivation is to devise new multimedia technologies that are suitable for better representation of instructional videos by exploiting the temporal redundancies present in the original video. This addresses many of the issues related to the memory and bandwidth limitations of lecture videos. The various methods described in the book focus on a key-frame based approach which is used to time-shrink, repackage and retarget instructional videos. All the methods need a preprocessing step of shot detection and recognition, which is treated in a separate chapter. Frames that are well-written and distinct are selected as key-frames. A super-resolution based image enhancement scheme is suggested for refining the key-frames for better legibility. These key-frames, along with the audio and meta-data for the mutual linkage among the various media components, form a repackaged lecture video which, on programmed playback, renders an estimate of the original video in a substantially compressed form. The book also presents a legibility-retentive retargeting of this instructional media on mobile devices with limited display size. All these technologies contribute to the enhancement of the outreach of distance education programs. Distance education is now a big business with an annual turnover of over 10-12 billion dollars, and we expect this to increase rapidly. Use of the proposed technology will help deliver educational videos to those who are less endowed in terms of network bandwidth availability, and to those on the move, by delivering the content effectively to mobile handsets (including PDAs). Thus, technology developers, practitioners, and content providers will find the material very useful.
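As a toy sketch of the shot-detection and key-frame-selection preprocessing described above (the mean-absolute-difference cut detector, the threshold value, and the middle-frame heuristic are illustrative assumptions, not the book's actual method):

```python
def shot_boundaries(frames, threshold=0.5):
    """Flag a shot boundary wherever the mean absolute difference between
    consecutive frames (flat lists of grayscale values in [0, 1]) exceeds
    a threshold. Returns the indices at which a new shot begins."""
    cuts = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
        if diff / len(frames[i]) > threshold:
            cuts.append(i)
    return cuts

def key_frames(frames, cuts):
    """Pick one representative frame per shot (here simply the middle one;
    a real system would score frames for legibility and distinctness)."""
    starts = [0] + cuts
    ends = cuts + [len(frames)]
    return [(s + e) // 2 for s, e in zip(starts, ends)]

# Six tiny 4-pixel frames: three dark, then three bright -> one cut at 3.
frames = [[0.0] * 4] * 3 + [[1.0] * 4] * 3
cuts = shot_boundaries(frames)
print(cuts, key_frames(frames, cuts))
```

The repackaged video would then pair each selected key-frame with the audio track and linkage metadata, which is where the real compression gain described in the book comes from.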
The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of massive parallel computing devices. The book will provide attractive reading for a general audience because it has do-it-yourself appeal: all the computer experiments presented within it can be implemented with minimal knowledge of programming. The simplicity yet substantial functionality of the cellular automaton approach, and the transparency of the algorithms proposed, makes the text ideal supplementary reading for courses on image processing, parallel computing, automata theory and applications.
This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the future Internet. The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user's context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of consistent video quality to fixed and mobile users. ROMEO will present hybrid networking solutions that combine the DVB-T2 and DVB-NGH broadcast access network technologies together with a QoE aware Peer-to-Peer (P2P) distribution system that operates over wired and wireless links. Live streaming 3D media needs to be received by collaborating users at the same time or with imperceptible delay to enable them to watch together while exchanging comments as if they were all in the same location. This book is the last of a series of three annual volumes devoted to the latest results of the FP7 European Project ROMEO. The present volume provides state-of-the-art information on 3D multi-view video, spatial audio networking protocols for 3D media, P2P 3D media streaming, and 3D Media delivery across heterogeneous wireless networks among other topics. Graduate students and professionals in electrical engineering and computer science with an interest in 3D Future Internet Media will find this volume to be essential reading.
"Advances in computer technology and developments such as the Internet provide a constant momentum to design new techniques and algorithms to support computer graphics. Modelling, animation and rendering remain principal topics in the filed of computer graphics and continue to attract researchers around the world." This volume contains the papers presented at Computer Graphics International 2002, in July, at the University of Bradford, UK. These papers represent original research in computer graphics from around the world and cover areas such as:- Real-time computer animation - Image based rendering - Non photo-realistic rendering - Virtual reality - Avatars - Geometric and solid modelling - Computational geometry - Physically based modelling - Graphics hardware architecture - Data visualisation - Data compression The focus is on the commercial application and industrial use of computer graphics and digital media systems.
This book explains efficient solutions for segmenting the intensity levels of different types of multilevel images. The authors present hybrid soft computing techniques, which have advantages over conventional soft computing solutions as they incorporate data heterogeneity into the clustering/segmentation procedures. This is a useful introduction and reference for researchers and graduate students of computer science and electronics engineering, particularly in the domains of image processing and computational intelligence.
Biometrics-based authentication and identification are emerging as the most reliable method to authenticate and identify individuals. Biometrics requires that the person to be identified be physically present at the point of identification, and relies on 'something which you are or you do' to provide better security, increased efficiency, and improved accuracy. Automated biometrics deals with physiological or behavioral characteristics such as fingerprints, signature, palmprint, iris, hand, voice and face that can be used to authenticate a person's identity or establish an identity from a database. With rapid progress in electronic and Internet commerce, there is also a growing need to authenticate the identity of a person for secure transaction processing. In designing an automated biometrics system to handle large-population identification, the accuracy and reliability of authentication are challenging tasks. Currently, there are over ten different biometrics systems that are either widely used or under development. Some automated biometrics, such as fingerprint identification and speaker verification, have received considerable attention over the past 25 years, and some issues like face recognition and iris-based authentication have been studied extensively, resulting in the successful development of biometrics systems in commercial applications. However, very few books are exclusively devoted to such issues of automated biometrics. Automated Biometrics: Technologies and Systems systematically introduces these technologies and systems, and explores how to design the corresponding systems with in-depth discussion. The issues addressed in this book are highly relevant to many fundamental concerns of both researchers and practitioners of automated biometrics in computer and system security.
Advancements in digital sensor technology, digital image analysis techniques, as well as computer software and hardware have brought together the fields of computer vision and photogrammetry, which are now converging towards sharing, to a great extent, objectives and algorithms. The potential for mutual benefits by the close collaboration and interaction of these two disciplines is great, as photogrammetric know-how can be aided by the most recent image analysis developments in computer vision, while modern quantitative photogrammetric approaches can support computer vision activities. Devising methodologies for automating the extraction of man-made objects (e.g. buildings, roads) from digital aerial or satellite imagery is an application where this cooperation and mutual support is already reaping benefits. The valuable spatial information collected using these interdisciplinary techniques is of improved qualitative and quantitative accuracy. This book offers a comprehensive selection of high-quality and in-depth contributions from world-wide leading research institutions, treating theoretical as well as implementational issues, and representing the state-of-the-art on this subject among the photogrammetric and computer vision communities.
This graduate-level text provides a language for understanding, unifying, and implementing a wide variety of algorithms for digital signal processing - in particular, to provide rules and procedures that can simplify or even automate the task of writing code for the newest parallel and vector machines. It thus bridges the gap between digital signal processing algorithms and their implementation on a variety of computing platforms. The mathematical concept of tensor product is a recurring theme throughout the book, since these formulations highlight the data flow, which is especially important on supercomputers. Because of their importance in many applications, much of the discussion centres on algorithms related to the finite Fourier transform and to multiplicative FFT algorithms.
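To illustrate the tensor-product viewpoint described above, the following sketch numerically verifies the classic Cooley-Tukey factorization F_4 = (F_2 (x) I_2) T_4 (I_2 (x) F_2) P_4, where T_4 holds the twiddle factors and P_4 is the even-odd stride permutation. This is standard FFT material written from scratch, not code from the book:

```python
import cmath

def matmul(A, B):
    """Plain dense matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker (tensor) product: row (i,k), column (j,l) blocks."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def dft(n):
    """The n-by-n Fourier matrix with entries w^(ij), w = exp(-2*pi*i/n)."""
    w = cmath.exp(-2j * cmath.pi / n)
    return [[w ** (i * j) for j in range(n)] for i in range(n)]

def diag(d):
    return [[d[i] if i == j else 0 for j in range(len(d))]
            for i in range(len(d))]

I2 = [[1, 0], [0, 1]]
F2 = dft(2)
w4 = cmath.exp(-2j * cmath.pi / 4)
T4 = diag([1, 1, 1, w4])          # twiddle factors: diag(I2, diag(1, w4))
P4 = [[1, 0, 0, 0],               # stride permutation: x -> (x0, x2, x1, x3)
      [0, 0, 1, 0],
      [0, 1, 0, 0],
      [0, 0, 0, 1]]

# F4 = (F2 (x) I2) . T4 . (I2 (x) F2) . P4
F4_factored = matmul(matmul(kron(F2, I2), T4), matmul(kron(I2, F2), P4))
F4_direct = dft(4)
```

Factorizations of exactly this shape are what make the data flow of an FFT explicit, which is the property the text exploits when mapping algorithms onto parallel and vector machines.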
6.2 Representation of hints
6.3 Monotonicity hints
6.4 Theory
6.4.1 Capacity results
6.4.2 Decision boundaries
6.5 Conclusion
6.6 References
7 Analysis and Synthesis Tools for Robust SPRness (C. Mosquera, J.R. Hernandez, F. Perez-Gonzalez)
7.1 Introduction
7.2 SPR Analysis of Uncertain Systems
7.2.1 The Polytopic Case
7.2.2 The ZP-Ball Case
7.2.3 The Roots Space Case
7.3 Synthesis of LTI Filters for Robust SPR Problems
7.3.1 Algebraic Design for Two Plants
7.3.2 Algebraic Design for Three or More Plants
7.3.3 Approximate Design Methods
7.4 Experimental results
7.5 Conclusions
7.6 References
8 Boundary Methods for Distribution Analysis (J.L. Sancho et al.)
8.1 Introduction
8.1.1 Building a Classifier System
8.2 Motivation
8.3 Boundary Methods as Feature-Set Evaluation
8.3.1 Results
8.3.2 Feature Set Evaluation using Boundary Methods: Summary
Despite their novelty, wavelets have a tremendous impact on a number of modern scientific disciplines, particularly on signal and image analysis. Because of their powerful underlying mathematical theory, they offer exciting opportunities for the design of new multi-resolution processing algorithms and effective pattern recognition systems. This book provides a much-needed overview of current trends in the practical application of wavelet theory. It combines cutting edge research in the rapidly developing wavelet theory with ideas from practical signal and image analysis fields. Subjects dealt with include balanced discussions on wavelet theory and its specific application in diverse fields, ranging from data compression to seismic equipment. In addition, the book offers insights into recent advances in emerging topics such as double density DWT, multiscale Bayesian estimation, symmetry and locality in image representation, and image fusion. Audience: This volume will be of interest to graduate students and researchers whose work involves acoustics, speech, signal and image processing, approximations and expansions, Fourier analysis, and medical imaging.
This book discusses research, methods, and recent developments in the interdisciplinary field that spans research in visualization, eye tracking, human-computer interaction, and psychology. It presents extended versions of papers from the First Workshop on Eye Tracking and Visualization (ETVIS), which was organized as a workshop of the IEEE VIS Conference 2015. Topics include visualization and visual analytics of eye-tracking data, metrics and cognitive models, eye-tracking experiments in the context of visualization interfaces, and eye tracking in 3D and immersive environments. The extended ETVIS papers are complemented by a chapter offering an overview of visualization approaches for analyzing eye-tracking data and a chapter that discusses electrooculography (EOG) as an alternative means of acquiring information about eye movements. Covering scientific visualization, information visualization, and visual analytics, this book is a valuable resource for eye-tracking researchers within the visualization community.
In his paper "Theory of Communication" [Gab46], D. Gabor proposed the use of a family of functions obtained from one Gaussian by time- and frequency-shifts. Each of these is well concentrated in time and frequency; together they are meant to constitute a complete collection of building blocks into which more complicated time-dependent functions can be decomposed. The application to communication proposed by Gabor was to send the coefficients of the decomposition of a signal into this family, rather than the signal itself. This remained a proposal; as far as I know there were no serious attempts to implement it for communication purposes in practice, and in fact, at the critical time-frequency density proposed originally, there is a mathematical obstruction. As was understood later, the family of shifted and modulated Gaussians spans the space of square integrable functions [BBGK71, Per71] (it even has one function to spare [BGZ75] . . . ) but it does not constitute what we now call a frame, leading to numerical instabilities. The Balian-Low theorem (about which the reader can find more in some of the contributions in this book) and its extensions showed that a similar mishap occurs if the Gaussian is replaced by any other function that is "reasonably" smooth and localized. One is thus led naturally to considering a higher time-frequency density.
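In standard modern notation (my own summary, not drawn from this text), the family Gabor proposed and the frame condition at issue can be written as:

```latex
% Gabor's family: time- and frequency-shifted copies of a window g
% (for Gabor, a Gaussian), with lattice parameters a, b > 0:
g_{m,n}(t) = e^{2\pi i m b t}\, g(t - na), \qquad m, n \in \mathbb{Z}.
% The family is a frame for L^2(\mathbb{R}) if there exist constants
% 0 < A \le B < \infty such that, for every f \in L^2(\mathbb{R}),
A\,\|f\|^2 \;\le\; \sum_{m,n \in \mathbb{Z}} \bigl|\langle f, g_{m,n}\rangle\bigr|^2 \;\le\; B\,\|f\|^2.
```

At Gabor's critical density ab = 1 the Gaussian family is complete, but no positive lower bound A exists; this missing frame bound is the source of the numerical instabilities mentioned above, and oversampling (ab < 1) restores it.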