This book provides an in-depth overview of artificial intelligence and deep learning approaches, with case studies, for solving problems in biometric security such as authentication, indexing, template protection, spoofing-attack detection, ROI detection, and gender classification. It showcases cutting-edge research on the use of convolutional neural networks, autoencoders, and recurrent convolutional neural networks for face, hand, iris, gait, fingerprint, vein, and medical biometric traits. It also provides a step-by-step guide to deep learning concepts for biometric authentication approaches and presents an analysis of biometric images under various environmental conditions. This book is sure to catch the attention of scholars, researchers, practitioners, and technology enthusiasts who wish to conduct research in the field of AI and biometric security.
Create code art, visualizations, and interactive applications with this powerful yet simple computer language and programming environment. Learn how to code 2D and 3D animation, pixel-level imaging, motion effects, and physics simulations. Take a creative and fun approach to learning creative computer programming. If you're interested in creating cutting-edge code-based art and animations, you've come to the right place! Processing (available at www.processing.org) is a revolutionary open source programming language and environment designed to bridge the gap between programming and art, allowing non-programmers to learn programming fundamentals as easily as possible, and empowering anyone to produce beautiful creations using math patterns. With the software freely available, Processing provides an accessible alternative to using Flash for creative coding and computational art, both on and off the Web. This book is written especially for artists, designers, and other creative professionals and students exploring code art, graphics programming, and computational aesthetics. The book provides a solid and comprehensive foundation in programming, including object-oriented principles, and introduces you to the easy-to-grasp Processing language, so no previous coding experience is necessary. The book then goes through using Processing to code lines, curves, shapes, and motion, continuing to the point where you'll have mastered Processing and can really start to unleash your creativity with realistic physics, interactivity, and 3D! In the final chapter, you'll even learn how to extend your Processing skills by working directly with the powerful Java programming language, the language Processing itself is built with.
You'll learn: the fundamentals of creative computer programming, from procedural programming to object-oriented programming to pure Java programming; how to virtually draw, paint, and sculpt using computer code and clearly explained mathematical concepts; 2D and 3D programming techniques, motion design, and cool graphics effects; how to code your own pixel-level imaging effects, such as image contrast, color saturation, custom gradients and more; and advanced animation techniques, including realistic physics and artificial life simulation.
Summary of Contents
PART ONE: THEORY OF PROCESSING AND COMPUTATIONAL ART
Chapter 1: Code Art
Chapter 2: Creative Coding
Chapter 3: Code Grammar 101
Chapter 4: Computer Graphics, the Fun, Easy Way
Chapter 5: The Processing Environment
PART TWO: PUTTING THEORY INTO PRACTICE
Chapter 6: Lines
Chapter 7: Curves
Chapter 8: Object-Oriented Programming
Chapter 9: Shapes
Chapter 10: Color and Imaging
Chapter 11: Motion
Chapter 12: Interactivity
Chapter 13: 3D
Chapter 14: 3D Rendering in Java Mode
PART THREE: REFERENCE
Appendix A: Processing Language API
Appendix B: Math Reference
Appendix C: Integrating Processing within Java
For junior- to graduate-level courses in computer graphics. Assuming no background in computer graphics, this junior- to graduate-level textbook presents basic principles for the design, use, and understanding of computer graphics systems and applications. The authors, authorities in their field, offer an integrated approach to two-dimensional and three-dimensional graphics topics. A comprehensive explanation of the popular OpenGL programming package, along with C++ programming examples, illustrates applications of the various functions in the OpenGL basic library and the related GLU and GLUT packages.
The realistic generation of virtual doubles of real-world actors has been the focus of computer graphics research for many years. However, some problems remain unsolved: it is still time-consuming to generate character animations using the traditional skeleton-based pipeline; passive performance capture of human actors wearing arbitrary everyday apparel is still challenging; and there are, as yet, only a limited number of techniques for processing and modifying mesh animations, in contrast to the huge number of skeleton-based techniques. In this thesis, we propose algorithmic solutions to each of these problems. First, two efficient mesh-based alternatives to simplify the overall character animation process are proposed. Although abandoning the concept of a kinematic skeleton, both techniques can be directly integrated into the traditional pipeline, generating animations with realistic body deformations. Thereafter, three passive performance capture methods are presented which employ a deformable model as the underlying scene representation. The techniques are able to jointly reconstruct spatio-temporally coherent time-varying geometry, motion, and textural surface appearance of subjects wearing loose everyday apparel. Moreover, the acquired high-quality reconstructions enable us to render realistic 3D videos. Finally, two novel algorithms for processing mesh animations are described: the first enables the fully automatic conversion of a mesh animation into a skeleton-based animation, and the second automatically converts a mesh animation into an animation collage, a new artistic style for rendering animations. The methods described in this thesis can be regarded as solutions to specific problems or as important building blocks for a larger application. As a whole, they form a powerful system to accurately capture, manipulate and realistically render real-world human performances, exceeding the capabilities of many related capture techniques.
By this means, we are able to correctly capture the motion, the time-varying details and the texture information of a real performing human, and either transform it into a fully rigged character animation that can be used directly by an animator, or use it to realistically display the actor from arbitrary viewpoints.
This book presents established and new approaches to performing calculations of electrostatic interactions at the nanoscale, with a particular focus on molecular biology applications. It is based on the proceedings of the Computational Electrostatics for Biological Applications international meeting, which brought together researchers in computational disciplines to discuss and explore diverse methods to improve electrostatic calculations. Fostering an interdisciplinary approach to the description of complex physical and biological problems, this book encompasses contributions originating in the fields of geometry processing, shape modeling, applied mathematics, and computational biology and chemistry. The main topics covered are theoretical and numerical aspects of the solution of the Poisson-Boltzmann equation, together with surveys and comparisons of geometric approaches to the modelling of molecular surfaces and the related discretization and computational issues. It also includes a number of contributions addressing applications in biology, biophysics and nanotechnology. The book is primarily intended as a reference for researchers in the computational molecular biology and chemistry fields. As such, it also aims to become a key source of information for a wide range of scientists who need to know how modeling and computing at the molecular level may influence the design and interpretation of their experiments.
Welcome to the Second International IFIP Entertainment Computing Symposium on Cultural Computing (ECS 2010), which was part of the 21st IFIP World Computer Congress, held in Brisbane, Australia during September 21-23, 2010. On behalf of the people who made this conference happen, we wish to welcome you to this international event. The IFIP World Computer Congress has offered an opportunity for researchers and practitioners to present their findings and research results in several prominent areas of computer science and engineering. In the last World Computer Congress, WCC 2008, held in Milan, Italy in September 2008, IFIP launched a new initiative focused on all the relevant issues concerning computing and entertainment. As a result, the two-day technical program of the First Entertainment Computing Symposium (ECS 2008) provided a forum to address, explore and exchange information on the state of the art of computer-based entertainment and allied technologies, their design and use, and their impact on society. Based on the success of ECS 2008, at this Second IFIP Entertainment Computing Symposium (ECS 2010), our challenge was to focus on a new area in entertainment computing: cultural computing.
This book is a comprehensive, hands-on guide to the basics of data mining and machine learning, with a special emphasis on supervised and unsupervised learning methods. It stresses the new ways of thinking needed to master machine learning, based on the Python, R, and Java programming platforms. The book first provides an understanding of data mining, machine learning and their applications, giving special attention to classification and clustering techniques. The authors discuss data mining and machine learning techniques with case studies and examples, and provide hands-on coding examples of well-known supervised and unsupervised learning techniques on three popular coding platforms: R, Python, and Java. The book explains some of the most popular classification techniques (K-NN, Naive Bayes, decision tree, random forest, support vector machine, etc.) along with a basic description of artificial neural networks and deep neural networks. The book is useful for professionals, students studying data mining and machine learning, and researchers in supervised and unsupervised learning techniques.
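Of the classifiers named above, K-NN is the easiest to illustrate in a few lines. The following is a minimal sketch in Python (one of the book's three platforms); the toy data set and the function name `knn_classify` are illustrative assumptions, not code from the book:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Label `query` by majority vote among its k nearest
    training points, using Euclidean distance."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D training set: two well-separated clusters.
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"), ((0.9, 1.1), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b"), ((4.8, 5.1), "b")]

print(knn_classify(train, (1.1, 1.0)))  # near the first cluster: "a"
print(knn_classify(train, (5.1, 5.0)))  # near the second cluster: "b"
```

Increasing k smooths the decision boundary at the cost of blurring small clusters; the same majority-vote logic translates directly to R or Java.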
Intelligent multimedia surveillance concerns the analysis of multiple sensing inputs including video and audio streams, radio-frequency identification (RFID), and depth data. These data are processed for the automated detection and tracking of people, vehicles, and other objects. The goal is to locate moving targets, to understand their behavior, and to detect suspicious or abnormal activities for crime prevention. Despite its benefits, there is societal apprehension regarding the use of such technology, so an important challenge in this research area is to balance public safety and privacy. This edited book presents recent findings in the field of intelligent multimedia surveillance emerging from disciplines such as multimedia computing, computer vision, and artificial intelligence. It consists of nine chapters addressing intelligent video surveillance, video analysis of crowds, privacy issues in intelligent multimedia surveillance, RFID technology for localization of objects, object tracking using visual saliency information, estimating multiresolution depth using active stereo vision, and performance evaluation for video surveillance systems. The book will be of value to researchers and practitioners working on related problems in security, multimedia, and artificial intelligence.
First published in 1988. Routledge is an imprint of Taylor & Francis, an informa company.
First published in 1988. Routledge is an imprint of Taylor & Francis, an informa company.
Modern image processing techniques are based on multiresolution geometrical methods of image representation, which are efficient for sparse approximation of digital images. There is a wide family of such functions, called simply X-lets, and the methods can be divided into two groups: adaptive and nonadaptive. This book is devoted to the adaptive methods of image approximation, especially to multismoothlets. Besides multismoothlets, several other new ideas are also covered. The current literature treats black-and-white images with a smooth horizon function as the model for sparse approximation; here, the class of blurred multihorizon images is introduced and then used in the approximation of images with multiedges. Additionally, the semi-anisotropic model of multiedge representation, the shift-invariant multismoothlet transform and sliding multismoothlets are also covered. "Geometrical Multiresolution Adaptive Transforms" should be accessible to both mathematicians and computer scientists. It is suitable as a professional reference for students, researchers and engineers, contains many open problems, and will be an excellent starting point for those who are beginning new research in the area or who want to use geometrical multiresolution adaptive methods in image processing, analysis or compression.
Still Image Compression on Parallel Computer Architectures investigates the application of parallel-processing techniques to digital image compression. Digital image compression is used to reduce the number of bits required to store an image in computer memory and/or transmit it over a communication link. Over the past decade advancements in technology have spawned many applications of digital imaging, such as photo videotex, desktop publishing, graphic arts, color facsimile, newspaper wire phototransmission and medical imaging. For many other contemporary applications, such as distributed multimedia systems, rapid transmission of images is necessary. Both the dollar cost and the time cost of transmission and storage tend to be directly proportional to the volume of data, so digital image compression techniques become necessary to minimize these costs. A number of digital image compression algorithms have been developed and standardized. With the success of these algorithms, research effort is now directed towards improving implementation techniques. The Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG) are international bodies which have developed digital image compression standards. Hardware (VLSI chips) that implements the JPEG image compression algorithm is available. Such hardware is specific to image compression only and cannot be used for other image processing applications. A flexible means of implementing digital image compression algorithms is still required, and an obvious way to process different imaging applications on general-purpose hardware platforms is to develop software implementations. JPEG uses an 8 x 8 block of image samples as the basic element for compression. These blocks are processed sequentially, and there is always the possibility of similar blocks occurring in a given image. If similar blocks in an image are located, then repeated compression of these blocks is not necessary.
By locating similar blocks in the image, the speed of compression can be increased and the size of the compressed image can be reduced. Based on this concept, an enhancement to the JPEG algorithm is proposed, called the Block Comparator Technique (BCT). Still Image Compression on Parallel Computer Architectures is designed for advanced students and practitioners of computer science. This comprehensive reference provides a foundation for understanding digital image compression techniques and parallel computer architectures.
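The description above gives only the idea behind the Block Comparator Technique: locate similar 8 x 8 blocks so that duplicates need to be compressed only once. As a rough sketch of that idea, under the simplifying assumption that "similar" means byte-for-byte identical (the actual BCT may use a similarity threshold, and the function name `dedupe_blocks` is ours), one can map each block to its first occurrence with a hash table:

```python
def dedupe_blocks(blocks):
    """For each 8x8 block (a tuple of 64 sample values), return the
    index of the first identical block, so that only the first
    occurrence of each distinct block needs to be compressed."""
    first_seen = {}  # block contents -> index of first occurrence
    refs = []
    for i, block in enumerate(blocks):
        refs.append(first_seen.setdefault(block, i))
    return refs

blocks = [
    (0,) * 64,    # flat dark block
    (255,) * 64,  # flat bright block
    (0,) * 64,    # duplicate of block 0: reference it instead
]
print(dedupe_blocks(blocks))  # [0, 1, 0]
```

On a parallel architecture, the block comparisons can be distributed across processors, which is the setting the book targets.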
Topology-based methods are of increasing importance in the analysis and visualization of datasets from a wide variety of scientific domains such as biology, physics, engineering, and medicine. Current challenges of topology-based techniques include the management of time-dependent data, the representation of large and complex datasets, the characterization of noise and uncertainty, the effective integration of numerical methods with robust combinatorial algorithms, etc. (see also below for a list of selected issues). While there is an increasing number of high-quality publications in this field, many fundamental questions remain unsolved. New focused efforts are needed in a variety of techniques, ranging from the theoretical foundations of topological models and algorithmic issues related to the representation power and computational efficiency of computer-based implementations, to user interfaces for presenting quantitative topological information and new techniques for systematically mapping science problems into topological constructs that can be solved computationally. In this forum the editors have brought together the most prominent and best recognized researchers in the field of topology-based data analysis and visualization for a joint discussion and scientific exchange of the latest results in the field. The 2009 workshop in Snowbird, Utah, follows the two successful workshops in 2005 (Budmerice, Slovakia) and 2007 (Leipzig, Germany).
Imagine a world where machines can see and understand the world the way humans do. Rapid progress in artificial intelligence has led to smartphones that recognize faces, cars that detect pedestrians, and algorithms that suggest diagnoses from clinical images, among many other applications. The success of computer vision is founded on a deep understanding of the neural circuits in the brain responsible for visual processing. This book introduces the neuroscientific study of neuronal computations in visual cortex alongside the psychological understanding of visual cognition and the burgeoning field of biologically inspired artificial intelligence. Topics include the neurophysiological investigation of visual cortex, visual illusions, visual disorders, deep convolutional neural networks, machine learning, and generative adversarial networks, among others. It is an ideal resource for students and researchers looking to build bridges across different approaches to studying and developing visual systems.
1.1 Digital Optics as a Subject
Improvement of the quality of optical devices has always been the central task of experimental optics. In modern terms, improvements in sensitivity and resolution have equated higher quality with greater informational throughput. For most of today's applications, optics and electronics have, in essence, solved the problem of generating high quality pictures with great informational capacity. Effective use of the enormous amount of information contained in the images necessitates processing pictures, holograms, and interferograms. The manner in which information might be extracted from optical entities has become a topic of current interest. The informational aspects of optical signals and systems might serve as a basis for attacking this question by making use of information theory and signal communication theory, and by enlisting modern tools and methods for data processing (the most important and powerful of which are those of digital computation). Exploiting modern advances in electronics has allowed new wavelength ranges and new kinds of radiation to be used in optics. Computers have extended our knowledge of the informational essence of radiation. Thus, computerized optical devices enhance not only the optical capabilities of sight, but also its analytical capabilities as well, thus opening qualitatively new horizons to all the areas in which optical devices have found application.
Efficient parallel solutions have been found to many problems. Some of them can be obtained automatically from sequential programs, using compilers. However, there is a large class of problems - irregular problems - that lack efficient solutions. IRREGULAR 94 - a workshop and summer school organized in Geneva - addressed the problems associated with the derivation of efficient solutions to irregular problems. This book, which is based on the workshop, draws on the contributions of outstanding scientists to present the state of the art in irregular problems, covering aspects ranging from scientific computing and discrete optimization to the automatic extraction of parallelism. Audience: This first book on parallel algorithms for irregular problems is of interest to advanced graduate students and researchers in parallel computer science.
Get a broad overview of the different modalities of immersive video technologies (from omnidirectional video to light fields and volumetric video) from a multimedia processing perspective. From capture to representation, coding, and display, video technologies have been evolving significantly and in many different directions over the last few decades, with the ultimate goal of providing a truly immersive experience to users. After setting up a common background for these technologies, based on the theoretical concept of the plenoptic function, Immersive Video Technologies offers a comprehensive overview of the leading technologies enabling visual immersion, including omnidirectional (360 degrees) video, light fields, and volumetric video. Following the critical components of the typical content production and delivery pipeline, the book presents acquisition, representation, coding, rendering, and quality assessment approaches for each immersive video modality. The text also reviews current standardization efforts and explores new research directions. With this book the reader will a) gain a broad understanding of immersive video technologies that use three different modalities: omnidirectional video, light fields, and volumetric video; b) learn about the most recent scientific results in the field, including the recent learning-based methodologies; and c) understand the challenges and perspectives for immersive video technologies.
This is nothing less than a totally essential reference for engineers and researchers in any field of work that involves the use of compressed imagery. Beginning with a thorough and up-to-date overview of the fundamentals of image compression, the authors move on to provide a complete description of the JPEG2000 standard. They then devote space to the implementation and exploitation of that standard. The final section describes other key image compression systems. This work has specific applications for those involved in the development of software and hardware solutions for multimedia, internet, and medical imaging applications. Included is a CD-ROM that provides a complete C++ implementation of JPEG2000 Part 1.
'Phase-only Fresnel holograms,' which can be displayed on a single SLM without the need for lenses or complicated optical accessories, substantially simplify 3-D holographic display systems. Exploring essential concepts, theories, and formulations of these phase-only Fresnel holograms, this book provides comprehensive coverage of modern methods for generating such holograms, which pave the way for commercial products such as compact holographic projectors, heads-up displays, and data security enhancement. Relevant MATLAB codes are provided for readers to implement and evaluate the theories and formulations of different methods, and can be used as a quick-start framework for further research and development. This is a crucial and up-to-date treatment of phase-only Fresnel holograms for students and researchers in electrical and electronic engineering, computer science/engineering, applied physics, information technology, and multimedia technology, as well as engineers and scientists in industry developing new products on 3-D displays and holographic projection.
This book provides an overview of different deep learning-based methods for face recognition and related problems. Specifically, the authors present methods based on autoencoders, restricted Boltzmann machines, and deep convolutional neural networks for face detection, localization, tracking, recognition, etc. The authors also discuss the merits and drawbacks of available approaches and identify promising avenues of research in this rapidly evolving field. Although a number of different approaches to face recognition based on deep learning methods have been proposed, no single book in the literature gives a complete overview of these methods. This book captures the state of the art in face recognition using various deep learning methods, and it covers a variety of different topics related to face recognition. This book is aimed at graduate students studying electrical engineering and/or computer science. Biometrics is a course that is widely offered at both undergraduate and graduate levels at many institutions around the world, so this book can be used as a textbook for teaching topics related to face recognition. In addition, the work is beneficial to practitioners in industry who are working on biometrics-related problems. The prerequisites for optimal use are basic knowledge of pattern recognition, machine learning, probability theory, and linear algebra.
Digital Image Processing with C++ presents the theory of digital image processing, and implementations of algorithms using a dedicated library. Processing a digital image means transforming its content (denoising, stylizing, etc.), or extracting information to solve a given problem (object recognition, measurement, motion estimation, etc.). This book presents the mathematical theories underlying digital image processing, as well as their practical implementation through examples of algorithms implemented in the C++ language, using the free and easy-to-use CImg library. The chapters broadly cover the field of digital image processing and propose practical, functional implementations of each theoretically described method. The main topics covered include filtering in spatial and frequency domains, mathematical morphology, feature extraction and applications to segmentation, motion estimation, multispectral image processing and 3D visualization. Students or developers wishing to discover or specialize in this discipline, and teachers and researchers wishing to quickly prototype new algorithms or develop courses, will all find in this book material to discover image processing or deepen their knowledge in this field.
This book is aimed at those using colour image processing or researching new applications or techniques of colour image processing. It has been clear for some time that there is a need for a text dedicated to colour. We foresee a great increase in the use of colour over the coming years, both in research and in industrial and commercial applications. We are sure this book will prove a useful reference text on the subject for practicing engineers and scientists, for researchers, and for students at doctoral and, perhaps, master's level. It is not intended as an introductory text on image processing; rather, it assumes that the reader is already familiar with basic image processing concepts such as image representation in digital form, linear and non-linear filtering, transforms, edge detection and segmentation, and so on, and has some experience with using, at the least, monochrome equipment. There are many books covering these topics and some of them are referenced in the text, where appropriate. The book covers a restricted, but nevertheless very important, subset of image processing concerned with natural colour (that is, colour as perceived by the human visual system). This is an important field because it shares much technology and basic theory with colour television and video equipment, the market for which is worldwide and very large; and with the growing field of multimedia, including the use of colour images on the Internet.
Machine Vision Algorithms in Java provides a comprehensive introduction to the algorithms and techniques associated with machine vision systems. The Java programming language is also introduced, with particular reference to its imaging capabilities. The book contains explanations of key machine vision techniques and algorithms, along with the associated Java source code. Special features include:
- A complete self-contained treatment of the topics and techniques essential to the understanding and implementation of machine vision.
- An introduction to object-oriented programming and to the Java programming language, with particular reference to its imaging capabilities.
- Java source code for a wide range of practical image processing and analysis functions.
- The opportunity to download a fully functional Java-based visual programming environment for machine vision, available via the WWW. This contains over 200 image processing, manipulation and analysis functions and will enable users to implement many of the ideas covered in this book.
- Details relating to the design of a Java-based visual programming environment for machine vision.
- An introduction to the Java 2D imaging and Java Advanced Imaging (JAI) APIs.
- A wide range of illustrative examples.
- Practical treatment of the subject matter.
This book is aimed at senior undergraduate and postgraduate students in engineering and computer science as well as practitioners in machine vision who may wish to update or expand their knowledge of the subject. The techniques and algorithms of machine vision are expounded in a way that will be understood not only by specialists but also by those who are less familiar with the topic.