This is the third edition of Character Development and Storytelling for Games, a standard work in the field that brings all of the teaching from the first two books up to date and tackles the new challenges of today. Professional game writer and designer Lee Sheldon combines his experience and expertise in this updated edition. New examples, new game types, and new challenges throughout the text highlight the fundamentals of character writing and storytelling. But this book is not just a box of techniques for writers of video games. It is an exploration of the roots of character development and storytelling that readers can trace from Homer to Chaucer to Cervantes to Dickens and even Mozart. Many contemporary writers also contribute insights from books, plays, television, films, and, yes, games. Sheldon and his contributors emphasize the importance of creative instinct and listening to the inner voice that guides successful game writers and designers. Join him on his quest to instruct, inform, and maybe even inspire your next great game.
This book is aimed at those using colour image processing or researching new applications or techniques of colour image processing. It has been clear for some time that there is a need for a text dedicated to colour. We foresee a great increase in the use of colour over the coming years, both in research and in industrial and commercial applications. We are sure this book will prove a useful reference text on the subject for practicing engineers and scientists, for researchers, and for students at doctoral and, perhaps, masters level. It is not intended as an introductory text on image processing; rather, it assumes that the reader is already familiar with basic image processing concepts such as image representation in digital form, linear and non-linear filtering, transforms, edge detection and segmentation, and so on, and has some experience with using, at the least, monochrome equipment. There are many books covering these topics and some of them are referenced in the text, where appropriate. The book covers a restricted, but nevertheless very important, subset of image processing concerned with natural colour (that is, colour as perceived by the human visual system). This is an important field because it shares much technology and basic theory with colour television and video equipment, the market for which is worldwide and very large, and with the growing field of multimedia, including the use of colour images on the Internet.
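The distinction between device RGB and more perceptual colour representations that such a text builds on can be sketched with Python's standard colorsys module; the pixel values below are illustrative, not taken from the book:

```python
import colorsys

# A single RGB pixel, normalized to [0, 1] (illustrative values).
r, g, b = 0.8, 0.4, 0.2

# Convert to HSV, a representation closer to perceived colour:
# hue carries the chromatic content, value the brightness.
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Rotate the hue by half a turn to get the complementary colour,
# keeping saturation and brightness fixed -- a one-line operation
# here, but awkward to express directly on the R, G, B channels.
h2 = (h + 0.5) % 1.0
r2, g2, b2 = colorsys.hsv_to_rgb(h2, s, v)
```

Operations that are natural in a perceptual space (hue rotation, desaturation) are exactly the kind of colour-specific processing that has no clean monochrome counterpart.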
RUNS, JUMPS AND SKIPS From Richard Williams' The Animator's Survival Kit come key chapters in mini form. The Animator's Survival Kit is the essential tool for animators. However, sometimes you don't want to carry the hefty expanded edition around with you to your college or studio if you're working on just one aspect of it that day. The Animation Minis take some of the most essential chapters and make them available in smaller, lightweight, handbag/backpack-size versions. Easy to carry. Easy to study. This Mini focuses on Runs, Jumps and Skips. As with Walks, the way we run shows our character and personality. A lazy, heavy person is going to run very differently to an athletic ten-year-old girl. Richard Williams demonstrates how - when you're doing a walk and you take both legs off the ground, at the same time and for just one frame - a walk becomes a run. So, all the things we do with walks, we can do with runs. This Mini presents a collection of Williams' runs, jumps and skips inspired by some of the cleverest artists from the Golden Age of Animation.
Machine Vision Algorithms in Java provides a comprehensive introduction to the algorithms and techniques associated with machine vision systems. The Java programming language is also introduced, with particular reference to its imaging capabilities. The book contains explanations of key machine vision techniques and algorithms, along with the associated Java source code. Special features include:
- A complete self-contained treatment of the topics and techniques essential to the understanding and implementation of machine vision.
- An introduction to object-oriented programming and to the Java programming language, with particular reference to its imaging capabilities.
- Java source code for a wide range of practical image processing and analysis functions.
- The opportunity to download a fully functional Java-based visual programming environment for machine vision, available via the WWW. This contains over 200 image processing, manipulation and analysis functions and will enable users to implement many of the ideas covered in this book.
- Details relating to the design of a Java-based visual programming environment for machine vision.
- An introduction to the Java 2D imaging and Java Advanced Imaging (JAI) APIs.
- A wide range of illustrative examples.
- Practical treatment of the subject matter.
This book is aimed at senior undergraduate and postgraduate students in engineering and computer science as well as practitioners in machine vision who may wish to update or expand their knowledge of the subject. The techniques and algorithms of machine vision are expounded in a way that will be understood not only by specialists but also by those who are less familiar with the topic.
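The book's examples are in Java; as a language-neutral illustration of the kind of elementary point operation such a machine vision library provides, here is a thresholding function sketched in Python (the image and threshold values are made up):

```python
def threshold(image, t):
    """Binarize a greyscale image (a list of rows of 0-255 ints):
    pixels >= t become 255, all others 0 -- one of the elementary
    point operations any machine vision toolkit offers."""
    return [[255 if p >= t else 0 for p in row] for row in image]

# A tiny 2x3 test image (invented values).
img = [
    [10, 200, 30],
    [250, 60, 128],
]
binary = threshold(img, 128)
```

Thresholding is typically the first step before connected-component labelling or blob analysis in an inspection pipeline.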
This book is dedicated to engineers and researchers who would like to deepen their knowledge of mobile mapping systems. The flow of the derived information is therefore divided into subproblems corresponding to particular mobile mapping data and the related observation equations. The proposed methodology does not fulfil every SLAM aspect evident in the literature; rather, it is based on experience within the context of pragmatic and realistic applications. Thus, it can serve as supporting material for those who are familiar with SLAM and would like to gain a broader overview of the subject. The novelty is a complete and interdisciplinary methodology for large-scale mobile mapping applications. The contribution is a set of programming examples available as complementary material for this book. All observation equations are implemented, and for each one a programming example is provided. The programming examples are simple C++ implementations that can be extended by students or engineers; prior coding experience is therefore not mandatory. Moreover, since the implementation does not require many additional external programming libraries, it can be easily integrated with any mobile mapping framework. Finally, the purpose of this book is to collect all the necessary observation equations and solvers needed to build a computational system capable of providing large-scale maps.
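The book's own examples are C++ implementations of full observation equations; the toy sketch below (in Python, with invented data and weights) only illustrates the underlying idea of a weighted least-squares solver applied to one scalar observation equation:

```python
# Toy "observation equation": several sensors observe the same scalar
# position x, so each observation is z_i = x + noise with weight w_i.
# The weighted least-squares estimate minimizes sum w_i * (z_i - x)^2,
# the same normal-equation machinery that larger mobile-mapping
# adjustments apply per observation type (all values here are invented).
def weighted_least_squares(observations, weights):
    num = sum(w * z for z, w in zip(observations, weights))
    den = sum(weights)
    return num / den

z = [10.2, 9.8, 10.4]   # three noisy observations of the same position
w = [1.0, 1.0, 2.0]     # the third sensor is trusted twice as much
x_hat = weighted_least_squares(z, w)
```

In a real adjustment each observation type contributes its own residual and Jacobian, but the solve step reduces to the same weighted normal equations.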
This book includes the best selected, high-quality research papers presented at the International Conference on Intelligent Manufacturing and Energy Sustainability (ICIMES 2021), held at the Department of Mechanical Engineering, Malla Reddy College of Engineering & Technology (MRCET), Maisammaguda, Hyderabad, India, during June 18-19, 2021. It covers topics in the areas of automation, manufacturing technology and energy sustainability, and also includes original work on intelligent systems, manufacturing, mechanical, electrical, aeronautical, materials, automobile, bioenergy and energy sustainability.
Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product, and to introduce the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable embedded designs for GAs, ANNs, FCs and PSO are presented and evaluated. Quantum-based evolutionary computation, multi-objective evolutionary computation and ACO are also applied to solve hard problems related to circuit synthesis, IP assignment, and the mapping and routing of applications on Network-on-Chip infrastructures.
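As a software-level illustration of the genetic algorithms whose hardware realizations the book evaluates, here is a minimal GA on the standard OneMax toy problem, sketched in Python; the population size, generation count and mutation scheme are invented for the example:

```python
import random

random.seed(0)

# Minimal steady-state genetic algorithm on OneMax (maximize the
# number of 1-bits in a bit string) -- the software counterpart of
# the hardware GA designs discussed. All parameters are illustrative.
N_BITS, POP, GENS = 16, 20, 40

def fitness(ind):
    return sum(ind)

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    # Tournament selection (size 3) of two parents.
    parents = [max(random.sample(pop, 3), key=fitness) for _ in range(2)]
    # One-point crossover.
    cut = random.randrange(1, N_BITS)
    child = parents[0][:cut] + parents[1][cut:]
    # Single bit-flip mutation.
    i = random.randrange(N_BITS)
    child[i] ^= 1
    # Steady-state replacement: the child displaces the worst individual.
    worst = min(range(POP), key=lambda j: fitness(pop[j]))
    pop[worst] = child

best = max(pop, key=fitness)
```

The selection, crossover and mutation stages map naturally onto the pipelined hardware modules such designs implement.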
Adaptive filtering is a classical branch of digital signal processing (DSP). Industrial interest in adaptive filtering grows continuously with the increase in computer performance that allows ever more complex algorithms to be run in real-time. Change detection is a type of adaptive filtering for non-stationary signals and is also the basic tool in fault detection and diagnosis. Although these are often considered separate subjects, Adaptive Filtering and Change Detection bridges a gap in the literature with a unified treatment of the two areas, emphasizing that change detection is a natural extension of adaptive filters, and that adaptive filters are the basic building blocks of all change detectors.
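The link between filtering and change detection can be illustrated with the classical one-sided CUSUM detector, sketched here in Python; the signal, drift and threshold values are illustrative, not drawn from the book:

```python
def cusum(signal, mean0, drift, threshold):
    """One-sided CUSUM test: accumulate deviations above the nominal
    mean (less an allowed drift) and raise an alarm when the
    cumulative sum exceeds the threshold. Returns the alarm index,
    or None if no change is detected. Tuning values are illustrative."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + x - mean0 - drift)
        if s > threshold:
            return i
    return None

# A synthetic signal whose mean jumps from 0 to 2 at sample 10.
signal = [0.0] * 10 + [2.0] * 10
alarm = cusum(signal, mean0=0.0, drift=0.5, threshold=3.0)
```

The drift term trades false alarms against detection delay, the central tuning dilemma that a unified adaptive-filtering view makes explicit.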
The sixth edition has been revised and extended. The whole textbook is now clearly partitioned into basic and advanced material in order to cope with the ever-increasing field of digital image processing. In this way, you can first work your way through the basic principles of digital image processing without getting overwhelmed by the wealth of the material and then extend your studies to selected topics of interest. Each chapter now includes exercises that help you to test your understanding, train your skills, and introduce you to real-world image processing tasks. An important part of the exercises is a wealth of interactive computer exercises, which cover all topics of this textbook. These exercises are performed with the image processing software heurisko, which is included on the accompanying CD-ROM. In this way you can gain your own practical experience with almost all of the topics and algorithms covered by this book. The complete hyperlinked text of the book is now available on the accompanying CD-ROM.
This book serves as the first guideline for the integrative approach, well suited to new and young generations. Recent technology advancements in computer vision, IoT sensors, and analytics open the door to highly impactful innovations and applications as a result of effective and efficient integration of these. Such integration has brought scientists and engineers a new approach - the integrative approach. This offers far more rapid development and scalable architecting compared to the traditional hardcore developmental approach. Featuring biomedical and healthcare challenges including COVID-19, we present a collection of carefully selected cases with significant added value as a result of integrations, e.g., sensing with AI, analytics with different data sources, and comprehensive monitoring with many different sensors, while sustaining readability.
This book features selected papers presented at the 3rd International Conference on Recent Innovations in Computing (ICRIC 2020), held on 20-21 March 2020 at the Central University of Jammu, India, and organized by the university's Department of Computer Science & Information Technology. It includes the latest research in the areas of software engineering, cloud computing, computer networks and Internet technologies, artificial intelligence, information security, database and distributed computing, and digital India.
This second edition of G. Winkler's successful book on random field approaches to image analysis, related Markov Chain Monte Carlo methods, and statistical inference with emphasis on Bayesian image analysis concentrates more on general principles and models and less on details of concrete applications. Addressed to students and scientists from mathematics, statistics, physics, engineering, and computer science, it will serve as an introduction to the mathematical aspects rather than a survey. Basically no prior knowledge of mathematics or statistics is required. The second edition is in many parts completely rewritten and improved, and most figures are new. The topics of exact sampling and global optimization of likelihood functions have been added. This second edition comes with a CD-ROM by F. Friedrich, containing a host of (live) illustrations for each chapter. In an interactive environment, readers can perform their own experiments to consolidate the subject.
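As a minimal taste of the Markov Chain Monte Carlo machinery such random-field models rely on, here is a Metropolis sampler for a one-dimensional Ising chain sketched in Python; the chain length, coupling strength and sweep count are illustrative choices, not values from the book:

```python
import math
import random

random.seed(1)

# Metropolis sampling of a 1-D Ising chain -- the simplest instance
# of the MCMC machinery used for random-field image models.
N, BETA, STEPS = 20, 0.8, 2000
spins = [random.choice([-1, 1]) for _ in range(N)]

def local_energy(s, i):
    """Interaction energy of spin i with its neighbours (free ends)."""
    e = 0.0
    if i > 0:
        e -= s[i] * s[i - 1]
    if i < N - 1:
        e -= s[i] * s[i + 1]
    return e

for _ in range(STEPS):
    i = random.randrange(N)
    delta = -2.0 * local_energy(spins, i)  # energy change if spin i flips
    # Metropolis acceptance rule: always accept downhill moves,
    # accept uphill moves with probability exp(-beta * delta).
    if delta <= 0 or random.random() < math.exp(-BETA * delta):
        spins[i] = -spins[i]
```

For image models the state is a 2-D label field rather than a chain, but the accept/reject step is identical.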
The idea for this text emerged over several years as the authors participated in research projects related to analysis of data from NASA's RHESSI Small Explorer mission. The data produced over the operational lifetime of this mission inspired many investigations related to a specific science question: the when, where, and how of electron acceleration during solar flares in the stressed magnetic environment of the active Sun. A vital key to unlocking this science problem is the ability to produce high-quality images of hard X-rays produced by bremsstrahlung radiation from electrons accelerated during a solar flare. The only practical way to do this within the technological and budgetary limitations of the RHESSI era was to opt for indirect modalities in which imaging information is encoded as a set of two-dimensional spatial Fourier components. Radio astronomers had employed Fourier imaging for many years. However, unlike in radio astronomy, X-ray images produced by RHESSI had to be constructed from a very limited number of sparsely distributed and very noisy Fourier components. Further, Fourier imaging is hardly intuitive, and extensive validation of the methods was necessary to ensure that they produced images with sufficient accuracy and fidelity for scientific applications. This book summarizes the results of this development of imaging techniques specifically designed for this form of data. It covers a set of published works that span over two decades, during which various imaging methods were introduced, validated, and applied to observations. Also considering that a new Fourier-based telescope, STIX, is now entering its nominal phase on-board the ESA Solar Orbiter, it became more and more apparent to the authors that it would be a good idea to put together a compendium of these imaging methods and their applications. Hence the book you are now reading.
This book gives a start-to-finish overview of the whole Fish4Knowledge project, in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a three-year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 TB of storage, supercomputer processing, video target detection and tracking, fish species recognition and analysis, a large SQL database to record the results and an efficient retrieval mechanism. Novel user interface mechanisms were developed to provide easy access for marine ecologists, who wanted to explore the dataset. The book is a useful resource for system builders, as it gives an overview of the many new methods that were created to build the Fish4Knowledge system in a manner that also allows readers to see how all the components fit together.
This book constitutes the refereed post-conference proceedings of the First IFIP TC 5 International Conference on Computer Science Protecting Human Society Against Epidemics, ANTICOVID 2021, held virtually in June 2021. The 7 full and 4 short papers presented were carefully reviewed and selected from 20 submissions. The papers are concerned with a very large spectrum of problems, ranging from linguistics for the automatic translation of medical terms to a proposal for a worldwide system of fast reaction to emerging pandemics.
The material presented in this book originates from the first Eurographics Workshop on Graphics Hardware, held in Lisbon, Portugal, in August 1986. Leading experts in the field present the state of their ongoing graphics hardware projects and give their individual views of future developments. The final versions of the contributions are written in the light of in-depth discussions at the workshop. The book starts a series of EurographicSeminars volumes on the state-of-the-art of graphics hardware. This volume presents a unique collection of material which covers the following topics: workstation architectures, traffic simulators, hardware support for geometric modeling, and ray-tracing. It will therefore be of interest to all computer graphics professionals wishing to gain a deeper knowledge of graphics hardware.
Offering a practical alternative to the conventional methods used in signal processing applications, this book discloses numerical techniques and explains how to evaluate the frequency-domain attributes of a waveform without resorting to actual transformation through Fourier methods. This book should prove of interest to practitioners in any field who may require the analysis, association, recognition or processing of signals, and undergraduate students of signal processing.
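One familiar example of estimating a frequency-domain attribute without any Fourier transform is the zero-crossing count; the Python sketch below, using a synthetic test tone rather than an example from the book, recovers the frequency of a sinusoid this way:

```python
import math

def dominant_freq_zero_crossings(samples, fs):
    """Estimate the dominant frequency of a roughly sinusoidal signal
    from its zero-crossing count, with no Fourier transform: a pure
    tone at f Hz crosses zero about 2*f times per second."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = len(samples) / fs
    return crossings / (2.0 * duration)

# A 50 Hz sine sampled at 1 kHz for one second (synthetic test signal).
fs = 1000
x = [math.sin(2 * math.pi * 50 * n / fs) for n in range(fs)]
f_est = dominant_freq_zero_crossings(x, fs)
```

For clean narrowband signals this time-domain shortcut is accurate to within a fraction of a hertz, at a tiny fraction of the cost of a transform.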
Markov models are extremely useful as a general, widely applicable tool for many areas in statistical pattern recognition. This unique text/reference places the formalism of Markov chain and hidden Markov models at the very center of its examination of current pattern recognition systems, demonstrating how the models can be used in a range of different applications. Thoroughly revised and expanded, this new edition now includes a more detailed treatment of the EM algorithm, a description of an efficient approximate Viterbi-training procedure, a theoretical derivation of the perplexity measure, and coverage of multi-pass decoding based on "n"-best search. Supporting the discussion of the theoretical foundations of Markov modeling, special emphasis is also placed on practical algorithmic solutions. Topics and features: introduces the formal framework for Markov models, describing hidden Markov models and Markov chain models, also known as n-gram models; covers the robust handling of probability quantities, which are omnipresent when dealing with these statistical methods; presents methods for the configuration of hidden Markov models for specific application areas, explaining the estimation of the model parameters; describes important methods for efficient processing of Markov models, and the adaptation of the models to different tasks; examines algorithms for searching within the complex solution spaces that result from the joint application of Markov chain and hidden Markov models; reviews key applications of Markov models in automatic speech recognition, character and handwriting recognition, and the analysis of biological sequences. Researchers, practitioners, and graduate students of pattern recognition will all find this book to be invaluable in aiding their understanding of the application of statistical methods in this area.
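The decoding problem such a text treats at length can be illustrated by the textbook Viterbi recursion for a discrete HMM, sketched here in Python on an invented toy weather model:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence
    under a discrete HMM (textbook dynamic-programming recursion)."""
    # V[t][s] = (best probability of reaching s at time t, predecessor).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, prev = max(
                (V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p)
                for p in states
            )
            row[s] = (prob, prev)
        V.append(row)
    # Backtrack from the best final state.
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for row in reversed(V[1:]):
        state = row[state][1]
        path.append(state)
    return path[::-1]

# Toy weather HMM (all parameters invented for illustration).
states = ("Rain", "Sun")
start_p = {"Rain": 0.6, "Sun": 0.4}
trans_p = {"Rain": {"Rain": 0.7, "Sun": 0.3},
           "Sun": {"Rain": 0.4, "Sun": 0.6}}
emit_p = {"Rain": {"walk": 0.1, "umbrella": 0.9},
          "Sun": {"walk": 0.8, "umbrella": 0.2}}
path = viterbi(("umbrella", "umbrella", "walk"),
               states, start_p, trans_p, emit_p)
```

Real recognizers run this same recursion in log-space over far larger state spaces, which is where the robust probability handling the book emphasizes becomes essential.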
This book covers virtually all aspects of image formation in medical imaging, including systems based on ionizing radiation (x-rays, gamma rays) and non-ionizing techniques (ultrasound, optical, thermal, magnetic resonance, and magnetic particle imaging) alike. In addition, it discusses the development and application of computer-aided detection and diagnosis (CAD) systems in medical imaging. There is also a special track on computer-aided diagnosis of COVID-19 from CT and X-ray images. Given its coverage, the book provides both a forum and valuable resource for researchers involved in image formation, experimental methods, image performance, segmentation, pattern recognition, feature extraction, classifier design, machine learning / deep learning, radiomics, CAD workstation design, human-computer interaction, databases, and performance evaluation.
Computational geometry as an area of research in its own right emerged in the early seventies of this century. Right from the beginning, it was obvious that strong connections of various kinds exist to questions studied in the considerably older field of combinatorial geometry. For example, the combinatorial structure of a geometric problem usually decides which algorithmic method solves the problem most efficiently. Furthermore, the analysis of an algorithm often requires a great deal of combinatorial knowledge. As it turns out, however, the connection between the two research areas commonly referred to as computational geometry and combinatorial geometry is not as lop-sided as it appears. Indeed, the interest in computational issues in geometry gives a new and constructive direction to the combinatorial study of geometry. It is the intention of this book to demonstrate that computational and combinatorial investigations in geometry are bound to profit from each other. To reach this goal, I designed this book to consist of three parts: a combinatorial part, a computational part, and one that presents applications of the results of the first two parts. The choice of the topics covered in this book was guided by my attempt to describe the most fundamental algorithms in computational geometry that have an interesting combinatorial structure. In this early stage geometric transforms played an important role as they reveal connections between seemingly unrelated problems and thus help to structure the field.
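A classic instance of the interplay the author describes - an algorithm whose efficiency rests on a combinatorial predicate - is the convex hull via Andrew's monotone chain, where the orientation test does all the combinatorial work. A Python sketch with made-up input points:

```python
def cross(o, a, b):
    """Orientation test: positive if the turn o -> a -> b is a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: O(n log n) convex hull in CCW order.
    The while-loops discard points that fail the orientation test,
    the purely combinatorial predicate on which correctness rests."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

hull = convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)])
```

Each point is pushed and popped at most once, so the scan after sorting is linear - a bound that follows from counting, not from geometry.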
A long long time ago, echoing philosophical and aesthetic principles that existed since antiquity, William of Ockham enounced the principle of parsimony, better known today as Ockham's razor: "Entities should not be multiplied without necessity." This principle enabled scientists to select the "best" physical laws and theories to explain the workings of the Universe and continued to guide scientific research, leading to beautiful results like the minimal description length approach to statistical inference and the related Kolmogorov complexity approach to pattern recognition. However, notions of complexity and description length are subjective concepts and depend on the language "spoken" when presenting ideas and results. The field of sparse representations, that recently underwent a Big Bang like expansion, explicitly deals with the Yin Yang interplay between the parsimony of descriptions and the "language" or "dictionary" used in them, and it became an extremely exciting area of investigation. It already yielded a rich crop of mathematically pleasing, deep and beautiful results that quickly translated into a wealth of practical engineering applications. You are holding in your hands the first guide book to Sparseland, and I am sure you'll find in it both familiar and new landscapes to see and admire, as well as excellent pointers that will help you find further valuable treasures. Enjoy the journey to Sparseland! Haifa, Israel, December 2009, Alfred M. Bruckstein. This book was originally written to serve as the material for an advanced one-semester (fourteen 2-hour lectures) graduate course for engineering students at the Technion, Israel.
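One of the simplest algorithms from Sparseland is greedy matching pursuit: describe a signal with as few dictionary atoms as possible. The Python sketch below uses a tiny invented dictionary and is only meant to convey the parsimony idea, not the book's full treatment:

```python
import math

def matching_pursuit(signal, atoms, n_iter):
    """Greedy matching pursuit: repeatedly pick the unit-norm atom
    most correlated with the residual and subtract its contribution,
    yielding a sparse description of the signal over the dictionary."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # Correlate the current residual with every atom.
        scores = [sum(r * a for r, a in zip(residual, atom))
                  for atom in atoms]
        k = max(range(len(atoms)), key=lambda j: abs(scores[j]))
        coeffs[k] += scores[k]
        residual = [r - scores[k] * a
                    for r, a in zip(residual, atoms[k])]
    return coeffs, residual

# A tiny overcomplete dictionary: the standard basis plus one
# diagonal unit atom (all values invented for illustration).
s = 1 / math.sqrt(2)
atoms = [(1.0, 0.0), (0.0, 1.0), (s, s)]
signal = (3.0, 3.0)  # exactly 3*sqrt(2) times the diagonal atom
coeffs, residual = matching_pursuit(signal, atoms, n_iter=1)
```

A single diagonal atom explains the signal exactly, whereas the standard basis alone would need two coefficients: the dictionary determines how short the description can be.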
This book covers a large set of methods in the field of Artificial Intelligence - Deep Learning applied to real-world problems. The fundamentals of the Deep Learning approach and different types of Deep Neural Networks (DNNs) are first summarized in this book, which offers a comprehensive preamble for the further problem-oriented chapters. The most interesting and open problems of machine learning in the framework of Deep Learning are discussed in this book and solutions are proposed. This book illustrates how to implement zero-shot learning with Deep Neural Network classifiers, which otherwise require a large amount of training data. The lack of annotated training data naturally pushes researchers towards low-supervision algorithms. Metric learning is a long-standing research topic, but in the framework of Deep Learning approaches it gains freshness and originality. Fine-grained classification with low inter-class variability is a difficult problem for any classification task. This book presents how it is solved by using different modalities and attention mechanisms in 3D convolutional networks. Researchers focused on Machine Learning, Deep Learning, Multimedia and Computer Vision will want to buy this book. Advanced level students studying computer science within these topic areas will also find this book useful.
This volume gathers papers presented at the Workshop on Computational Diffusion MRI (CDMRI 2019), held under the auspices of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), which took place in Shenzhen, China on October 17, 2019. This book presents the latest advances in the rapidly expanding field of diffusion MRI. It shares new perspectives on the latest research challenges for those currently working in the field, but also offers a valuable starting point for anyone interested in learning about computational techniques in diffusion MRI. The book includes rigorous mathematical derivations, a wealth of rich, full-colour visualisations and extensive clinically relevant results. As such, it will be of interest to researchers and practitioners in the fields of computer science, MRI physics and applied mathematics. Readers will find contributions covering a broad range of topics, from the mathematical foundations of the diffusion process and signal generation, to new computational methods and estimation techniques for the in vivo recovery of microstructural and connectivity features, as well as diffusion-relaxometry and frontline applications in research and clinical practice. This edition includes invited works from high-profile researchers with a specific focus on three new and important topics that are gaining momentum within the diffusion MRI community, including diffusion MRI signal acquisition and processing strategies, machine learning for diffusion MRI, and diffusion MRI outside the brain and clinical applications.
This book provides the tools to enhance the precision, automation and intelligence of modern CNC machining systems. Based on a detailed description of the technical foundations of the machining monitoring system, it develops the general idea of the design and implementation of smart machining monitoring systems, focusing on the tool condition monitoring (TCM) system. The book is structured in two parts. Part I discusses the fundamentals of machining systems, including modeling of machining processes, mathematical basics of condition monitoring and the framework of TCM from a machine learning perspective. Part II is then focused on the applications of these theories. It explains sensory signal processing and feature extraction, as well as the cyber-physical system of the smart machining system. Numerous illustrations and diagrams explain the ideas presented in a clear way, making this book a valuable reference for researchers, graduate students and engineers alike.
MPEG-4 is the multimedia standard for combining interactivity, natural and synthetic digital video, audio and computer graphics. Typical applications are: internet, video conferencing, mobile videophones, multimedia cooperative work, teleteaching and games. With MPEG-4 the next step from block-based video (ISO/IEC MPEG-1, MPEG-2, CCITT H.261, ITU-T H.263) to arbitrarily-shaped visual objects is taken. This significant step demands a new methodology for system analysis and design to meet the considerably higher flexibility of MPEG-4. Motion estimation is a central part of the MPEG-1/2/4 and H.261/H.263 video compression standards and has attracted much attention in research and industry, for the following reasons: it is computationally the most demanding algorithm of a video encoder (about 60-80% of the total computation time), it has a high impact on the visual quality of a video encoder, and it is not standardized, thus being open to competition. Algorithms, Complexity Analysis, and VLSI Architectures for MPEG-4 Motion Estimation covers in detail every single step in the design of an MPEG-1/2/4 or H.261/H.263 compliant video encoder:
- Fast motion estimation algorithms
- Complexity analysis tools
- Detailed complexity analysis of a software implementation of MPEG-4 video
- Complexity and visual quality analysis of fast motion estimation algorithms within MPEG-4
- Design space on motion estimation VLSI architectures
- Detailed VLSI design examples of (1) a high-throughput and (2) a low-power MPEG-4 motion estimator.
Algorithms, Complexity Analysis and VLSI Architectures for MPEG-4 Motion Estimation is an important introduction to numerous algorithmic, architectural and system design aspects of the multimedia standard MPEG-4. As such, all researchers, students and practitioners working in image processing, video coding or system and VLSI design will find this book of interest.
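The computational burden the blurb mentions is easy to see in the exhaustive block-matching search at the heart of these encoders; here is a minimal full-search SAD sketch in Python on an invented 6x6 frame:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(a - b)
               for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def full_search(cur_block, ref_frame, x, y, search_range):
    """Exhaustive block matching: slide the current block over a
    window of the reference frame and return the motion vector
    (dx, dy) with minimal SAD plus its cost -- the computational
    core of MPEG-style motion estimation."""
    bh, bw = len(cur_block), len(cur_block[0])
    best = (None, float("inf"))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            yy, xx = y + dy, x + dx
            if (yy < 0 or xx < 0 or yy + bh > len(ref_frame)
                    or xx + bw > len(ref_frame[0])):
                continue  # candidate block falls outside the frame
            cand = [row[xx:xx + bw] for row in ref_frame[yy:yy + bh]]
            cost = sad(cur_block, cand)
            if cost < best[1]:
                best = ((dx, dy), cost)
    return best

# Reference frame with a bright 2x2 patch at (x=3, y=2); the current
# block saw that patch at (x=1, y=1), i.e. it moved by (+2, +1).
ref = [[0] * 6 for _ in range(6)]
ref[2][3] = ref[2][4] = ref[3][3] = ref[3][4] = 9
cur_block = [[9, 9], [9, 9]]
mv, cost = full_search(cur_block, ref, x=1, y=1, search_range=2)
```

Every candidate displacement requires a full SAD evaluation, which is exactly why fast search strategies and dedicated VLSI datapaths dominate the design space the book analyses.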