- Centers artificial intelligence as a pathway for media studies students, scholars and practitioners to navigate the broad terrain of software practice.
- Examines the impact of software on everyday life as it traces the industrial development and migrations of AI and the connectedness of play to broader cultural, social and economic forces.
- Connects history and theory to practice through a number of illustrative, culturally relevant media objects and case studies that will be familiar and engaging to many students.
- With its focus on applied artificial intelligence in popular and public culture, it bridges the fields of software studies, science and technology studies, and video game studies.
Image technology is a continually evolving field with applications such as image processing and analysis, biometrics, pattern recognition, object tracking, remote sensing, medical diagnosis and multimedia. Interest has grown significantly in image morphology, neural networks, full-color image processing, image data compression, image recognition, and knowledge-based image analysis systems.
This indispensable text introduces the foundations of three-dimensional computer vision and describes recent contributions to the field. Fully revised and updated, this much-anticipated new edition reviews a range of triangulation-based methods, including linear and bundle adjustment based approaches to scene reconstruction and camera calibration, stereo vision, point cloud segmentation, and pose estimation of rigid, articulated, and flexible objects. Also covered are intensity-based techniques that evaluate the pixel grey values in the image to infer three-dimensional scene structure, and point spread function based approaches that exploit the effect of the optical system. The text shows how methods which integrate these concepts are able to increase reconstruction accuracy and robustness, describing applications in industrial quality inspection and metrology, human-robot interaction, and remote sensing.
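The triangulation-based reconstruction mentioned above can be illustrated with a minimal sketch of linear (DLT) triangulation from two calibrated views. This is a generic textbook building block, not the book's own algorithms; the toy camera matrices and 3D point below are invented for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : 2D observations of the same point in each image.
    Returns the 3D point minimizing the algebraic error.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # null vector of A (homogeneous solution)
    return X[:3] / X[3]     # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity pose, and a unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free observations the linear solution recovers the point exactly; bundle adjustment, as the book describes, refines such estimates under noise.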
In this book, three main notions are used in the editors' search for improvements in various areas of computer graphics: Artificial Intelligence, Viewpoint Complexity and Human Intelligence. Several artificial intelligence techniques are used in the intelligent scene modelers presented, mainly declarative ones. Among them, the most frequently used techniques are expert systems, constraint satisfaction problem resolution and machine learning. The notion of viewpoint complexity, that is, the complexity of a scene seen from a given viewpoint, is used in proposed improvements for many computer graphics problems such as scene understanding, virtual world exploration, image-based modeling and rendering, ray tracing and radiosity. Very often, viewpoint complexity is used in conjunction with artificial intelligence techniques such as heuristic search and problem resolution. The notions of artificial intelligence and viewpoint complexity may help to automatically solve a large number of computer graphics problems. However, there are special situations where a particular solution must be found for each case. Here, human intelligence has to replace, or be combined with, artificial intelligence. Such cases, and proposed solutions, are also presented in this book.
This book presents essential algorithms for the image processing pipeline of photo-printers and accompanying software tools, offering an exposition of multiple image enhancement algorithms, smart aspect-ratio changing techniques for borderless printing and approaches for non-standard printing modes. All the techniques described are content-adaptive and operate in an automatic mode thanks to machine learning reasoning or ingenious heuristics. The first part includes algorithms, for example red-eye correction and compression artefact reduction, that can be applied in any photo processing application, while the second part focuses specifically on printing devices, e.g. eco-friendly and anaglyph printing. The majority of the techniques presented have a low computational complexity because they were initially designed for integration into systems-on-chip. The book reflects the authors' practical experience in algorithm development for industrial R&D.
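To make the aspect-ratio changing step concrete, here is a deliberately naive sketch: a plain center crop to a target print ratio. Unlike the content-adaptive techniques the book describes, this crop ignores image content entirely; the function name and dimensions are illustrative.

```python
import numpy as np

def center_crop_to_aspect(img, target_w_over_h):
    """Crop an image to a target width/height ratio, keeping the center.

    A non-content-adaptive stand-in for smart aspect-ratio changing:
    it simply trims the longer dimension symmetrically.
    """
    h, w = img.shape[:2]
    if w / h > target_w_over_h:                  # too wide: trim columns
        new_w = int(round(h * target_w_over_h))
        x0 = (w - new_w) // 2
        return img[:, x0:x0 + new_w]
    new_h = int(round(w / target_w_over_h))      # too tall: trim rows
    y0 = (h - new_h) // 2
    return img[y0:y0 + new_h, :]

photo = np.zeros((600, 800))                     # 4:3 source frame
print_area = center_crop_to_aspect(photo, 3 / 2) # 3:2 borderless print
```

A content-adaptive version would shift or warp the crop window to preserve salient regions rather than always keeping the geometric center.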
Principles of Visual Information Retrieval introduces the basic concepts and techniques in VIR and develops a foundation that can be used for further research and study. Divided into two parts, the first part describes the fundamental principles. A chapter is devoted to each of the main features of VIR, such as colour-, texture- and shape-based search. There is coverage of search techniques for time-based image sequences or videos, and an overview of how to combine all the basic features described and integrate context into the search process. The second part looks at advanced topics such as multimedia query specification, visual learning and semantics, offering state-of-the-art coverage that is not available in any other book on the market. This book will be essential reading for researchers in VIR, and for final-year undergraduate and postgraduate students on courses such as Multimedia Information Retrieval, Multimedia Databases, Computer Vision and Pattern Recognition.
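Colour-based search of the kind described above is often built on comparing colour histograms. The sketch below uses histogram intersection, a classic similarity measure in visual retrieval; the function names and toy images are illustrative, not the book's API.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel intensity histogram, normalized to sum to 1."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(img.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
query = rng.integers(0, 256, (32, 32, 3))       # toy RGB "query" image
same = query.copy()                             # an exact duplicate
other = rng.integers(0, 256, (32, 32, 3))       # an unrelated image

s_same = histogram_intersection(color_histogram(query), color_histogram(same))
s_other = histogram_intersection(color_histogram(query), color_histogram(other))
```

Because histograms discard spatial layout, real systems combine them with texture and shape features, as the book's later chapters discuss.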
Advancements in digital sensor technology, digital image analysis techniques, as well as computer software and hardware have brought together the fields of computer vision and photogrammetry, which are now converging towards sharing, to a great extent, objectives and algorithms. The potential for mutual benefits by the close collaboration and interaction of these two disciplines is great, as photogrammetric know-how can be aided by the most recent image analysis developments in computer vision, while modern quantitative photogrammetric approaches can support computer vision activities. Devising methodologies for automating the extraction of man-made objects (e.g. buildings, roads) from digital aerial or satellite imagery is an application where this cooperation and mutual support is already reaping benefits. The valuable spatial information collected using these interdisciplinary techniques is of improved qualitative and quantitative accuracy. This book offers a comprehensive selection of high-quality and in-depth contributions from world-wide leading research institutions, treating theoretical as well as implementational issues, and representing the state-of-the-art on this subject among the photogrammetric and computer vision communities.
Soft Computing Approach to Pattern Classification and Object Recognition establishes an innovative, unified approach to supervised pattern classification and model-based occluded object recognition. The book also surveys various soft computing tools: fuzzy relational calculus (FRC), genetic algorithms (GA) and the multilayer perceptron (MLP), to provide a strong foundation for the reader. The supervised approach to pattern classification and model-based approach to occluded object recognition are treated in one framework, one based on either a conventional interpretation or a new interpretation of multidimensional fuzzy implication (MFI) and a novel notion of fuzzy pattern vector (FPV). By combining practice and theory, a completely independent design methodology was developed in conjunction with this supervised approach on a unified framework, and then tested thoroughly against both synthetic and real-life data. In the field of soft computing, such an application-oriented design study is unique in nature. The monograph essentially mimics the cognitive process of human decision making, and carries a message of perceptual integrity in representational diversity. Soft Computing Approach to Pattern Classification and Object Recognition is intended for researchers in the area of pattern classification and computer vision. Other academics and practitioners will also find the book valuable.
Due to the rapid increase in readily available computing power, a corresponding increase in the complexity of problems being tackled has occurred in the field of systems as a whole. A plethora of new methods which can be used on these problems has also arisen, with a constant desire to deal with more and more difficult applications. Unfortunately, by increasing the accuracy of the models employed, along with the use of appropriate algorithms with related features, the resultant necessary computations can often be of very high dimension. This brings with it a whole new breed of problem which has come to be known as "The Curse of Dimensionality". The expression can in fact be traced back to Richard Bellman in the 1960s. However, it is only in the last few years that it has taken on widespread practical significance, although the term dimensionality does not have a unique precise meaning and is used in slightly different ways in the context of algorithmic and stochastic complexity theory or in everyday engineering. In principle, the dimensionality of a problem depends on three factors: the engineering system (subject), the concrete task to be solved and the available resources. A system is of high dimension if it contains a lot of elements/variables and/or the relationship/connection between the elements/variables is complicated.
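One concrete face of the curse of dimensionality is distance concentration: in high dimensions, the nearest and farthest random points become almost equidistant from any query point. This small numerical experiment (sample sizes and dimensions chosen arbitrarily for illustration) makes that visible.

```python
import numpy as np

rng = np.random.default_rng(42)

def relative_contrast(dim, n_points=2000):
    """(max - min) / min of distances from the origin to random points
    drawn uniformly from the unit hypercube [0, 1]^dim.

    As dim grows, this contrast shrinks toward zero: distances
    concentrate around their mean, so "nearest neighbour" loses
    discriminative power.
    """
    pts = rng.random((n_points, dim))
    d = np.linalg.norm(pts, axis=1)
    return (d.max() - d.min()) / d.min()

low = relative_contrast(2)       # large contrast in 2 dimensions
high = relative_contrast(1000)   # tiny contrast in 1000 dimensions
```

This is one reason why, as the text notes, algorithms whose cost or statistical behaviour depends on dimension degrade sharply as model accuracy (and hence dimension) increases.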
Advances in Quantum Chemistry presents surveys of current
developments in this rapidly developing field that falls between
the historically established areas of mathematics, physics,
chemistry, and biology. With invited reviews written by leading
international researchers, each presenting new results, it provides
a single vehicle for following progress in this interdisciplinary
area.
Security and privacy are paramount concerns in information processing systems, which are vital to business, government and military operations and, indeed, society itself. Meanwhile, the expansion of the Internet and its convergence with telecommunication networks are providing incredible connectivity, myriad applications and, of course, new threats. Data and Applications Security XVII: Status and Prospects
describes original research results, practical experiences and
innovative ideas, all focused on maintaining security and privacy
in information processing systems and applications that pervade
cyberspace. The areas of coverage include: This book is the seventeenth volume in the series produced by the International Federation for Information Processing (IFIP) Working Group 11.3 on Data and Applications Security. It presents a selection of twenty-six updated and edited papers from the Seventeenth Annual IFIP TC11/WG11.3 Working Conference on Data and Applications Security, held at Estes Park, Colorado, USA in August 2003, together with a report on the conference keynote speech and a summary of the conference panel. The contents demonstrate the richness and vitality of the discipline, and suggest directions for future research in data and applications security. Data and Applications Security XVII: Status and Prospects is an invaluable resource for information assurance researchers, faculty members and graduate students, as well as for individuals engaged in research and development in the information technology sector.
The Distinguished Dissertation Series is published on behalf of the Conference of Professors and Heads of Computing and the British Computer Society, who annually select the best British PhD dissertations in computer science for publication. The dissertations are selected on behalf of the CPHC by a panel of eight academics. Each dissertation chosen makes a noteworthy contribution to the subject and reaches a high standard of exposition, placing all results clearly in the context of computer science as a whole. In this way computer scientists with significantly different interests are able to grasp the essentials - or even find a means of entry - to an unfamiliar research topic. This book investigates how information contained in multiple, overlapping images of a scene may be combined to produce images of superior quality. This offers possibilities such as noise reduction, extended field of view, blur removal, increased spatial resolution and improved dynamic range. Potential applications cover fields as diverse as forensic video restoration, remote sensing, video compression and digital video editing. The book covers two aspects that have attracted particular attention in recent years: image mosaicing, whereby multiple images are aligned to produce a large composite; and super-resolution, which permits restoration at an increased resolution of poor-quality video sequences by modelling and removing imaging degradations including noise, blur and spatial sampling. It contains comprehensive coverage and analysis of existing techniques, and describes in detail novel, powerful and automatic algorithms (based on a robust, statistical framework) for applying mosaicing and super-resolution. The algorithms may be implemented directly from the descriptions given here. A particular feature of the techniques is that it is not necessary to know the camera parameters (such as position and focal length) in order to apply them.
Throughout the book, examples are given on real image sequences, covering a variety of applications including: the separation of latent marks in forensic images; the automatic creation of 360° panoramic mosaics; and super-resolution restoration of various scenes, text, and faces in low-quality video.
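A standard building block of the image alignment underlying mosaicing is phase correlation, which estimates the translation between two overlapping images from the phase of their cross-power spectrum. The sketch below is a generic illustration of that idea, not the robust statistical algorithms the book describes; the synthetic image and shift are invented for the example.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the circular integer shift taking image `a` onto `b`.

    For two translated images, the normalized cross-power spectrum is
    a pure phase ramp whose inverse FFT is a delta at the translation.
    """
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return int(dy), int(dx)

rng = np.random.default_rng(1)
img = rng.random((64, 64))                              # toy reference frame
shifted = np.roll(img, shift=(5, 12), axis=(0, 1))      # translated copy
dy, dx = phase_correlation_shift(img, shifted)
```

Real mosaicing must additionally handle rotation, scale, and non-overlapping borders, which is where the robust estimation frameworks discussed in the book come in.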
Image segmentation consists of dividing an image domain into disjoint regions according to a characterization of the image within or in-between the regions. Therefore, segmenting an image is to divide its domain into relevant components. The efficient solution of the key problems in image segmentation promises to enable a rich array of useful applications. The current major application areas include robotics, medical image analysis, remote sensing, scene understanding, and image database retrieval. The subject of this book is image segmentation by variational methods with a focus on formulations which use closed regular plane curves to define the segmentation regions and on a level set implementation of the corresponding active curve evolution algorithms. Each method is developed from an objective functional which embeds constraints on both the image domain partition of the segmentation and the image data within or in-between the partition regions. The necessary conditions to optimize the objective functional are then derived and solved numerically. The book covers, within the active curve and level set formalism, the basic two-region segmentation methods, multiregion extensions, region merging, image modeling, and motion based segmentation. To treat various important classes of images, modeling investigates several parametric distributions such as the Gaussian, Gamma, Weibull, and Wishart. It also investigates non-parametric models. In motion segmentation, both optical flow and the movement of real three-dimensional objects are studied.
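The data term of the piecewise-constant two-region model described above can be illustrated without level sets: each pixel is penalized by its squared deviation from the mean of its region. The sketch below searches over thresholds exhaustively instead of evolving a curve, and omits the curve-length regularizer, so it is a simplified stand-in for the active-curve methods in the book; all names and the synthetic image are illustrative.

```python
import numpy as np

def two_region_energy(img, mask):
    """Data term of a piecewise-constant two-region model:
    squared deviation of each pixel from its region's mean."""
    inside, outside = img[mask], img[~mask]
    e = 0.0
    if inside.size:
        e += ((inside - inside.mean()) ** 2).sum()
    if outside.size:
        e += ((outside - outside.mean()) ** 2).sum()
    return e

def best_threshold(img):
    """Exhaustive search for the threshold whose two-region
    partition minimizes the energy (no length term, so the
    optimum is a threshold rather than an evolved curve)."""
    levels = np.unique(img)
    return min(levels[:-1], key=lambda t: two_region_energy(img, img <= t))

# Synthetic image: dark background (~0.1) with a bright 16x16 square (~0.9).
rng = np.random.default_rng(7)
img = 0.1 + 0.02 * rng.standard_normal((32, 32))
img[8:24, 8:24] += 0.8
t = best_threshold(img)
seg = img <= t           # background region of the optimal partition
```

In the book's formulations the same energy is minimized by deriving Euler-Lagrange conditions and evolving a level-set curve, which generalizes to regions no threshold can describe.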
Basics of Game Design is for anyone wanting to become a professional game designer. Focusing on creating the game mechanics for data-driven games, it covers role-playing, real-time strategy, first-person shooter, simulation, and other games. Written by a 25-year veteran of the game industry, the guide offers detailed explanations of how to design the data sets used to resolve game play for moving, combat, solving puzzles, interacting with NPCs, managing inventory, and much more. Advice on developing stories for games, building maps and levels, and designing the graphical user interface is also included.
Signal processing applications have burgeoned in the past decade.
During the same time, signal processing techniques have matured
rapidly and now include tools from many areas of mathematics,
computer science, physics, and engineering. This trend will
continue as many new signal processing applications are opening up
in consumer products and communications systems.
This accessible yet exhaustive book will help to improve the modeling of attention and inspire innovation in industry. It introduces the study of attention and focuses on attention modeling, addressing themes such as saliency models, signal detection and different types of signals, as well as real-life applications. The book is truly multi-disciplinary, collating work from psychology, neuroscience, engineering and computer science, amongst other disciplines. What is attention? We all pay attention every single moment of our lives. Attention is how the brain selects and prioritizes information. The study of attention has become incredibly complex and divided: this timely volume assists the reader by drawing together work on the computational aspects of attention from across the disciplines. Those working in the field as engineers will benefit from this book's introduction to the psychological and biological approaches to attention, and neuroscientists can learn about engineering work on attention. The work features practical reviews and chapters that are quick and easy to read, as well as chapters which present deeper, more complex knowledge. Everyone whose work relates to human perception, or to image, audio and video processing, will find something of value in this book, from students to researchers and those in industry.
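As a toy illustration of what a saliency model computes, the sketch below scores each pixel by its absolute contrast with the global image mean. This is a drastically simplified global-contrast model invented for illustration; real saliency models, as the book surveys, use multi-scale center-surround structure across several feature channels.

```python
import numpy as np

def global_contrast_saliency(img):
    """Toy saliency map: each pixel's absolute contrast with the
    mean of the whole image. A small bright region on a uniform
    background receives the highest saliency."""
    return np.abs(img - img.mean())

img = np.full((40, 40), 0.2)     # uniform dark background
img[18:22, 18:22] = 1.0          # small bright "target"
sal = global_contrast_saliency(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)   # most salient pixel
```

Even this crude score captures the pop-out effect: the rare bright patch dominates the map because it deviates most from the scene statistics.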
This book provides detailed practical guidelines on how to develop an efficient pathological brain detection system, reflecting the latest advances in the computer-aided diagnosis of structural magnetic resonance brain images. MATLAB code is provided for most of the functions described. In addition, the book equips readers to easily develop the pathological brain detection system further on their own and to apply the technologies to other research fields, such as Alzheimer's detection and multiple sclerosis detection.
Quick sketching is the best technique you can use to stay finely tuned and to keep those creative juices flowing. To keep their sense of observation heightened and their hand-eye coordination sharp, animators need to draw and sketch constantly. Quick Sketching with Ron Husband offers instruction in quick sketching and all its techniques. From observing positive and negative space and learning to recognize simple shapes in complex forms, to action analysis and using the line of action, this Disney legend teaches you how to sketch using all these components, and how to do it in a matter of seconds. On top of instruction and advice, you'll also see Ron's portfolio of select art representing his growth as an artist throughout the years. Watch his drawings as he grows from a young, talented artist to a true Disney animator. Follow him as he goes around the world and sketches flamenco dancers, football players, bakers, joggers, lions, tigers, anyone and anything. As if instruction and inspiration in one place weren't enough, you'll find a sketchbook included, so you can flip between Ron's techniques and work on perfecting basic shapes. Or take your book on the road: read Ron's advice, sketch away, and capture the world around you.
In Computer Graphics, the use of intelligent techniques started more recently than in other research areas. However, during the last two decades, the use of intelligent Computer Graphics techniques has grown year after year, and more and more interesting techniques are being presented in this area. The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community growing year after year. This volume is a kind of continuation of the previously published Springer volumes "Artificial Intelligence Techniques for Computer Graphics" (2008), "Intelligent Computer Graphics 2009" (2009), "Intelligent Computer Graphics 2010" (2010) and "Intelligent Computer Graphics 2011" (2011). Usually, this kind of volume contains selected extended papers from the corresponding year's 3IA Conference. However, the current volume is made up of directly reviewed and selected papers, submitted for publication in the volume "Intelligent Computer Graphics 2012". This year's papers are particularly exciting and concern areas like plant modelling, text-to-scene systems, information visualization, computer-aided geometric design, artificial life, computer games, realistic rendering and many other very important themes.
Multimedia Signals and Systems is an essential text for
professional and academic researchers and students in the field of
multimedia.
The area of adaptive systems, which encompasses recursive identification, adaptive control, filtering, and signal processing, has been one of the most active areas of the past decade. Since adaptive controllers are fundamentally nonlinear controllers which are applied to nominally linear, possibly stochastic and time-varying systems, their theoretical analysis is usually very difficult. Nevertheless, over the past decade much fundamental progress has been made on some key questions concerning their stability, convergence, performance, and robustness. Moreover, adaptive controllers have been successfully employed in numerous practical applications, and have even entered the marketplace.
A smart camera is an integrated machine vision system which, in addition to image capture circuitry, includes a processor that can extract information from images without the need for an external processing unit, and interface devices used to make the results available to other devices. This book provides content on smart cameras for an interdisciplinary audience of professionals and students in embedded systems, image processing, and camera technology. It serves as a self-contained, single-source reference for material otherwise found only in sources such as conference proceedings, journal articles, or product data sheets. Coverage includes the 50-year chronology of smart cameras, their technical evolution, the state of the art, and numerous applications, such as surveillance and monitoring, robotics, and transportation.
This volume offers a valuable starting point for anyone interested in learning computational diffusion MRI and mathematical methods for brain connectivity, while also sharing new perspectives and insights on the latest research challenges for those currently working in the field. Over the last decade, interest in diffusion MRI has virtually exploded. The technique provides unique insights into the microstructure of living tissue and enables in-vivo connectivity mapping of the brain. Computational techniques are key to the continued success and development of diffusion MRI and to its widespread transfer into the clinic, while new processing methods are essential to addressing issues at each stage of the diffusion MRI pipeline: acquisition, reconstruction, modeling and model fitting, image processing, fiber tracking, connectivity mapping, visualization, group studies and inference. These papers from the 2016 MICCAI Workshop "Computational Diffusion MRI" - which was intended to provide a snapshot of the latest developments within the highly active and growing field of diffusion MR - cover a wide range of topics, from fundamental theoretical work on mathematical modeling, to the development and evaluation of robust algorithms and applications in neuroscientific studies and clinical practice. The contributions include rigorous mathematical derivations, a wealth of rich, full-color visualizations, and biologically or clinically relevant results. As such, they will be of interest to researchers and practitioners in the fields of computer science, MR physics, and applied mathematics. |