Covers advances in the field of computer techniques and algorithms in digital signal processing.
This book deals with two fundamental issues in the semiotics of the image. The first is the relationship between image and observer: how does one look at an image? To answer this question, the book sets out to transpose the theory of enunciation formulated in linguistics over to the visual field. It also aims to clarify the gains made in contemporary visual semiotics relative to the semiology of Roland Barthes and Émile Benveniste. The second issue addressed is the relation between the forces, forms and materiality of images. How do different physical media (pictorial, photographic and digital) influence visual forms? How does materiality affect the generativity of forms? On the forces within images, the book addresses the philosophical thought of Gilles Deleuze and René Thom as well as the experiment of Aby Warburg's Mnemosyne Atlas. The theories discussed in the book are tested on a variety of corpora for analysis, including both paintings and photographs, taken from traditional as well as contemporary sources in a variety of social sectors (arts and sciences). Finally, semiotic methodology is contrasted with the computational analysis of large collections of images (Big Data), such as the "Media Visualization" analyses proposed by Lev Manovich and Cultural Analytics in the field of Computer Science, in order to evaluate the impact of automatic analysis of visual forms on Digital Art History and, more generally, on the image sciences.
Advancements in digital sensor technology, digital image analysis techniques, as well as computer software and hardware have brought together the fields of computer vision and photogrammetry, which are now converging towards sharing, to a great extent, objectives and algorithms. The potential for mutual benefits by the close collaboration and interaction of these two disciplines is great, as photogrammetric know-how can be aided by the most recent image analysis developments in computer vision, while modern quantitative photogrammetric approaches can support computer vision activities. Devising methodologies for automating the extraction of man-made objects (e.g. buildings, roads) from digital aerial or satellite imagery is an application where this cooperation and mutual support is already reaping benefits. The valuable spatial information collected using these interdisciplinary techniques is of improved qualitative and quantitative accuracy. This book offers a comprehensive selection of high-quality and in-depth contributions from world-wide leading research institutions, treating theoretical as well as implementational issues, and representing the state-of-the-art on this subject among the photogrammetric and computer vision communities.
Due to the rapid increase in readily available computing power, a corresponding increase in the complexity of problems being tackled has occurred in the field of systems as a whole. A plethora of new methods which can be used on these problems has also arisen, with a constant desire to deal with more and more difficult applications. Unfortunately, by increasing the accuracy of the models employed, along with the use of appropriate algorithms with related features, the resultant necessary computations can often be of very high dimension. This brings with it a whole new breed of problem which has come to be known as "The Curse of Dimensionality". The expression "Curse of Dimensionality" can in fact be traced back to Richard Bellman in the 1960s. However, it is only in the last few years that it has taken on widespread practical significance, although the term dimensionality does not have a unique precise meaning and is used in slightly different ways in the context of algorithmic and stochastic complexity theory and in everyday engineering. In principle, the dimensionality of a problem depends on three factors: the engineering system (subject), the concrete task to be solved, and the available resources. A system is of high dimension if it contains a lot of elements/variables and/or the relationship/connection between the elements/variables is complicated.
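The exponential blow-up behind the "Curse of Dimensionality" is easy to make concrete. The following minimal sketch (my illustration, not from the book) counts the samples a full grid over a d-dimensional unit cube requires when each axis is discretised into only 10 points:

```python
# Illustration of the "Curse of Dimensionality": discretising each axis of a
# d-dimensional unit cube into just 10 points needs 10**d samples in total,
# a number that quickly becomes computationally intractable.

def grid_points(dim: int, points_per_axis: int = 10) -> int:
    """Number of samples in a full grid over `dim` dimensions."""
    return points_per_axis ** dim

for d in (1, 3, 6, 10):
    print(f"{d:2d} dimensions -> {grid_points(d):,} grid points")
# 10 dimensions already require 10,000,000,000 samples.
```

The same growth is why high-dimensional models force the approximation and decomposition techniques the book discusses.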
This book presents essential algorithms for the image processing pipeline of photo-printers and accompanying software tools, offering an exposition of multiple image enhancement algorithms, smart aspect-ratio changing techniques for borderless printing and approaches for non-standard printing modes. All the techniques described are content-adaptive and operate in an automatic mode thanks to machine learning reasoning or ingenious heuristics. The first part includes algorithms, for example, red-eye correction and compression artefacts reduction, that can be applied in any photo processing application, while the second part focuses specifically on printing devices, e.g. eco-friendly and anaglyph printing. The majority of the techniques presented have a low computational complexity because they were initially designed for integration into system-on-chip devices. The book reflects the authors' practical experience in algorithm development for industrial R&D.
Soft Computing Approach to Pattern Classification and Object Recognition establishes an innovative, unified approach to supervised pattern classification and model-based occluded object recognition. The book also surveys various soft computing tools, including fuzzy relational calculus (FRC), genetic algorithms (GA) and the multilayer perceptron (MLP), to provide a strong foundation for the reader. The supervised approach to pattern classification and the model-based approach to occluded object recognition are treated in one framework, one based on either a conventional interpretation or a new interpretation of multidimensional fuzzy implication (MFI) and a novel notion of the fuzzy pattern vector (FPV). By combining practice and theory, a completely independent design methodology was developed in conjunction with this supervised approach on a unified framework, and then tested thoroughly against both synthetic and real-life data. In the field of soft computing, such an application-oriented design study is unique in nature. The monograph essentially mimics the cognitive process of human decision making, and carries a message of perceptual integrity in representational diversity. Soft Computing Approach to Pattern Classification and Object Recognition is intended for researchers in the area of pattern classification and computer vision. Other academics and practitioners will also find the book valuable.
This book constitutes the refereed proceedings of the 10th IFIP TC 12 International Conference on Intelligent Information Processing, IIP 2018, held in Nanning, China, in October 2018. The 37 full papers and 8 short papers presented were carefully reviewed and selected from 80 submissions. They are organized in topical sections on machine learning, deep learning, multi-agent systems, neural computing and swarm intelligence, natural language processing, recommendation systems, social computing, business intelligence and security, pattern recognition, and image understanding.
Security and privacy are paramount concerns in information processing systems, which are vital to business, government and military operations and, indeed, society itself. Meanwhile, the expansion of the Internet and its convergence with telecommunication networks are providing incredible connectivity, myriad applications and, of course, new threats. Data and Applications Security XVII: Status and Prospects describes original research results, practical experiences and innovative ideas, all focused on maintaining security and privacy in information processing systems and applications that pervade cyberspace. This book is the seventeenth volume in the series produced by the International Federation for Information Processing (IFIP) Working Group 11.3 on Data and Applications Security. It presents a selection of twenty-six updated and edited papers from the Seventeenth Annual IFIP TC11/WG11.3 Working Conference on Data and Applications Security, held at Estes Park, Colorado, USA in August 2003, together with a report on the conference keynote speech and a summary of the conference panel. The contents demonstrate the richness and vitality of the discipline, and point to directions for future research in data and applications security. Data and Applications Security XVII: Status and Prospects is an invaluable resource for information assurance researchers, faculty members and graduate students, as well as for individuals engaged in research and development in the information technology sector.
The Distinguished Dissertation Series is published on behalf of the Conference of Professors and Heads of Computing and the British Computer Society, who annually select the best British PhD dissertations in computer science for publication. The dissertations are selected on behalf of the CPHC by a panel of eight academics. Each dissertation chosen makes a noteworthy contribution to the subject and reaches a high standard of exposition, placing all results clearly in the context of computer science as a whole. In this way computer scientists with significantly different interests are able to grasp the essentials - or even find a means of entry - to an unfamiliar research topic. This book investigates how information contained in multiple, overlapping images of a scene may be combined to produce images of superior quality. This offers possibilities such as noise reduction, extended field of view, blur removal, increased spatial resolution and improved dynamic range. Potential applications cover fields as diverse as forensic video restoration, remote sensing, video compression and digital video editing. The book covers two aspects that have attracted particular attention in recent years: image mosaicing, whereby multiple images are aligned to produce a large composite; and super-resolution, which permits restoration at an increased resolution of poor-quality video sequences by modelling and removing imaging degradations including noise, blur and spatial sampling. It contains comprehensive coverage and analysis of existing techniques, and describes in detail novel, powerful and automatic algorithms (based on a robust, statistical framework) for applying mosaicing and super-resolution. The algorithms may be implemented directly from the descriptions given here. A particular feature of the techniques is that it is not necessary to know the camera parameters (such as position and focal length) in order to apply them.
Throughout the book, examples are given on real image sequences, covering a variety of applications including: the separation of latent marks in forensic images; the automatic creation of 360° panoramic mosaics; and super-resolution restoration of various scenes, text, and faces in low-quality video.
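The noise-reduction benefit of combining overlapping images can be sketched in a few lines. This toy example (my own, not the book's robust statistical algorithms) averages several pre-aligned noisy frames pixel-wise; in a real pipeline the frames would first be registered by mosaicing before any restoration step:

```python
import random

# Toy sketch of multi-frame noise reduction: the pixel-wise mean of N
# aligned noisy frames is a less noisy estimate of the true image
# (noise standard deviation shrinks by a factor of sqrt(N)).

def average_frames(frames):
    """Pixel-wise mean of equally sized frames (flat lists of floats)."""
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

random.seed(0)
truth = [10.0, 50.0, 200.0]                       # hypothetical pixel values
frames = [[p + random.gauss(0, 5) for p in truth] for _ in range(16)]

restored = average_frames(frames)
print([round(p, 1) for p in restored])            # close to `truth`
```

Super-resolution generalises this idea: instead of a plain mean, it inverts a model of blur, sampling and noise across the aligned frames.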
Signal processing applications have burgeoned in the past decade. During the same time, signal processing techniques have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This trend will continue as many new signal processing applications are opening up in consumer products and communications systems.
Image segmentation consists of dividing an image domain into disjoint regions according to a characterization of the image within or in-between the regions. In other words, segmenting an image means dividing its domain into relevant components. The efficient solution of the key problems in image segmentation promises to enable a rich array of useful applications. The current major application areas include robotics, medical image analysis, remote sensing, scene understanding, and image database retrieval. The subject of this book is image segmentation by variational methods, with a focus on formulations which use closed regular plane curves to define the segmentation regions and on a level set implementation of the corresponding active curve evolution algorithms. Each method is developed from an objective functional which embeds constraints on both the image domain partition of the segmentation and the image data within or in-between the partition regions. The necessary conditions to optimize the objective functional are then derived and solved numerically. The book covers, within the active curve and level set formalism, the basic two-region segmentation methods, multiregion extensions, region merging, image modeling, and motion-based segmentation. To treat various important classes of images, modeling investigates several parametric distributions such as the Gaussian, Gamma, Weibull, and Wishart. It also investigates non-parametric models. In motion segmentation, both optical flow and the movement of real three-dimensional objects are studied.
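The data term of the basic two-region formulation can be illustrated with a piecewise-constant sketch (my simplification, in the spirit of Chan-Vese-style active curve models, without the curve-length regulariser or level set machinery): assign each pixel to the region whose mean intensity it is closest to, then re-estimate the region means, and iterate.

```python
# Hypothetical sketch of the two-region, piecewise-constant idea behind
# variational segmentation: alternate between assigning pixels to the
# nearer of two region means and re-estimating those means. This is the
# data term only; real active-curve methods add a length penalty and
# evolve a level set function instead of labelling pixels directly.

def two_region_segment(pixels, iters=10):
    c1, c2 = min(pixels), max(pixels)            # initial region means
    for _ in range(iters):
        r1 = [p for p in pixels if abs(p - c1) <= abs(p - c2)]
        r2 = [p for p in pixels if abs(p - c1) > abs(p - c2)]
        if r1:
            c1 = sum(r1) / len(r1)               # mean of region 1
        if r2:
            c2 = sum(r2) / len(r2)               # mean of region 2
    return [0 if abs(p - c1) <= abs(p - c2) else 1 for p in pixels]

img = [10, 12, 11, 200, 198, 9, 205]             # toy intensity values
print(two_region_segment(img))                   # -> [0, 0, 0, 1, 1, 0, 1]
```

Each iteration decreases the within-region squared error, which is exactly what the objective functional's data term measures for a piecewise-constant image model.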
Basics of Game Design is for anyone wanting to become a professional game designer. Focusing on creating the game mechanics for data-driven games, it covers role-playing, real-time strategy, first-person shooter, simulation, and other games. Written by a 25-year veteran of the game industry, the guide offers detailed explanations of how to design the data sets used to resolve game play for moving, combat, solving puzzles, interacting with NPCs, managing inventory, and much more. Advice on developing stories for games, building maps and levels, and designing the graphical user interface is also included.
Quick sketching is the best technique you can use to stay finely tuned and to keep those creative juices flowing. To keep the sense of observation heightened and hand-eye coordination sharp, an animator needs to draw and sketch constantly. Quick Sketching with Ron Husband offers instruction in quick sketching and all its techniques. From observing positive and negative space and learning to recognize simple shapes in complex forms to action analysis and using the line of action, this Disney legend teaches you how to sketch using all these components, and how to do it in a matter of seconds. On top of instruction and advice, you'll also see Ron's portfolio of select art representing his growth as an artist throughout the years. Watch his drawings as he grows from a young, talented artist into a true Disney animator. Follow him as he goes around the world and sketches flamenco dancers, football players, bakers, joggers, lions, tigers, anyone, and anything. As if instruction and inspiration in one place weren't enough, you'll find a sketchbook included, so you can flip from Ron's techniques and work on perfecting basic shapes, or take your book on the road, read Ron's advice, sketch away, and capture the world around you.
Multimedia Signals and Systems is an essential text for professional and academic researchers and students in the field of multimedia.
This accessible and exhaustive book will help to improve the modeling of attention and to inspire innovations in industry. It introduces the study of attention and focuses on attention modeling, addressing such themes as saliency models, signal detection and different types of signals, as well as real-life applications. The book is truly multi-disciplinary, collating work from psychology, neuroscience, engineering and computer science, amongst other disciplines. What is attention? We all pay attention every single moment of our lives. Attention is how the brain selects and prioritizes information. The study of attention has become incredibly complex and divided: this timely volume assists the reader by drawing together work on the computational aspects of attention from across the disciplines. Those working in the field as engineers will benefit from this book's introduction to the psychological and biological approaches to attention, and neuroscientists can learn about engineering work on attention. The work features practical reviews and chapters that are quick and easy to read, as well as chapters which present deeper, more complex knowledge. Everyone whose work relates to human perception, or to image, audio and video processing, will find something of value in this book, from students to researchers and those in industry.
The area of adaptive systems, which encompasses recursive identification, adaptive control, filtering, and signal processing, has been one of the most active areas of the past decade. Since adaptive controllers are fundamentally nonlinear controllers which are applied to nominally linear, possibly stochastic and time-varying systems, their theoretical analysis is usually very difficult. Nevertheless, over the past decade much fundamental progress has been made on some key questions concerning their stability, convergence, performance, and robustness. Moreover, adaptive controllers have been successfully employed in numerous practical applications, and have even entered the marketplace.
In Computer Graphics, the use of intelligent techniques started more recently than in other research areas. However, during the last two decades, the use of intelligent Computer Graphics techniques has grown year after year, and more and more interesting techniques are presented in this area. The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community that grows year after year. This volume is a kind of continuation of the previously published Springer volumes "Artificial Intelligence Techniques for Computer Graphics" (2008), "Intelligent Computer Graphics 2009" (2009), "Intelligent Computer Graphics 2010" (2010) and "Intelligent Computer Graphics 2011" (2011). Usually, this kind of volume contains selected extended papers from the corresponding 3IA Conference of the year. However, the current volume is made from directly reviewed and selected papers, submitted for publication in the volume "Intelligent Computer Graphics 2012". This year's papers are particularly exciting and concern areas like plant modelling, text-to-scene systems, information visualization, computer-aided geometric design, artificial life, computer games, realistic rendering and many other very important themes.
This book provides detailed practical guidelines on how to develop an efficient pathological brain detection system, reflecting the latest advances in the computer-aided diagnosis of structural magnetic resonance brain images. Matlab codes are provided for most of the functions described. In addition, the book equips readers to easily develop the pathological brain detection system further on their own and apply the technologies to other research fields, such as Alzheimer's detection, multiple sclerosis detection, etc.
A smart camera is an integrated machine vision system which, in addition to image capture circuitry, includes a processor that can extract information from images without the need for an external processing unit, as well as interface devices that make results available to other devices. This book provides content on smart cameras for an interdisciplinary audience of professionals and students in embedded systems, image processing, and camera technology. It serves as a self-contained, single-source reference for material otherwise found only in sources such as conference proceedings, journal articles, or product data sheets. Coverage includes the 50-year chronology of smart cameras, their technical evolution, the state of the art, and numerous applications, such as surveillance and monitoring, robotics, and transportation.
With a preface by Ton Kalker. Informed Watermarking is an essential tool for both academic and professional researchers working in the areas of multimedia security, information embedding, and communication. Theory and practice are linked, particularly in the area of multi-user communication. From the Preface: Watermarking has become a more mature discipline with proper foundations in both signal processing and information theory. We can truly say that we are in the era of "second generation" watermarking. This book is the first to address watermarking problems in terms of second-generation insights. It provides a complete overview of the most important results on capacity and security. The Costa scheme, and in particular a simpler version of it, the Scalar Costa scheme, is studied in great detail. An important result of this book is that it is possible to approach the Shannon limit within a few decibels in a practical system. These results are verified on real-world data, not only the classical category of images, but also chemical structure sets. Inspired by the work of Moulin and O'Sullivan, this book also addresses security aspects by studying AWGN attacks in terms of game theory. "The authors of Informed Watermarking give a well-written exposé of how watermarking came of age, where we are now, and what to expect in the future. It is my expectation that this book will be a standard reference on second-generation watermarking for the years to come." Ton Kalker, Technische Universiteit Eindhoven
Video Object Extraction and Representation: Theory and Applications is an essential reference for electrical engineers working in video; computer scientists researching or building multimedia databases; video system designers; students of video processing; video technicians; and designers working in the graphic arts. In the coming years, the explosion of computer technology will enable a new form of digital media. Along with broadband Internet access and MPEG standards, this new media requires a computational infrastructure to allow users to grab and manipulate content. The book reviews relevant technologies and standards for content-based processing and their interrelations. Within this overview, the book focuses upon two problems at the heart of the algorithmic/computational infrastructure: video object extraction, or how to automatically package raw visual information by content; and video object representation, or how to automatically index and catalogue extracted content for browsing and retrieval. The book analyzes the designs of two novel, working systems for content-based extraction and representation in support of the MPEG-4 and MPEG-7 video standards, respectively. Features of the book include: Overview of MPEG standards; A working system for automatic video object segmentation; A working system for video object query by shape; Novel technology for a wide range of recognition problems; Overview of neural network and vision technologies. Video Object Extraction and Representation: Theory and Applications will be of interest to research scientists and practitioners working in fields related to the topic. It may also be used as an advanced-level graduate text.
The book describes a system for visual surveillance using intelligent cameras. The cameras use robust techniques for detecting and tracking moving objects, and the objects captured in real time are stored in a database. The tracking data stored in the database is analysed to model the camera views, detect and track objects, and study object behaviour. This set of models provides a robust framework for coordinating the tracking of objects between overlapping and non-overlapping cameras, and for recording the activity of objects detected by the system.
ICMCCA 2012 is the first International Conference on Multimedia Processing, Communication and Computing Applications, and the theme of the conference is 'Multimedia Processing and its Applications'. Multimedia processing has been an active research area contributing to many frontiers of today's science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology. The prime objective of the book is to familiarize readers with the latest scientific developments taking place in various fields of multimedia processing, which is widely used in many disciplines such as Medical Diagnosis, Digital Forensics, Object Recognition, Image and Video Analysis, Robotics, the Military, the Automotive Industry, Surveillance and Security, Quality Inspection, etc. The book will assist the research community in gaining insight into the overlapping work being carried out across the globe at many medical hospitals and institutions, defense labs, forensic labs, academic institutions, IT companies and security & surveillance domains. It also discusses the latest state-of-the-art research problems and techniques, and helps to encourage, motivate and introduce budding researchers to the larger domain of multimedia.
The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community growing year after year. This volume is a kind of continuation of the previously published Springer volume "Artificial Intelligence Techniques for Computer Graphics". Nowadays, intelligent techniques are used more and more in Computer Graphics, not only to optimise the processing time, but also to find more accurate solutions for a lot of Computer Graphics problems than with traditional methods. What are intelligent techniques for Computer Graphics? Mainly, they are techniques based on Artificial Intelligence. So, problem resolution (especially constraint satisfaction) techniques, as well as evolutionary techniques, are used in declarative scene modelling; heuristic search techniques, as well as strategy games techniques, are currently used in scene understanding and in virtual world exploration; multi-agent techniques and evolutionary algorithms are used in behavioural animation; and so on. However, even if in most cases the intelligent techniques used come from Artificial Intelligence, sometimes simple human intelligence can find interesting solutions in cases where traditional Computer Graphics techniques, even combined with Artificial Intelligence ones, cannot propose any satisfactory solution. A good example of such a case is scene understanding, in the case where several parts of the scene are impossible to access.