Delivering MPEG-4 Based Audio-Visual Services investigates the different aspects of end-to-end multimedia services: content creation, server and service provider, network, and the end-user terminal. Part I provides a comprehensive introduction to digital video communications, MPEG standards, and technologies, and deals with system-level issues including standardization and interoperability, user interaction, and the design of a distributed video server. Part II investigates these systems in the context of object-based multimedia services and presents a design for an object-based audio-visual terminal, several features of which have been adopted by the MPEG-4 Systems specification. The book goes on to study the requirements for a file format to represent object-based audio-visual content and the design of one such format. The design introduces new concepts, such as direct streaming, that are essential for scalable servers. The final part of the book examines the delivery of object-based multimedia presentations and gives optimal algorithms for multiplex-scheduling of object-based audio-visual presentations, showing that the audio-visual object scheduling problem is NP-complete in the strong sense. The problem of scheduling audio-visual objects is similar to the problem of sequencing jobs on a single machine. The book compares these problems, adapts job-sequencing results to audio-visual object scheduling, and provides optimal algorithms for scheduling presentations under resource constraints such as bandwidth (network constraints) and buffer (terminal constraints). In addition, the book presents algorithms that minimize the resources required for scheduling presentations and the auxiliary capacity required to support interactivity in object-based audio-visual presentations. Delivering MPEG-4 Based Audio-Visual Services is essential reading for researchers and practitioners in the areas of multimedia systems engineering and multimedia computing, network professionals, service providers, and all scientists and technical managers interested in the most up-to-date MPEG standards and technologies.
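As a rough illustration of the kind of scheduling question raised in this blurb (not an algorithm taken from the book), the sketch below checks whether a set of audio-visual objects, each with a size and a presentation deadline, can be streamed in earliest-deadline-first order over a single channel of fixed bandwidth; the object names, sizes and bandwidth figure are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AVObject:
    name: str
    size_bits: int      # bits that must be transmitted
    deadline_s: float   # object must be fully delivered by this time

def edf_feasible(objects, bandwidth_bps):
    """Send objects back to back in earliest-deadline-first order and report
    whether every object finishes transmission before its deadline."""
    t = 0.0
    schedule = []
    for obj in sorted(objects, key=lambda o: o.deadline_s):
        t += obj.size_bits / bandwidth_bps     # transmission time on the channel
        schedule.append((obj.name, round(t, 3)))
        if t > obj.deadline_s:
            return False, schedule
    return True, schedule

objs = [AVObject("audio", 64_000, 0.5),
        AVObject("video_i_frame", 300_000, 1.0),
        AVObject("logo", 40_000, 2.0)]
print(edf_feasible(objs, bandwidth_bps=500_000))   # (True, [...]) for this toy case
```

Earliest-deadline-first is a classical optimal policy for this simple single-channel feasibility question, which is what makes the analogy with single-machine job sequencing natural; the book's contribution lies in the harder variants with buffer constraints and interactivity.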
Applications of Fractals and Chaos presents new developments in this rapidly developing subject area. The presentation is more than merely theoretical: it presents particular applications across a wide range of fields. Under the oceans, we consider the ways in which sponges and corals grow; we look, too, at the stability of ships on the ocean surface. Land itself is modelled, and applications to art, medicine and camouflage are presented. Readers should find general interest in the range of areas considered and should also be able to discover methods of value for their own specific areas of interest from studying the structure of related activities.
Since the mid-1990s, data hiding has been proposed as an enabling technology for securing multimedia communication and is now used in various applications including broadcast monitoring, movie fingerprinting, steganography, video indexing and retrieval, and image authentication. Data hiding and cryptographic techniques are often combined to complement each other, thus triggering the development of a new research field of multimedia security. In addition, two related disciplines, steganalysis and data forensics, are increasingly attracting researchers and becoming another new research field of multimedia security. This journal, LNCS Transactions on Data Hiding and Multimedia Security, aims to be a forum for all researchers in these emerging fields, publishing both original and archival research results. The seven papers included in this special issue were carefully reviewed and selected from 21 submissions. They address the challenges faced by the emerging area of visual cryptography and provide the reader with an overview of the state of the art in this field of research.
This book focuses on interactive segmentation techniques, which have been extensively studied in recent decades. Interactive segmentation emphasizes clear extraction of objects of interest, whose locations are roughly indicated by human interactions based on high-level perception. The book first introduces classic graph-cut segmentation algorithms and then discusses state-of-the-art techniques, including graph matching methods, region merging and label propagation, clustering methods, and segmentation methods based on edge detection. A comparative analysis of these methods is provided, with quantitative and qualitative performance evaluation illustrated using natural and synthetic images, together with extensive statistical performance comparisons. The pros and cons of these interactive segmentation methods are pointed out, and their applications are discussed. There have been only a few surveys on interactive segmentation techniques, and those surveys do not cover recent state-of-the-art techniques. By providing a comprehensive, up-to-date survey of this fast-developing topic along with performance evaluation, this book helps readers learn interactive segmentation techniques quickly and thoroughly.
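For readers unfamiliar with the graph-cut formulation mentioned above, here is a minimal, hypothetical sketch (not code from the book): a tiny one-dimensional "image" is segmented by building a source/sink graph from user seeds and solving a minimum s-t cut with networkx; the intensities, seed positions and smoothness weight are invented for illustration.

```python
import networkx as nx

# Tiny 1-D "image": pixel intensities, plus user-marked seeds.
pixels = [10, 12, 11, 90, 95, 88]
fg_seeds, bg_seeds = {4}, {0}        # user marks pixel 4 as object, pixel 0 as background

fg_mean = sum(pixels[i] for i in fg_seeds) / len(fg_seeds)
bg_mean = sum(pixels[i] for i in bg_seeds) / len(bg_seeds)

G = nx.DiGraph()
LAM = 2.0                            # smoothness weight between neighbouring pixels
for i, v in enumerate(pixels):
    # S->i is cut if i ends up background: charge how un-background-like i is.
    G.add_edge("S", i, capacity=1e9 if i in fg_seeds else abs(v - bg_mean))
    # i->T is cut if i ends up foreground: charge how un-foreground-like i is.
    G.add_edge(i, "T", capacity=1e9 if i in bg_seeds else abs(v - fg_mean))
for i in range(len(pixels) - 1):
    # Neighbour links: cheap to cut across strong intensity edges, expensive in flat regions.
    w = LAM / (1.0 + abs(pixels[i] - pixels[i + 1]))
    G.add_edge(i, i + 1, capacity=w)
    G.add_edge(i + 1, i, capacity=w)

cut_value, (src_side, sink_side) = nx.minimum_cut(G, "S", "T")
foreground = sorted(p for p in src_side if p != "S")
print("foreground pixels:", foreground)   # expected: the bright pixels 3, 4, 5
```

Real interactive systems apply the same construction to 2-D pixel grids, with colour models built from the seeds and max-flow solvers tuned for grid graphs.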
Image and Video Compression Standards: Algorithms and Architectures presents an introduction to the algorithms and architectures that underpin the image and video compression standards, including JPEG (compression of still images), H.261 (video teleconferencing), MPEG-1 and MPEG-2 (video storage and broadcasting). In addition, the book covers the MPEG and Dolby AC-3 audio encoding standards, as well as emerging techniques for image and video compression, such as those based on wavelets and vector quantization. The book emphasizes the foundations of these standards, i.e. techniques such as predictive coding, transform-based coding, motion compensation, and entropy coding, as well as how they are applied in the standards. How each standard is implemented is not dealt with, but the book does provide all the material necessary to understand the workings of each of the compression standards, including information that can be used to evaluate the efficiency of various software and hardware implementations conforming to the standards. Particular emphasis is placed on those algorithms and architectures that have been found to be useful in practical software or hardware implementations. Audience: A valuable reference for the graduate student, researcher or engineer. May also be used as a text for a course on the subject.
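To make the transform-coding idea behind JPEG and the MPEG video standards concrete, here is a small NumPy sketch (illustrative only, not a conforming codec): an 8x8 block is level-shifted, transformed with the orthonormal 2-D DCT, and uniformly quantized; the single quantization step `q_step` stands in for the standards' quantization tables.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, the transform used on 8x8 blocks in JPEG."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def encode_block(block, q_step=16):
    """Transform an 8x8 pixel block and quantize the coefficients (the lossy step)."""
    C = dct_matrix(8)
    coeffs = C @ (block - 128.0) @ C.T       # 2-D DCT of the level-shifted block
    return np.round(coeffs / q_step).astype(int)

def decode_block(q_coeffs, q_step=16):
    """Dequantize and apply the inverse 2-D DCT."""
    C = dct_matrix(8)
    return C.T @ (q_coeffs * q_step) @ C + 128.0

block = np.tile(np.linspace(0, 255, 8), (8, 1))   # smooth gradient: compresses well
q = encode_block(block)
print("non-zero coefficients:", np.count_nonzero(q), "of 64")
print("max reconstruction error:", np.abs(decode_block(q) - block).max())
```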
The development of a methodology for using logic databases is essential if new users are to be able to use these systems effectively to solve their problems, and this remains a largely unrealized goal. A workshop was organized in conjunction with the ILPS '93 Conference in Vancouver in October 1993 to provide a forum for users and implementors of deductive systems to share their experience. The emphasis was on the use of deductive systems. In addition to paper presentations, a number of systems were demonstrated. The papers in this book were drawn largely from those presented at the workshop, which have been extended and revised for inclusion here, and also include some papers describing interesting applications that were not discussed at the workshop. The applications described here should be seen as a starting point: a number of promising application domains are identified, and several interesting application packages are described, which provide the inspiration for further development. Declarative rule-based database systems hold a lot of promise in a wide range of application domains, and we need a continued stream of application development to better understand this potential and how to use it effectively. This book contains the broadest collection to date of papers describing implemented, significant applications of logic databases, and will interest researchers in logic programming and database systems, as well as potential database users in areas such as scientific data management and complex decision support.
This book constitutes the refereed proceedings of the International Joint Conference VISIGRAPP 2011, comprising the conferences on Computer Vision Theory and Applications (VISAPP), Imaging Theory and Applications (IMAGAPP), Computer Graphics Theory and Applications (GRAPP), and Information Visualization Theory and Applications (IVAPP), held in Vilamoura, Portugal, in March 2011. The 15 revised full papers presented together with one invited paper were carefully reviewed and selected. The papers are organized in topical sections on computer graphics theory and applications; imaging theory and applications; information visualization theory and applications; and computer vision theory and applications.
INTRODUCTION TO COMPUTER-AIDED DESIGN OF USER INTERFACES. Jean Vanderdonckt (Institut d'Administration et de Gestion, Universite catholique de Louvain, Place des Doyens 1, B-1348 Louvain-la-Neuve, Belgium) and Angel Puerta (Knowledge Systems Laboratory, Stanford University, Stanford, CA 94305-5479, USA, and RedWhale Corp., 277 Town & Country Village, Palo Alto, CA 94303, USA; Web: http://www.arpuerta.com, http://www.redwhale.com). Computer-Aided Design of User Interfaces (CADUI) here refers to the area of Human-Computer Interaction (HCI) intended to provide software support for any activity involved in the development life cycle of an interactive application. Such activities include task analysis, contextual inquiry [1], requirements definition, user-centred design, application modelling, conceptual design, prototyping, programming, installation, test, evaluation, and maintenance. Although only recently addressed (e.g., [3]), the activity of re-designing an existing user interface (UI) for an interactive application and the activity of re-engineering a UI to rebuild its underlying models are also considered part of CADUI. A fundamental aim of CADUI is not only to provide software support for the above activities, but also to incorporate strong and solid methodological aspects into the development, thus fostering abstraction and reflection and leaving ad hoc development aside [5,7]. Incorporating such methodological aspects inevitably covers three related, sometimes intertwined, facets: models, method and tools.
This book introduces the statistical software R to the image processing community in an intuitive and practical manner. R brings interesting statistical and graphical tools which are important and necessary for image processing techniques. Furthermore, it has been proved in the literature that R is among the most reliable, accurate and portable statistical software available. Both the theory and practice of R code concepts and techniques are presented and explained, and the reader is encouraged to try their own implementation to develop faster, optimized programs. Those who are new to the field of image processing and to R software will find this work a useful introduction. By reading the book alongside an active R session, the reader will experience an exciting journey of learning and programming.
This work focuses on central catadioptric systems, from the early step of calibration to high-level tasks such as 3D information retrieval. The book opens with a thorough introduction to the sphere camera model, along with an analysis of the relation between this model and actual central catadioptric systems. Then, a new approach to calibrate any single-viewpoint catadioptric camera is described. This is followed by an analysis of existing methods for calibrating central omnivision systems, and a detailed examination of hybrid two-view relations that combine images acquired with uncalibrated central catadioptric systems and conventional cameras. In the remaining chapters, the book discusses a new method to compute the scale space of any omnidirectional image acquired with a central catadioptric system, and a technique for computing the orientation of a hand-held omnidirectional catadioptric camera.
Due to its inherent time-scale locality characteristics, the discrete wavelet transform (DWT) has received considerable attention in signal and image processing. Wavelet transforms have excellent energy compaction characteristics and can provide perfect reconstruction. The shifting (translation) and scaling (dilation) operations are unique to wavelets, and orthogonality of wavelets with respect to dilations leads to a multigrid representation. As the computation of the DWT involves filtering, an efficient filtering process is essential in DWT hardware implementation. In the multistage DWT, coefficients are calculated recursively, and in addition to the wavelet decomposition stage, extra space is required to store the intermediate coefficients. Hence, the overall performance depends significantly on the precision of the intermediate DWT coefficients. This work presents new DWT implementation techniques that are efficient in terms of computation and storage, and that yield a better signal-to-noise ratio in the reconstructed signal.
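As a minimal illustration of the DWT properties described above (perfect reconstruction and recursive multistage computation), the following sketch implements one level of the Haar wavelet transform in NumPy; it is a textbook example, not the hardware-oriented implementation this work develops.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (scaling) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (wavelet) coefficients
    return a, d

def haar_idwt(a, d):
    """Perfect reconstruction from one level of Haar coefficients."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 2.0, 0.0])
a, d = haar_dwt(signal)                       # a multistage DWT would recurse on `a`
print(np.allclose(haar_idwt(a, d), signal))   # True: perfect reconstruction
```

A multistage decomposition simply recurses on the approximation coefficients `a`, which is why the precision and storage of the intermediate coefficients matter so much in hardware designs.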
The flood of information through various computer networks such as the Internet characterizes the world situation in which we live. Information worlds, often called virtual spaces and cyberspaces, have been formed on computer networks. The complexity of information worlds has been increasing almost exponentially through the exponential growth of computer networks. Such nonlinearity in growth and in scope characterizes information worlds. In other words, the characterization of nonlinearity is the key to understanding, utilizing and living with the flood of information. One characterization approach is by characteristic points such as peaks, pits, and passes, according to Morse theory. Another approach is by singularity signs such as folds and cusps. Atoms and molecules are the other fundamental characterization approach. Topology and geometry, including differential topology, serve as the framework for the characterization. Topological Modeling for Visualization is a textbook for those interested in this characterization, to understand what it is and how to do it. Understanding is the key to utilizing information worlds and to living with the changes in the real world. Writing this textbook required careful preparation by the authors. There are complex mathematical concepts that require designing a writing style that facilitates understanding and appeals to the reader. To evolve a style, we set as a main goal of this book the establishment of a link between the theoretical aspects of modern geometry and topology, on the one hand, and experimental computer geometry, on the other.
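As a small illustration of the characterization by peaks, pits, and passes mentioned above (a standard sign-change test on a sampled height field, not a method taken from this textbook), the following sketch classifies the interior grid points of a toy terrain; the terrain function and grid resolution are invented for the example.

```python
import numpy as np

def classify_points(z):
    """Classify interior grid points of a height field as peaks, pits, or passes.

    Classic sign-change test: walk the 8 neighbours in a ring and count sign
    changes of (neighbour - centre); 0 changes means a peak or pit, 4 or more
    means a pass (saddle).
    """
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    labels = {}
    rows, cols = z.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            diffs = [z[i + di, j + dj] - z[i, j] for di, dj in ring]
            signs = [1 if d > 0 else -1 for d in diffs]
            changes = sum(signs[k] != signs[k - 1] for k in range(len(signs)))
            if changes == 0:
                labels[(i, j)] = "pit" if signs[0] > 0 else "peak"
            elif changes >= 4:
                labels[(i, j)] = "pass"
    return labels

# Toy terrain: two Gaussian bumps with a saddle between them.
x, y = np.meshgrid(np.linspace(-2, 2, 9), np.linspace(-1, 1, 7))
z = np.exp(-((x - 1) ** 2 + y ** 2)) + np.exp(-((x + 1) ** 2 + y ** 2))
print(classify_points(z))   # should report the two peaks and the pass between them
```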
This Special Edited Volume takes a unique computational approach to the emerging field of study called Vision Science. Optics, Ophthalmology and Optical Science have come a long way in optimizing the configurations of optical systems, surveillance cameras and other nano-optical devices with the help of nanoscience and technology, yet these systems still fall short on the computational side of matching the human visual system. In this edited volume, much attention is given to the coupling issues between Computational Science and Vision Studies. It is a comprehensive collection of research works addressing various related areas of Vision Science, such as visual perception and the visual system, cognitive psychology, neuroscience, psychophysics and ophthalmology, linguistic relativity, and color vision. The issue carries some of the latest developments in the form of research articles and presentations, and is rich in technical tools for convenient experimentation in Vision Science. There are 18 research papers of significance across an array of application areas. The volume is an effective compendium of computing developments such as frequent pattern mining, genetic algorithms, Gabor filters, support vector machines, region-based mask filters, 4D stereo camera systems, and principal component analysis. The detailed analysis of the papers can be of immense benefit to researchers in this domain, adding value to the existing stock of knowledge in Vision Science.
This book constitutes the refereed proceedings of the 7th International Workshop on Algorithms and Computation, WALCOM 2013, held in Kharagpur, India, in February 2013. The 29 full papers presented were carefully reviewed and selected from 86 submissions. The papers are organized in topical sections on computational geometry, approximation and randomized algorithms, parallel and distributed computing, graph algorithms, complexity and bounds, and graph drawing.
This book constitutes revised selected papers from the International Workshop on Clinical Image-Based Procedures, CLIP 2013, held in conjunction with MICCAI 2013 in Nagoya, Japan, in September 2013. The 19 papers presented in this volume were carefully reviewed and selected from 26 submissions. The workshop was a productive and exciting forum for the discussion and dissemination of clinically tested, state-of-the-art methods for image-based planning, monitoring and evaluation of medical procedures.
The two volumes LNCS 6553 and 6554 constitute the refereed post-proceedings of 7 workshops held in conjunction with the 11th European Conference on Computer Vision, held in Heraklion, Crete, Greece in September 2010. The 62 revised papers presented together with 2 invited talks were carefully reviewed and selected from numerous submissions. The first volume contains 26 revised papers and 2 invited talks selected from the following workshops: First International Workshop on Parts and Attributes; Third Workshop on Human Motion Understanding, Modeling, Capture and Animation; and International Workshop on Sign, Gesture and Activity (SGA 2010).
This volume contains the proceedings of the NATO Advanced Study Institute on "Pictorial Information Systems in Medicine" held August 27-September 7, 1984 in Hotel Maritim, Braunlage/Harz, Federal Republic of Germany. The program committee of the institute consisted of K.H. Hohne (Director), G.T. Herman, G.S. Lodwick, and D. Meyer-Ebrecht. The organization was in the hands of Klaus Assmann and Fritz Bocker. In the last decade medical imaging has undergone a rapid development. New imaging modalities such as Computed Tomography (CT), Digital Subtraction Angiography (DSA) and Magnetic Resonance Imaging (MRI) were developed using the capabilities of modern computers. In a modern hospital these technologies already produce more than 25% of image data in digital form. This format lends itself to the design of computer-assisted information systems integrating data acquisition, presentation, communication and archiving for all modalities and users within a department or even a hospital. Advantages such as rapid access to any archived image, synoptic presentation, and computer-assisted image analysis, to name only a few, are expected. The design of such pictorial information systems, however, often called PACS (Picture Archiving and Communication Systems) in the medical community, is a non-trivial task involving know-how from many disciplines such as medicine (especially radiology), database technology, computer graphics, man-machine interaction, and hardware technology, among others. Most of these disciplines are represented by disjoint scientific communities.
About four or five years ago one began to hear about the enormous interest being taken in on-line consoles and displays. Nothing much was done with them, but computer men felt that this was the way computing ought to go: one might dispense with cards, and overcome many of the problems of man-machine communication. It quickly appeared that, as with computers, there had been a great underestimation of the amount of work involved, of the difficulties of programming, and of the cost. So it began to emerge that graphics was not the ultimate answer, in spite of superb demonstrations where one might watch a square being converted into a cube and then rotated. But my mind goes back to 1951 and the first computers. There, there were demonstrations of arithmetic speed and storage facility; but not much idea of actual use. However, we now understand how to use computers, and in the last year or two, significant developments in the field of graphics have led to genuine applications, and economic benefits. The equipment is still expensive, but it is becoming cheaper, more uses are being found, and I believe that we are just at the stage when the subject is gaining momentum, to become, like computers, a field of immense importance.
Alias|Wavefront's Maya 3D animation software is an integrated collection of tools for creating computer generated images, used in nearly every blockbuster special effects film that has been released in the last few years. The first choice for digital content creators, Maya combines animation, dynamics, modelling and rendering tools, enabling you to create digital characters and visual effects for live action films or stand-alone animation.
Machine vision technology has revolutionised the process of automated inspection in manufacturing. The specialist techniques required for the inspection of natural products, such as food, leather, textiles and stone, are still a challenging area of research. Topological variations make image processing algorithm development, system integration and mechanical handling issues much more complex. The practical issues of making machine vision systems operate robustly in often hostile environments, together with the latest technological advancements, are reviewed in this volume. Features:
- Case studies based on real-world problems to demonstrate the practical application of machine vision systems.
- In-depth description of system components including image processing, illumination, real-time hardware, mechanical handling, sensing and on-line testing.
- Systems-level integration of constituent technologies for bespoke applications across a variety of industries.
- A diverse range of example applications that a system may be required to handle, from live fish to ceramic tiles.
Machine Vision for the Inspection of Natural Products will be a valuable resource for researchers developing innovative machine vision systems in collaboration with the food technology, textile and agriculture sectors. It will also appeal to practising engineers and managers in industries where the application of machine vision can enhance product safety and process efficiency.
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained. Parallel processing is at the center of research in Europe in the field of Information Processing Systems, so the CEC has funded the ESPRIT Supernode project to develop a low cost, high performance, multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as researchers and end-users of transputers by funding other projects in this field. This book presents course papers of the Eurocourse given at the Joint Research Centre in ISPRA (Italy) from 4 to 8 November 1991. First we present an overview of various trends in the design of parallel architectures, and especially of the T.Node with its software development environments, new distributed system aspects and also new hardware extensions based on the INMOS T9000 processor. In a second part, we review some real case applications in the fields of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation, and also enhanced parallel and distributed numerical methods on the T.Node.
Cellular Automata Transforms describes a new approach to using the dynamical system popularly known as cellular automata (CA) as a tool for conducting transforms on data. Cellular automata have generated a great deal of interest since 1970, when John Conway created the 'Game of Life'. This book takes a more serious look at CA by describing methods by which information building blocks, called basis functions (or bases), can be generated from the evolving states. These information blocks can then be used to construct any data. A typical dynamical system such as a CA tends to involve an infinite number of possible rules defining the inherent elements, neighborhood size, shape, number of states, modes of association, and so on. To be able to build these building blocks, an elegant method had to be developed to address a large subset of these rules. A new formula, which allows for the definition of a large subset of possible rules, is described in the book. The robustness of this formula allows searching of the CA rule space in order to develop applications for multimedia compression, data encryption and process modeling. Cellular Automata Transforms is divided into two parts. In Part I the fundamentals of cellular automata, including the history and traditional applications, are outlined, and the challenges faced in using CA to solve practical problems are described. The basic theory behind Cellular Automata Transforms (CAT) is developed in this part of the book: techniques by which the evolving states of a cellular automaton can be converted into information building blocks are taught, and the methods (including fast convolutions) by which forward and inverse transforms of any data can be achieved are presented. Part II contains a description of applications of CAT. Chapter 4 describes digital image compression, audio compression and synthetic audio generation, and three approaches for compressing video data. Chapter 5 contains both symmetric and public-key implementations of CAT encryption; possible methods of attack are also outlined. Chapter 6 looks at process modeling by solving differential and integral equations, with examples drawn from physics and fluid dynamics.
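For readers new to cellular automata, the following sketch (a standard elementary CA, not the book's CAT formula) evolves a one-dimensional, two-state, radius-1 automaton and collects the successive rows of states, the kind of raw material from which CAT-style basis functions would be built; the rule number, width and step count are arbitrary choices for the example.

```python
import numpy as np

def evolve(rule, width=16, steps=8):
    """Evolve a one-dimensional, two-state, radius-1 cellular automaton.

    `rule` is the Wolfram rule number (0-255); bit k of the rule gives the
    next state for the neighbourhood whose (left, centre, right) bits read k.
    """
    table = [(rule >> k) & 1 for k in range(8)]
    row = np.zeros(width, dtype=int)
    row[width // 2] = 1                      # single live cell in the middle
    history = [row.copy()]
    for _ in range(steps - 1):
        left, right = np.roll(row, 1), np.roll(row, -1)   # periodic boundary
        idx = 4 * left + 2 * row + right
        row = np.array([table[k] for k in idx])
        history.append(row.copy())
    return np.array(history)   # successive states: raw material for CAT-style bases

for r in evolve(90):           # rule 90: each cell becomes the XOR of its two neighbours
    print("".join(".#"[c] for c in r))
```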
Advances in electronics, communications, and the fast growth of the Internet have made the use of a wide variety of computing devices an everyday occurrence. These computing devices have different interaction styles, input/output techniques, modalities, characteristics, and contexts of use. Furthermore, users expect to access their data and run the same application from any of these devices. Two of the problems we encountered in our own work [2] in building UIs for different platforms were the different layout features and screen sizes associated with each platform and device. Dan Olsen [13], Peter Johnson [9], and Stephen Brewster, et al. [4] all talk about problems in interaction due to the diversity of interactive platforms, devices, network services and applications. They also talk about the problems associated with the small screen size of hand-held devices. In comparison to desktop computers, hand-held devices will always suffer from a lack of screen real estate, so new metaphors of interaction have to be devised for such devices. It is difficult to develop a multi-platform user interface (UI) without duplicating development effort. Developers now face the daunting task of building UIs that must work across multiple devices. There have been some approaches towards solving this problem of multi-platform UI development, including XWeb [14]. Building "plastic interfaces" [5,20] is one such method, in which the UIs are designed to "withstand variations of context of use while preserving usability".
With the ubiquity of new information technology and media, more effective and friendly methods for human-computer interaction (HCI) are being developed which do not rely on traditional devices such as keyboards, mice and displays. The first step for any intelligent HCI system is face detection, and one of the friendliest HCI modalities is the hand gesture. Face Detection and Gesture Recognition for Human-Computer Interaction introduces the frontiers of vision-based interfaces for intelligent human-computer interaction, with a focus on two main issues: face detection and gesture recognition. The first part of the book reviews and discusses existing face detection methods, followed by a discussion of future research; performance evaluation issues for the face detection methods are also addressed. The second part discusses an interesting hand gesture recognition method based on a generic motion segmentation algorithm. The system has been tested with gestures from American Sign Language with promising results. The book concludes with comments on future work in face detection and hand gesture recognition. Face Detection and Gesture Recognition for Human-Computer Interaction will interest those working on vision-based interfaces for intelligent human-computer interaction. It also contains a comprehensive survey of existing face detection methods, which will serve as the entry point for new researchers embarking on such topics. Furthermore, this book covers in-depth discussion of motion segmentation algorithms and applications, which will benefit more seasoned graduate students and researchers interested in motion pattern recognition.
Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene, by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition away from understanding single images to analyzing image sequences, or video understanding. Video understanding deals with understanding of video sequences, e.g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic-looking images and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting is derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis.