Software Visualization: From Theory to Practice was initially selected as a special volume for "The Annals of Software Engineering (ANSE) Journal," which has been discontinued. This special edited volume is the first to discuss software visualization from the perspective of software engineering. It is a collection of 14 chapters on software visualization, covering topics from theory to practical systems. The chapters are divided into four parts: Visual Formalisms, Human Factors, Architectural Visualization, and Visualization in Practice. Together they cover a comprehensive range of software visualization topics.
Software Visualization: From Theory to Practice is designed to meet the needs of both an academic and a professional audience composed of researchers and software developers. This book is also suitable for senior undergraduate and graduate students in software engineering and computer science, as a secondary text or a reference.
Sampling, wavelets, and tomography are three active areas of contemporary mathematics sharing common roots that lie at the heart of harmonic and Fourier analysis. The advent of new techniques in mathematical analysis has strengthened their interdependence and led to some new and interesting results in the field. This state-of-the-art book not only presents new results in these research areas, but it also demonstrates the role of sampling in both wavelet theory and tomography. Specific topics covered include:
* Robustness of Regular Sampling in Sobolev Algebras
* Irregular and Semi-Irregular Weyl-Heisenberg Frames
* Adaptive Irregular Sampling in Meshfree Flow Simulation
* Sampling Theorems for Non-Bandlimited Signals
* Polynomial Matrix Factorization, Multidimensional Filter Banks, and Wavelets
* Generalized Frame Multiresolution Analysis of Abstract Hilbert Spaces
* Sampling Theory and Parallel-Beam Tomography
* Thin-Plate Spline Interpolation in Medical Imaging
* Filtered Back-Projection Algorithms for Spiral Cone Computed Tomography
Aimed at mathematicians, scientists, and engineers working in signal and image processing and medical imaging, the work is designed to be accessible to an audience with diverse mathematical backgrounds. Although the volume reflects the contributions of renowned mathematicians and engineers, each chapter has an expository introduction written for the non-specialist. One of the key features of the book is an introductory chapter stressing the interdependence of the three main areas covered. A comprehensive index completes the work. Contributors: J.J. Benedetto, N.K. Bose, P.G. Casazza, Y.C. Eldar, H.G. Feichtinger, A. Faridani, A. Iske, S. Jaffard, A. Katsevich, S. Lertrattanapanich, G. Lauritsch, B. Mair, M. Papadakis, P.P. Vaidyanathan, T. Werther, D.C. Wilson, A.I. Zayed
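For orientation, the classical (Shannon) sampling theorem that the sampling chapters generalize states that a signal bandlimited to W hertz is determined by its uniform samples taken at rate 2W (a standard result quoted here for context, not a statement from the volume):

```
f(t) = \sum_{n=-\infty}^{\infty} f\!\left(\frac{n}{2W}\right)
       \operatorname{sinc}(2Wt - n),
\qquad
\operatorname{sinc}(x) = \frac{\sin \pi x}{\pi x},
\qquad
\hat{f}(\omega) = 0 \ \text{ for } |\omega| > 2\pi W.
```

The irregular-sampling and non-bandlimited chapters listed above relax, respectively, the uniform-grid and the bandlimitedness hypotheses of this statement.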
This book constitutes the refereed proceedings of the First International Symposium on Communicability, Computer Graphics and Innovative Design for Interactive Systems, held in Cordoba, Spain, in June 2011. The 13 revised full papers presented were carefully reviewed and selected from various submissions. They examine the latest breakthroughs and future trends in the communicability, computer graphics, and innovative design of interactive systems.
"Advances in computer technology and developments such as the Internet provide a constant momentum to design new techniques and algorithms to support computer graphics. Modelling, animation and rendering remain principal topics in the field of computer graphics and continue to attract researchers around the world." This volume contains the papers presented at Computer Graphics International 2002, held in July 2002 at the University of Bradford, UK. These papers represent original research in computer graphics from around the world and cover areas such as:
- Real-time computer animation
- Image-based rendering
- Non-photorealistic rendering
- Virtual reality
- Avatars
- Geometric and solid modelling
- Computational geometry
- Physically based modelling
- Graphics hardware architecture
- Data visualisation
- Data compression
The focus is on the commercial application and industrial use of computer graphics and digital media systems.
This book constitutes the refereed proceedings of the 4th International Conference on Progress in Cultural Heritage Preservation, EuroMed 2012, held in Lemesos, Cyprus, in October/November 2012.
From the reviews: "This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry. ... The book is well organized and lucidly written; a timely contribution by two founders of the field. It clearly demonstrates that computational geometry in the plane is now a fairly well-understood branch of computer science and mathematics. It also points the way to the solution of the more challenging problems in dimensions higher than two." (Mathematical Reviews) "... This remarkable book is a comprehensive and systematic study of research results obtained especially in the last ten years. The very clear presentation concentrates on basic ideas, fundamental combinatorial structures, and crucial algorithmic techniques. The wealth of results is cleverly organized along these guidelines and within the framework of some detailed case studies. A large number of figures and examples also aid the understanding of the material. It can therefore be highly recommended as an early graduate text, but it should also prove essential to researchers and professionals in the applied fields of computer-aided design, computer graphics, and robotics." (Biometrical Journal)
Since the study of wavelets is a relatively new area, with much of the research coming from mathematicians, most of the literature uses terminology, concepts and proofs that may, at times, be difficult and intimidating for the engineer. Wavelet Basics has therefore been written as an introductory book for scientists and engineers. The mathematical presentation has been kept simple, with the concepts presented in elaborate detail in a terminology that engineers will find familiar. Difficult ideas are illustrated with examples, which will also aid in the development of an intuitive insight. Chapter 1 reviews the basics of signal transformation and discusses the concepts of duals and frames. Chapter 2 introduces the wavelet transform, contrasts it with the short-time Fourier transform and clarifies the names of the different types of wavelet transforms. Chapter 3 links multiresolution analysis, orthonormal wavelets and the design of digital filters. Chapter 4 gives a tour d'horizon of topics of current interest: wavelet packets and discrete-time wavelet transforms, and concludes with applications in signal processing.
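As a concrete hint of the multiresolution/filter-bank link made in Chapter 3, the sketch below (not taken from the book; the Haar wavelet and NumPy are chosen only for illustration) computes one analysis level of a discrete wavelet transform as a lowpass and a highpass branch, each downsampled by two:

```
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar discrete wavelet transform.

    The signal is split into a lowpass (approximation) and a highpass
    (detail) branch, each downsampled by two -- the basic analysis
    filter-bank step behind multiresolution analysis.
    """
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                       # pad to even length
        x = np.append(x, x[-1])
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # lowpass + downsample
    detail = (even - odd) / np.sqrt(2)   # highpass + downsample
    return approx, detail

# Example: a slow ramp with a sharp jump inside one sample pair
signal = np.linspace(0.0, 1.0, 16)
signal[9:] += 5.0
a, d = haar_dwt_level(signal)
print("approximation:", np.round(a, 3))
print("detail:       ", np.round(d, 3))  # large only at the jump
```

The approximation branch captures the slow trend while the detail branch is essentially zero except where the signal changes abruptly, which is the intuition behind wavelet compression and denoising.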
This book is aimed at those using colour image processing or researching new applications or techniques of colour image processing. It has been clear for some time that there is a need for a text dedicated to colour. We foresee a great increase in the use of colour over the coming years, both in research and in industrial and commercial applications. We are sure this book will prove a useful reference text on the subject for practicing engineers and scientists, for researchers, and for students at doctoral and, perhaps, masters level. It is not intended as an introductory text on image processing; rather, it assumes that the reader is already familiar with basic image processing concepts such as image representation in digital form, linear and non-linear filtering, transforms, edge detection and segmentation, and so on, and has some experience with using, at the least, monochrome equipment. There are many books covering these topics and some of them are referenced in the text, where appropriate. The book covers a restricted, but nevertheless very important, subset of image processing concerned with natural colour (that is, colour as perceived by the human visual system). This is an important field because it shares much technology and basic theory with colour television and video equipment, the market for which is worldwide and very large, and with the growing field of multimedia, including the use of colour images on the Internet.
This book comprises the refereed proceedings of the International Conferences SIP, WSE, and ICHCI 2012, held in conjunction with GST 2012 on Jeju Island, Korea, in November/December 2012. The papers presented were carefully reviewed and selected from numerous submissions and focus on various aspects of signal processing, image processing and pattern recognition, Web science and engineering, and human-computer interaction.
Mathematical morphology is a powerful methodology for the processing and analysis of geometric structure in signals and images. This book contains the proceedings of the fifth International Symposium on Mathematical Morphology and its Applications to Image and Signal Processing, held June 26-28, 2000, at Xerox PARC, Palo Alto, California. It provides a broad sampling of the most recent theoretical and practical developments of mathematical morphology and its applications to image and signal processing. Areas covered include: decomposition of structuring functions and morphological operators, morphological discretization, filtering, connectivity and connected operators, morphological shape analysis and interpolation, texture analysis, morphological segmentation, morphological multiresolution techniques and scale-spaces, and morphological algorithms and applications. Audience: The subject matter of this volume will be of interest to electrical engineers, computer scientists, and mathematicians whose research work is focused on the theoretical and practical aspects of nonlinear signal and image processing. It will also be of interest to those working in computer vision, applied mathematics, and computer graphics.
"This book is concerned with a probabilistic approach for image analysis, mostly from the Bayesian point of view, and the important Markov chain Monte Carlo methods commonly used... This book will be useful, especially to researchers with a strong background in probability and an interest in image analysis. The author has presented the theory with rigor; he doesn't neglect applications, providing numerous examples of applications to illustrate the theory." (Mathematical Reviews)
This book constitutes the refereed proceedings of the International Conference on Artificial Intelligence and Computational Intelligence, AICI 2012, held in Chengdu, China, in October 2012. The 163 revised full papers presented were carefully reviewed and selected from 724 submissions. The papers are organized in topical sections on applications of artificial intelligence; applications of computational intelligence; data mining and knowledge discovering; evolution strategy; intelligent image processing; machine learning; neural networks; pattern recognition.
This book is a collection of several tutorials from the EUROGRAPHICS '90 conference in Montreux. The conference was held under the motto "IMAGES: Synthesis, Analysis and Interaction", and the tutorials, partly presented in this volume, reflect the conference theme. As such, this volume provides a unique collection of advanced texts on 'traditional' computer graphics as well as of tutorials on image processing and image reconstruction. As with all the volumes of the series "Advances in Computer Graphics", the contributors are leading experts in their respective fields. The chapter Design and Display of Solid Models provides an extended introduction to interactive graphics techniques for design, fast display, and high-quality rendering of solid models. The text focuses on techniques for Constructive Solid Geometry (CSG). The following topics are treated in depth: interactive design techniques (specification of curves, surfaces and solids; graphical user interfaces; procedural languages and direct manipulation) and display techniques (depth-buffer, scan-line and ray-tracing techniques; CSG classification techniques; efficiency-improving methods; software and hardware implementations).
Analyzing Video Sequences of Multiple Humans: Tracking, Posture Estimation and Behavior Recognition describes some computer vision-based methods that analyze video sequences of humans. More specifically, methods for tracking multiple humans in a scene, estimating postures of a human body in 3D in real-time, and recognizing a person's behavior (gestures or activities) are discussed. For the tracking algorithm, the authors developed a non-synchronous method that tracks multiple persons by exploiting a Kalman filter that is applied to multiple video sequences. For estimating postures, an algorithm is presented that locates the significant points which determine postures of a human body, in 3D in real-time. Human activities are recognized from a video sequence by the HMM (Hidden Markov Models)-based method that the authors pioneered. The effectiveness of the three methods is shown by experimental results.
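The non-synchronous multi-camera tracker is specific to the book, but the Kalman filter at its core can be illustrated with a generic constant-velocity sketch (the state layout, noise covariances and measurements below are assumptions for illustration, not the authors' settings):

```
import numpy as np

# State: [x, y, vx, vy]; measurement: [x, y] (image-plane position)
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity motion
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # we only observe position
Q = 0.01 * np.eye(4)                         # process noise (assumed)
R = 4.0 * np.eye(2)                          # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for a single tracked person."""
    # Predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new position measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0, 1.0, 0.5])           # initial state guess
P = np.eye(4)
for z in [np.array([1.1, 0.4]), np.array([2.0, 1.1]), np.array([2.9, 1.6])]:
    x, P = kalman_step(x, P, z)
print(np.round(x, 2))                        # estimated position and velocity
```

Running one such filter per person, and feeding it measurements from whichever camera observes that person at a given instant, is the basic mechanism the book's non-synchronous formulation builds on.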
This book constitutes the refereed proceedings of the Second International Conference on Intelligent Interactive Technologies and Multimedia, IITM 2013, held in Allahabad, India, in March 2013. The 15 revised full papers and the 12 revised short papers were carefully reviewed and selected from more than 90 submissions. The papers present the latest research and development in the areas of intelligent interactive technologies, human-computer interaction and multimedia.
This book constitutes the refereed proceedings of the 24th International Symposium on Algorithms and Computation, ISAAC 2013, held in Hong Kong, China in December 2013. The 67 revised full papers presented together with 2 invited talks were carefully reviewed and selected from 177 submissions for inclusion in the book. The focus of the volume is on the following topics: computational geometry, pattern matching, computational complexity, internet and social network algorithms, graph theory and algorithms, scheduling algorithms, fixed-parameter tractable algorithms, algorithms and data structures, algorithmic game theory, approximation algorithms and network algorithms.
The author begins with a basic introduction to robot control and then considers the important problems to be overcome: delays or noisy control lines, feedback and response information, and predictive displays. Readers are assumed to have a basic understanding of robotics, though this may be their first exposure to the subject of telerobotics. Both professional engineers and roboticists will find this an invaluable introduction to this subject.
Image Technology Design: A Perceptual Approach is an essential reference for both academic and professional researchers in the fields of image technology, image processing and coding, image display, and image quality. It bridges the gap between academic research on visual perception and image quality and applications of such research in the design of imaging systems. This book has been written from the point of view of an electrical engineer interested in the display, processing and coding of images, and frequently involved in applying knowledge from visual psychophysics, experimental psychology, statistics, etc., to the design of imaging systems. It focuses on the exchange of ideas between technical disciplines in image technology design (such as image display or printer design and image processing) and visual psychophysics. This is accomplished by the consistent use of a single mathematical approach (based on linear vector spaces) throughout. Known facts from color vision, image sampling and quantization are given a new formulation and, in some instances, a new interpretation.
This book constitutes revised selected papers from the 9th International Gesture Workshop, GW 2011, held in Athens, Greece, in May 2011. The 24 papers presented were carefully reviewed and selected from 35 submissions. They are ordered in five sections named: human computer interaction; cognitive processes; notation systems and animation; gestures and signs: linguistic analysis and tools; and gestures and speech.
Mathematical morphology (MM) is a theory for the analysis of spatial structures. It is called morphology since it aims at analysing the shape and form of objects, and it is mathematical in the sense that the analysis is based on set theory, topology, lattice algebra, random functions, etc. MM is not only a theory, but also a powerful image analysis technique. The purpose of the present book is to provide the image analysis community with a snapshot of current theoretical and applied developments of MM. The book consists of forty-five contributions classified by subject. It demonstrates a wide range of topics suited to the morphological approach.
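As a minimal illustration of the kind of operator the morphological approach is built on (a generic sketch of binary erosion and dilation, not drawn from any contribution in this volume), each output pixel asks whether a structuring element fits inside, or hits, the foreground set:

```
import numpy as np

def erode(img, se):
    """Binary erosion: a pixel survives only if the structuring element,
    centred there, fits entirely inside the foreground set."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2,), (w // 2,)), constant_values=0)
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            window = pad[i:i + h, j:j + w]
            out[i, j] = np.all(window[se == 1] == 1)
    return out

def dilate(img, se):
    """Binary dilation: the foreground grows wherever the (reflected)
    structuring element hits at least one foreground pixel."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2,), (w // 2,)), constant_values=0)
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            window = pad[i:i + h, j:j + w]
            out[i, j] = np.any(window[se[::-1, ::-1] == 1] == 1)
    return out

img = np.zeros((7, 7), dtype=int)
img[2:5, 2:6] = 1                      # a small rectangle
se = np.ones((3, 3), dtype=int)        # 3x3 square structuring element
print(erode(img, se))                  # rectangle shrinks by one pixel
print(dilate(img, se))                 # rectangle grows by one pixel
```

Compositions of these two primitives (openings, closings, connected operators, and so on) give rise to the filtering, segmentation and scale-space techniques covered in the contributions.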
Fourier Vision provides a new treatment of figure-ground segmentation in scenes comprising transparent, translucent, or opaque objects. Exploiting the relative motion between figure and ground, this technique deals explicitly with the separation of additive signals and makes no assumptions about the spatial or spectral content of the images, with segmentation being carried out phasor by phasor in the Fourier domain. It works with several camera configurations, such as camera motion and short-baseline binocular stereo, and performs best on images with small velocities/displacements, typically one to ten pixels per frame. The book also addresses the use of Fourier techniques to estimate stereo disparity and optical flow. Numerous examples are provided throughout. Fourier Vision will be of value to researchers in image processing & computer vision and, especially, to those who have to deal with superimposed transparent or translucent objects. Researchers in application areas such as medical imaging and acoustic signal processing will also find this of interest.
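The phasor-by-phasor idea can be made concrete with a simplified two-frame statement of the additive model (an illustrative reduction, assuming the inter-frame phase factors are known; the book also treats their estimation and the short-baseline stereo case):

```
\hat{I}_t(\boldsymbol{\omega})
  = F(\boldsymbol{\omega})\,e^{-i\,\boldsymbol{\omega}\cdot\mathbf{v}_f t}
  + G(\boldsymbol{\omega})\,e^{-i\,\boldsymbol{\omega}\cdot\mathbf{v}_g t},
\qquad
\begin{pmatrix} \hat{I}_0(\boldsymbol{\omega}) \\ \hat{I}_1(\boldsymbol{\omega}) \end{pmatrix}
  =
\begin{pmatrix}
  1 & 1 \\
  e^{-i\,\boldsymbol{\omega}\cdot\mathbf{v}_f} & e^{-i\,\boldsymbol{\omega}\cdot\mathbf{v}_g}
\end{pmatrix}
\begin{pmatrix} F(\boldsymbol{\omega}) \\ G(\boldsymbol{\omega}) \end{pmatrix}
```

Here F and G are the Fourier transforms of the figure and ground images and v_f and v_g are their (distinct) velocities; inverting the 2 x 2 system at each spatial frequency separates the two layers, which is why no assumption about their spatial or spectral content is needed.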
Statistical Processing Techniques for Noisy Images presents a statistical framework to design algorithms for target detection, tracking, segmentation and classification (identification). Its main goal is to provide the reader with efficient tools for developing algorithms that solve his/her own image processing applications. In particular, such topics as hypothesis test-based detection, fast active contour segmentation and algorithm design for non-conventional imaging systems are comprehensively treated, from theoretical foundations to practical implementations. With a large number of illustrations and practical examples, this book serves as an excellent textbook or reference book for senior or graduate level courses on statistical signal/image processing, as well as a reference for researchers in related fields.
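As a toy illustration of hypothesis test-based detection (a generic sketch assuming a known target template in white Gaussian noise, not one of the book's algorithms), the likelihood-ratio test reduces to correlating the observation with the template and thresholding the normalized statistic:

```
import numpy as np

rng = np.random.default_rng(0)

# Known 8x8 target template (assumed) and noise level
template = np.zeros((8, 8))
template[2:6, 2:6] = 1.0
sigma = 1.0

def detect(patch, template, sigma, threshold):
    """Likelihood-ratio test for 'target present' vs 'noise only' under
    white Gaussian noise: correlate with the template and threshold."""
    statistic = np.sum(patch * template) / (sigma * np.linalg.norm(template))
    return statistic > threshold, statistic

# H1: target plus noise          H0: noise only
h1 = template + sigma * rng.standard_normal((8, 8))
h0 = sigma * rng.standard_normal((8, 8))
print(detect(h1, template, sigma, threshold=3.0))   # usually True
print(detect(h0, template, sigma, threshold=3.0))   # usually False
```

Under the noise-only hypothesis the statistic is standard normal, so the threshold directly controls the false-alarm probability, which is the basic trade-off such detection algorithms are designed around.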
The three volume set LNCS 7583, 7584 and 7585 comprises the Workshops and Demonstrations which took place in connection with the European Conference on Computer Vision, ECCV 2012, held in Firenze, Italy, in October 2012. A total of 179 workshop papers and 23 demonstration papers were carefully reviewed and selected for inclusion in the proceedings. They were held at workshops with the following themes: non-rigid shape analysis and deformable image alignment; visual analysis and geo-localization of large-scale imagery; Web-scale vision and social media; video event categorization, tagging and retrieval; re-identification; biological and computer vision interfaces; where computer vision meets art; consumer depth cameras for computer vision; unsolved problems in optical flow and stereo estimation; what's in a face?; color and photometry in computer vision; computer vision in vehicle technology: from earth to mars; parts and attributes; analysis and retrieval of tracked events and motion in imagery streams; action recognition and pose estimation in still images; higher-order models and global constraints in computer vision; information fusion in computer vision for concept recognition; 2.5D sensing technologies in motion: the quest for 3D; benchmarking facial image analysis technologies.
This book constitutes the refereed proceedings of the Third Workshop on Human Behavior Understanding, HBU 2012, held in Vilamoura, Portugal, in October 2012. The 14 revised papers presented were carefully reviewed and selected from 31 submissions. The papers are organized in topical sections on sensing human behavior; social and affective signals; human-robot interaction; imitation and learning from demonstration.
Still Image Compression on Parallel Computer Architectures investigates the application of parallel-processing techniques to digital image compression. Digital image compression is used to reduce the number of bits required to store an image in computer memory and/or transmit it over a communication link. Over the past decade advancements in technology have spawned many applications of digital imaging, such as photo videotex, desktop publishing, graphic arts, color facsimile, newspaper wire phototransmission and medical imaging. For many other contemporary applications, such as distributed multimedia systems, rapid transmission of images is necessary. Dollar cost as well as time cost of transmission and storage tend to be directly proportional to the volume of data. Therefore, the application of digital image compression techniques becomes necessary to minimize costs.

A number of digital image compression algorithms have been developed and standardized. With the success of these algorithms, research effort is now directed towards improving implementation techniques. The Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG) are international organizations which have developed digital image compression standards. Hardware (VLSI chips) implementing the JPEG image compression algorithm is available. Such hardware is specific to image compression only and cannot be used for other image processing applications. A flexible means of implementing digital image compression algorithms is still required. An obvious method of processing different imaging applications on general-purpose hardware platforms is to develop software implementations.

JPEG uses an 8 x 8 block of image samples as the basic element for compression. These blocks are processed sequentially. There is always the possibility of having similar blocks in a given image. If similar blocks in an image are located, then repeated compression of these blocks is not necessary. By locating similar blocks in the image, the speed of compression can be increased and the size of the compressed image can be reduced. Based on this concept, an enhancement to the JPEG algorithm is proposed, called the Block Comparator Technique (BCT).

Still Image Compression on Parallel Computer Architectures is designed for advanced students and practitioners of computer science. This comprehensive reference provides a foundation for understanding digital image compression techniques and parallel computer architectures.
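The block-comparison idea behind the proposed enhancement can be sketched in a few lines (a simplified illustration that detects exact duplicate 8 x 8 tiles by hashing; the actual BCT and the JPEG transform/entropy-coding stages are more involved):

```
import numpy as np

def unique_blocks(image, block=8):
    """Partition an image into block x block tiles and keep only one
    copy of each distinct tile, plus an index map for reconstruction."""
    h, w = image.shape
    assert h % block == 0 and w % block == 0
    seen, tiles, index_map = {}, [], []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            tile = image[i:i + block, j:j + block]
            key = tile.tobytes()             # exact-match comparison
            if key not in seen:
                seen[key] = len(tiles)
                tiles.append(tile)
            row.append(seen[key])
        index_map.append(row)
    return tiles, np.array(index_map)

# A synthetic image with large flat regions produces many repeated tiles
img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 200
tiles, idx = unique_blocks(img)
print(len(tiles), "distinct 8x8 blocks out of", idx.size)
```

Only the distinct tiles would then need to pass through the DCT, quantization and entropy-coding stages, which is where the claimed gains in compression speed and compressed-image size come from.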
You may like...
Image Processing for Automated Diagnosis… by Kalpana Chauhan, Rajeev Kumar Chauhan (Paperback, R3,487)
Intelligent Image and Video Compression… by David R. Bull, Fan Zhang (Paperback, R2,606)
Cognitive Systems and Signal Processing… by Yudong Zhang, Arun Kumar Sangaiah (Paperback, R2,587)
Diagnostic Biomedical Signal and Image… by Kemal Polat, Saban Ozturk (Paperback, R2,952)
Cardiovascular and Coronary Artery… by Ayman S. El-Baz, Jasjit S. Suri (Paperback, R3,802)