Blind deconvolution is a classical image processing problem that has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration; rather, the basic issue of deconvolvability is explored from a theoretical viewpoint. Some authors claim very good results while quite a few claim that blind restoration does not work, and the authors clearly detail when such methods are expected to work and when they will not. To avoid the assumptions needed for convergence analysis in the Fourier domain, the authors use a general method of convergence analysis for alternate minimization, based on the three-point and four-point properties of points in the image space. They prove that all points in the image space satisfy the three-point property and derive the conditions under which the four-point property is satisfied. This provides the conditions under which alternate minimization for blind deconvolution converges with a quadratic prior. Since the convergence properties depend on the chosen priors, one should design priors that avoid trivial solutions. Hence, a sparsity-based solution is also provided for blind deconvolution, using image priors whose cost increases with the amount of blur, which is another way to prevent trivial solutions in joint estimation. This book will be a highly useful resource for researchers and academicians in the specific area of blind deconvolution.
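To make the alternating structure concrete, here is a minimal Python sketch of alternate minimization for blind deconvolution with quadratic priors. For simplicity it performs closed-form updates in the Fourier domain (unlike the book's more general analysis), and the parameter names and values are illustrative assumptions, not the book's algorithm.

```python
import numpy as np

def am_blind_deconv(y, lam=1e-2, mu=1e-2, iters=50):
    """Alternate minimization for y = h * x + n (illustrative sketch).

    Minimizes ||y - h*x||^2 + lam*||x||^2 + mu*||h||^2 by alternating
    closed-form, Wiener-like updates in the Fourier domain. lam, mu
    and iters are hypothetical settings for illustration only.
    """
    Y = np.fft.fft2(y)
    H = np.ones_like(Y)  # start from the identity (delta) blur
    for _ in range(iters):
        # x-step: with h fixed, the quadratic prior yields a Wiener filter
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
        # h-step: the symmetric update with x held fixed
        H = np.conj(X) * Y / (np.abs(X) ** 2 + mu)
    return np.real(np.fft.ifft2(X)), np.real(np.fft.ifft2(H))
```

Note that for small lam and mu, starting from the identity blur these updates stay near the trivial pair (x, h) ≈ (y, δ), which concretely illustrates why the book emphasizes designing priors that penalize trivial solutions.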
Hyperspectral Image Fusion is the first text dedicated to fusion techniques for hyperspectral data, a huge volume of data consisting of a very large number of images. This monograph presents recent advances in research on the visualization of hyperspectral data. It provides a set of pixel-based fusion techniques, each of which is based on a different framework and has its own advantages and disadvantages. The techniques are presented in complete detail so that practitioners can easily implement them. It is also demonstrated how one can select only a few specific bands to speed up the fusion process by exploiting the spatial correlation between successive bands of the hyperspectral data. While techniques for the fusion of hyperspectral images are being developed, it is also important to establish a framework for the objective assessment of such techniques. This monograph has a dedicated chapter describing various fusion performance measures applicable to hyperspectral image fusion. It also presents a notion of the consistency of a fusion technique, which can be used to verify the suitability and applicability of a technique for the fusion of a very large number of images. This book will be a highly useful resource for students, researchers, academicians and practitioners in the specific area of hyperspectral image fusion, as well as in generic image fusion.
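As a concrete illustration of the band-selection idea, the following Python sketch greedily skips bands that are highly correlated with the most recently selected band. The correlation measure and threshold are assumptions for illustration, not the monograph's exact scheme.

```python
import numpy as np

def select_bands(cube, threshold=0.95):
    """Greedy band selection before fusion (illustrative sketch).

    cube: hyperspectral data of shape (bands, H, W).
    A band is kept only when its correlation with the most recently
    selected band falls below `threshold`; highly redundant
    neighbouring bands are skipped to speed up fusion.
    """
    selected = [0]
    ref = cube[0].ravel()
    for b in range(1, cube.shape[0]):
        cur = cube[b].ravel()
        if np.corrcoef(ref, cur)[0, 1] < threshold:
            selected.append(b)
            ref = cur
    return selected
```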
Super-Resolution Imaging serves as an essential reference for both academicians and practicing engineers. It can be used both as a text for advanced courses in imaging and as a desk reference for those working in multimedia, electrical engineering, computer science, and mathematics. The first book to cover the new research area of super-resolution imaging, this text includes work on the following groundbreaking topics:
* Image zooming based on wavelets and generalized interpolation
* Super-resolution from sub-pixel shifts
* Use of blur as a cue
* Use of warping in super-resolution
* Resolution enhancement using multiple apertures
* Super-resolution from motion data
* Super-resolution from compressed video
* Limits in super-resolution imaging
Written by the leading experts in the field, Super-Resolution Imaging presents a comprehensive analysis of current technology, along with new research findings and directions for future work.
This book presents a unique guide to heritage preservation problems and the corresponding state-of-the-art digital techniques for achieving plausible solutions. It covers various methods, ranging from data acquisition and digital imaging to computational methods for reconstructing the original (pre-damaged) appearance of heritage artefacts.

The case studies presented here are mostly drawn from India's tangible and intangible heritage, which is very rich and multi-dimensional. The contributing authors have been working in their respective fields for years and present their methods so lucidly that they can be easily reproduced and implemented by general practitioners of heritage curation. The preservation methods, reconstruction methods, and corresponding results are all illustrated with a wealth of colour figures and images.

The book consists of sixteen chapters divided into five broad sections, namely (i) Digital System for Heritage Preservation, (ii) Signal and Image Processing, (iii) Audio and Video Processing, (iv) Image and Video Database, and (v) Architectural Modelling and Visualization. The first section presents various state-of-the-art tools and technologies for data acquisition, including an interactive graphical user interface (GUI) annotation tool and a specialized imaging system for generating realistic visual forms of the artefacts. Numerous useful methods and algorithms for processing vocal, visual and tactile signals related to heritage preservation are presented in the second and third sections. In turn, the fourth section provides two important image and video databases, catering to members of the computer vision community with an interest in the domain of digital heritage. Finally, examples of reconstructing ruined monuments on the basis of historic documents are presented in the fifth section. In essence, this book offers a pragmatic appraisal of the uses of digital technology in the various aspects of preserving tangible and intangible heritage.
This book presents various video processing methodologies that are useful for distance education. The motivation is to devise new multimedia technologies suitable for better representation of instructional videos by exploiting the temporal redundancies present in the original video. This addresses many of the issues related to the memory and bandwidth limitations of lecture videos. The methods described in the book focus on a key-frame-based approach used to time-shrink, repackage and retarget instructional videos. All the methods require a preprocessing step of shot detection and recognition, which is given its own chapter. Frames that are well-written and visually distinct are selected as key-frames. A super-resolution-based image enhancement scheme is suggested for refining the key-frames for better legibility. These key-frames, together with the audio and metadata describing the mutual linkage among the various media components, form a repackaged lecture video which, on programmed playback, renders an estimate of the original video in a substantially compressed form. The book also presents a legibility-retentive retargeting of this instructional media to mobile devices with limited display size. All these technologies contribute to enhancing the outreach of distance education programs. Distance education is now a big business with an annual turnover of 10-12 billion dollars, and we expect this to increase rapidly. Use of the proposed technology will help deliver educational videos to those with limited network bandwidth, and to those on the move, by delivering them effectively to mobile handsets (including PDAs). Thus, technology developers, practitioners, and content providers will find the material very useful.
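For a flavour of the preprocessing step, here is a minimal Python sketch of histogram-based shot detection of the kind commonly used before key-frame selection. The distance measure and threshold are illustrative assumptions rather than the book's exact method.

```python
import numpy as np

def detect_shot_cuts(frames, bins=32, cut_thresh=0.4):
    """Histogram-difference shot detection (illustrative sketch).

    frames: iterable of grayscale frames (2-D uint8 arrays).
    Flags frame i as a shot cut when the total-variation distance
    between its intensity histogram and the previous frame's
    histogram exceeds cut_thresh (a hypothetical setting).
    """
    cuts, prev = [], None
    for i, f in enumerate(frames):
        hist = np.histogram(f, bins=bins, range=(0, 256))[0].astype(float)
        hist /= max(hist.sum(), 1.0)
        if prev is not None and 0.5 * np.abs(hist - prev).sum() > cut_thresh:
            cuts.append(i)
        prev = hist
    return cuts
```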
Computer vision is becoming increasingly important in several industrial applications, such as automated inspection, robotic manipulation and autonomous vehicle guidance. These tasks are performed in a 3-D world, and it is imperative to gather reliable information on the 3-D structure of the scene. This book is about passive techniques for depth recovery, where the scene is illuminated only by natural light, as opposed to active methods, where a special lighting device is used for scene illumination. Passive methods have a wider range of applicability and also correspond to the way humans infer 3-D structure from visual images.
Motion-Free Super-Resolution is a compilation of very recent work on various methods of generating super-resolution (SR) images from a set of low-resolution images. The current literature on this topic deals primarily with the use of motion cues for generating SR images; these cues, it is shown, have their advantages and disadvantages. In contrast, this book shows that cues other than motion can also be used for the same purpose, and addresses both the merits and demerits of these new techniques. Motion-Free Super-Resolution supersedes much of the lead author's previous edited volume, "Super-Resolution Imaging," and includes an up-to-date account of the latest research efforts in this fast-moving field. This sequel also features a style of presentation closer to that of a textbook, with an emphasis on teaching and explanation rather than scholarly presentation.
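To illustrate how a non-motion cue such as blur can drive super-resolution, here is a toy Python sketch that recovers a high-resolution image from several observations blurred with different, known Gaussian kernels. The Gaussian observation model, the gradient-descent solver, and all parameters are assumptions for illustration, not any method from the book.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def sr_from_blur(lr_images, sigmas, scale=2, lam=1e-3, iters=200):
    """Toy motion-free super-resolution using blur as the cue.

    Assumed observation model: y_k = D(G_k(x)), a Gaussian blur of
    known width sigmas[k] followed by decimation by `scale`. Recovers
    x by gradient descent on sum_k ||y_k - D(G_k(x))||^2 + lam*||x||^2.
    """
    x = zoom(lr_images[0].astype(float), scale, order=1)  # bilinear init
    step = 1.0 / (len(lr_images) + lam)                   # conservative step size
    for _ in range(iters):
        grad = lam * x
        for y, s in zip(lr_images, sigmas):
            resid = gaussian_filter(x, s)[::scale, ::scale] - y
            up = np.zeros_like(x)
            up[::scale, ::scale] = resid    # adjoint of the decimation
            grad += gaussian_filter(up, s)  # symmetric kernel: blur is (approx.) self-adjoint
        x -= step * grad
    return x
```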
Ambulation Analysis in Wearable ECG demonstrates why, in light of recent developments, the wearable ECG recorder represents a significant innovation in the healthcare field. About this book:
* Examines the viability of wearable ECG in cardiac monitoring
* Includes chapters on the hardware details, written by practitioners who have personally developed such devices
* Bridges the gap between hardware and algorithmic developments, with chapters that specifically discuss the hardware aspects and their corresponding calibration issues
* Presents a useful text for both practitioners and researchers in biomedical engineering and related interdisciplinary fields
* Assumes basic familiarity with digital signal processing and linear algebra
Depth recovery is important in machine vision applications where a 3-dimensional structure must be derived from 2-dimensional images. This is an active area of research with applications ranging from industrial robotics to military imaging. This book provides comprehensive details of the methodology, along with the complete mathematics and algorithms involved. Many new models, both deterministic and statistical, are introduced.
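As a toy illustration of the passive depth-from-defocus idea, the following Python sketch compares local contrast between two registered images focused at different depths. The window-variance measure is a simplification assumed for illustration, far cruder than the statistical models developed in the book.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def relative_focus_map(img_near, img_far, win=15):
    """Window-based depth-from-defocus index (illustrative sketch).

    Compares local contrast (variance over a win x win window) of two
    registered images focused at different depths. The ratio acts as
    a crude relative depth cue: regions that are sharper in img_near
    than in img_far are likely closer to the near focal plane.
    """
    def local_var(im):
        m = uniform_filter(im.astype(float), win)
        m2 = uniform_filter(im.astype(float) ** 2, win)
        return np.maximum(m2 - m ** 2, 1e-12)  # guard against round-off
    va, vf = local_var(img_near), local_var(img_far)
    return va / (va + vf)  # in (0, 1); higher = sharper in img_near
```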
This book presents and analyzes methods for performing image co-segmentation. The authors describe efficient solutions to this problem that ensure robustness and accuracy, and provide theoretical analysis for them. Six different methods for image co-segmentation are presented. These methods use concepts from statistical mode detection, subgraph matching, latent class graphs, region growing, graph CNNs, conditional encoder-decoder networks, meta-learning, conditional variational encoder-decoders, and attention mechanisms. The authors have included several block diagrams and illustrative examples for the ease of the reader. This book is a highly useful resource for researchers and academicians, not only in the specific area of image co-segmentation but also in the related areas of image processing, graph neural networks, statistical learning, and few-shot learning.
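As a concrete, much-simplified illustration of the mode-detection idea, here is a toy Python sketch that marks pixels belonging to colour modes shared by two images. The quantization scheme and parameters are illustrative assumptions, not any of the book's six methods.

```python
import numpy as np

def cosegment_pair(img_a, img_b, bins=16, top_k=3):
    """Toy co-segmentation via shared colour modes (illustrative).

    Quantizes both RGB images (H x W x 3, uint8) into a coarse colour
    histogram, keeps the top_k bins whose minimum count across the two
    images is largest (i.e. colour modes common to both), and returns
    boolean masks of the pixels falling in those shared modes.
    """
    def quantize(img):
        q = (img // (256 // bins)).astype(int)  # per-channel bin index
        return q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    qa, qb = quantize(img_a), quantize(img_b)
    n = bins ** 3
    ha = np.bincount(qa.ravel(), minlength=n)
    hb = np.bincount(qb.ravel(), minlength=n)
    shared = np.minimum(ha, hb)          # colours co-occurring in both images
    modes = np.argsort(shared)[-top_k:]  # the strongest shared modes
    return np.isin(qa, modes), np.isin(qb, modes)
```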