Welcome to Loot.co.za!
This book constitutes selected, revised and extended papers from the 13th International Conference on Computer Supported Education, CSEDU 2021, held as a virtual event in April 2021. The 27 revised full papers were carefully reviewed and selected from 143 submissions. They were organized in topical sections as follows: artificial intelligence in education; information technologies supporting learning; learning/teaching methodologies and assessment; social context and learning environments; ubiquitous learning; current topics.
This volume aims to stimulate discussion of research that uses data and digital images as an approach to understanding, analyzing and visualizing phenomena and experiments. The emphasis is placed not only on graphically representing data as a way of enhancing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology and Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented at the International Conference on Advanced Computational Engineering and Experimenting (ACE-X) conferences of 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were chosen particularly from materials research, medical applications, general concepts applied in simulations and image analysis, and other related problems of interest.
This book describes the technical problems and solutions for automatically recognizing and parsing a medical image into multiple objects, structures, or anatomies. It gives all the key methods, including state-of-the-art approaches based on machine learning, for recognizing or detecting, and parsing or segmenting, a cohort of anatomical structures in a medical image. Written by top experts in medical imaging, this book is ideal for university researchers and industry practitioners in medical imaging who want a complete reference on key methods, algorithms and applications in medical image recognition, segmentation and parsing of multiple objects. Learn:
- Research challenges and problems in medical image recognition, segmentation and parsing of multiple objects
- Methods and theories for medical image recognition, segmentation and parsing of multiple objects
- Efficient and effective machine learning solutions based on big datasets
- Selected applications of medical image parsing using proven algorithms
- The book can be used by beginners in the field, tracking from basic principles to how to bend the rules, in reader-friendly language throughout.
- The book is based on a popular blog, which doubles as a fantastic companion website: https://questionsindataviz.com/
- The author is a very experienced and well-respected practitioner in the field, with a good-sized following on social media: https://twitter.com/theneilrichards
This state-of-the-art set of handbooks provides medical physicists with a comprehensive overview of the field of nuclear medicine. In addition to describing the underlying, fundamental theories of the field, it includes the latest research and explores the practical procedures, equipment, and regulations that are shaping the field and its future. The set is split into three volumes, respectively titled: Instrumentation and Imaging Procedures; Modelling, Dosimetry and Radiation Protection; and Radiopharmaceuticals and Clinical Applications. Volume one, Instrumentation and Imaging Procedures, focuses primarily on providing a comprehensive review of the detection of radiation, ranging from an introduction to the history of nuclear medicine to the latest imaging technology. Volume two, Modelling, Dosimetry and Radiation Protection, explores the applications of mathematical modelling, dosimetry, and radiation protection in nuclear medicine. The third and final volume, Radiopharmaceuticals and Clinical Applications, highlights the production and application of radiopharmaceuticals and their role in clinical nuclear medicine practice. These books will be an invaluable resource for libraries, institutions, and clinical and academic medical physicists searching for a complete account of what defines nuclear medicine.
- The most comprehensive reference available, providing a state-of-the-art overview of the field of nuclear medicine
- Edited by a leader in the field, with contributions from a team of experienced medical physicists, chemists, engineers, scientists, and clinical medical personnel
- Includes the latest practical research in the field, in addition to explaining fundamental theory and the field's history
Principles of Synthetic Aperture Radar Imaging: A System Simulation Approach demonstrates the use of image simulation for SAR. It covers the various applications of SAR (including feature extraction, target classification, and change detection), provides a complete understanding of SAR principles, and illustrates the complete chain of a SAR operation. The book places special emphasis on ground-based SAR, but also explains spaceborne and airborne systems. It contains chapters on signal speckle, radar-signal models, sensor-trajectory models, SAR-image focusing, platform-motion compensation, and microwave scattering from random media. While discussing SAR image focusing and motion compensation, it presents processing algorithms and applications for feature extraction, target classification, and change detection. It also provides samples of simulation for various scenarios, with simulation flowcharts and results detailed throughout the book. Introducing SAR imaging from a systems point of view, the author:
- Considers the recent development of MIMO SAR technology
- Includes selected GPU implementations
- Provides a numerical analysis of system parameters (including platforms, sensors, and image focusing, and their influence)
- Explores wave-target interactions, signal transmission and reception, image formation, and motion compensation
- Covers platform motion compensation and error analysis, and their impact on final image radiometric and geometric quality
- Describes a ground-based SFMCW system
Principles of Synthetic Aperture Radar Imaging: A System Simulation Approach is dedicated to the use, study, and development of SAR systems. The book focuses on image formation or focusing, treats platform motion and image focusing, and is suitable for students, radar engineers, and microwave remote sensing researchers.
The main subject of the monograph is the fractional calculus in its discrete version. The volume is divided into three main parts. Part one contains a theoretical introduction to classical and fractional-order discrete calculus, where the fundamental role is played by the backward difference and sum. In the second part, selected applications of the discrete fractional calculus in discrete system control theory are presented. In discrete system identification, analysis and synthesis, one can consider integer- or fractional-order models based on fractional-order difference equations. The third part of the book is devoted to digital image processing.
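The fractional-order backward difference at the core of part one can be illustrated with a short sketch. This is a minimal Grunwald-Letnikov-style implementation on a unit-step grid, an illustrative assumption rather than code from the monograph:

```python
import numpy as np

def gl_coeffs(alpha, n):
    # Coefficients (-1)^j * C(alpha, j) for j = 0..n, via the recurrence
    # c_0 = 1, c_j = c_{j-1} * (j - 1 - alpha) / j.
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (j - 1 - alpha) / j
    return c

def fractional_backward_difference(f, alpha):
    # Fractional-order backward difference on a unit-step grid:
    # (Delta^alpha f)[k] = sum_{j=0}^{k} (-1)^j * C(alpha, j) * f[k - j].
    f = np.asarray(f, dtype=float)
    c = gl_coeffs(alpha, len(f) - 1)
    return np.array([np.dot(c[:k + 1], f[k::-1]) for k in range(len(f))])

# alpha = 1 recovers the classical backward difference f[k] - f[k - 1].
print(fractional_backward_difference([0.0, 1.0, 2.0, 3.0], 1.0))  # [0. 1. 1. 1.]
```

For non-integer alpha the difference at step k depends on all earlier samples, which is exactly the memory effect that distinguishes fractional-order models from integer-order ones.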
In the current age of information technology, the issues of distributing and utilizing images efficiently and effectively are of substantial concern. Solutions to many of the problems arising from these issues are provided by techniques of image processing, among which segmentation and compression are topics of this book. Image segmentation is a process for dividing an image into its constituent parts. For block-based segmentation using statistical classification, an image is divided into blocks and a feature vector is formed for each block by grouping statistics of its pixel intensities. Conventional block-based segmentation algorithms classify each block separately, assuming independence of feature vectors. Image Segmentation and Compression Using Hidden Markov Models presents a new algorithm that models the statistical dependence among image blocks by two-dimensional hidden Markov models (HMMs). Formulas for estimating the model according to the maximum likelihood criterion are derived from the EM algorithm. To segment an image, optimal classes are searched jointly for all the blocks by the maximum a posteriori (MAP) rule. The 2-D HMM is extended to multiresolution so that more context information is exploited in classification and fast progressive segmentation schemes can be formed naturally. The second issue addressed in the book is the design of joint compression and classification systems using the 2-D HMM and vector quantization. A classifier designed with the side goal of good compression often outperforms one aimed solely at classification because overfitting to training data is suppressed by vector quantization. Image Segmentation and Compression Using Hidden Markov Models is an essential reference source for researchers and engineers working in statistical signal processing or image processing, especially those who are interested in hidden Markov models. It is also of value to those working on statistical modeling.
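The block-based feature extraction step that such algorithms build on can be sketched as follows. The particular statistics (mean, standard deviation, gradient magnitudes) and block size are illustrative choices, not the book's exact features; the 2-D HMM then models the dependence among these block feature vectors downstream of this step:

```python
import numpy as np

def block_features(img, bs=4):
    # Divide the image into bs x bs blocks and form a feature vector per
    # block from simple statistics of its pixel intensities.
    H, W = img.shape
    rows = []
    for i in range(0, H - H % bs, bs):
        row = []
        for j in range(0, W - W % bs, bs):
            b = img[i:i + bs, j:j + bs].astype(float)
            row.append([b.mean(), b.std(),
                        np.abs(np.diff(b, axis=1)).mean(),   # horizontal variation
                        np.abs(np.diff(b, axis=0)).mean()])  # vertical variation
        rows.append(row)
    return np.array(rows)  # shape: (H // bs, W // bs, 4)

feats = block_features(np.random.rand(16, 16))
print(feats.shape)  # (4, 4, 4)
```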
A timely and authoritative guide to the state of the art of wave scattering Scattering of Electromagnetic Waves offers in three volumes a complete and up-to-date treatment of wave scattering by random discrete scatterers and rough surfaces. Written by leading scientists who have made important contributions to wave scattering over three decades, this new work explains the principles, methods, and applications of this rapidly expanding, interdisciplinary field. It covers both introductory and advanced material and provides students and researchers in remote sensing as well as imaging, optics, and electromagnetic theory with a one-stop reference to a wealth of current research results. Plus, Scattering of Electromagnetic Waves contains detailed discussions of both analytical and numerical methods, including cutting-edge techniques for the recovery of earth/land parametric information. The three volumes are entitled respectively Theories and Applications, Numerical Simulation, and Advanced Topics. In the first volume, Theories and Applications, Leung Tsang (University of Washington), Jin Au Kong (MIT), and Kung-Hau Ding (Air Force Research Lab) cover:
The arrival of the digital age has created the need to be able to store, manage, and digitally use an ever-increasing amount of video and audio material. Thus, video cataloguing has emerged as a requirement of the times. Video Cataloguing: Structure Parsing and Content Extraction explains how to efficiently perform video structure analysis as well as extract the basic semantic contents for video summarization, which is essential for handling large-scale video data. This book addresses the issues of video cataloguing, including video structure parsing and basic semantic word extraction, particularly for movie and teleplay videos. It starts by providing readers with a fundamental understanding of video structure parsing. It examines video shot boundary detection, recent research on video scene detection, and basic ideas for semantic word extraction, including video text recognition, scene recognition, and character identification. The book lists and introduces some of the most commonly used features in video analysis. It introduces and analyzes the most popular shot boundary detection methods and also presents recent research on movie scene detection as another important and critical step for video cataloguing, video indexing, and retrieval. The authors propose a robust movie scene recognition approach based on a panoramic frame and representative feature patch. They describe how to recognize characters in movies and TV series accurately and efficiently as well as how to use these character names as cataloguing items for an intelligent catalogue. The book proposes an interesting application of highlight extraction in basketball videos and concludes by demonstrating how to design and implement a prototype system of automatic movie and teleplay cataloguing (AMTC) based on the approaches introduced in the book.
VipIMAGE 2015 contains invited lectures and full papers presented at VipIMAGE 2015 - V ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing (Tenerife, Canary Islands, Spain, 19-21 October, 2015). International contributions from 19 countries provide comprehensive coverage of the current state of the art in the fields of: 3D Vision; Computational Bio-Imaging and Visualization; Computational Vision; Computer Aided Diagnosis, Surgery, Therapy and Treatment; Data Interpolation, Registration, Acquisition and Compression; Industrial Inspection; Image Enhancement; Image Processing and Analysis; Image Segmentation; Medical Imaging; Medical Rehabilitation; Physics of Medical Imaging; Shape Reconstruction; Signal Processing; Simulation and Modelling; Software Development for Image Processing and Analysis; Telemedicine Systems and their Applications; Tracking and Analysis of Movement and Deformation; Virtual Reality. VipIMAGE 2015 will be useful to academics, researchers and professionals in Biomechanics, Biomedical Engineering, Computational Vision (image processing and analysis), Computer Sciences, Computational Mechanics, Signal Processing, Medicine and Rehabilitation.
As science becomes increasingly computational, the limits of what is computationally tractable become a barrier to scientific progress. Many scientific problems, however, are amenable to human problem solving skills that complement computational power. By leveraging these skills on a larger scale - beyond the relatively few individuals currently engaged in scientific inquiry - there is the potential for new scientific discoveries. This book presents a framework for mapping open scientific problems into video games. The game framework combines computational power with human problem solving and creativity to work toward solving scientific problems that neither computers nor humans could previously solve alone. To maximize the potential contributors to scientific discovery, the framework designs a game to be played by people with no formal scientific background and incentivizes long-term engagement with a myriad of collaborative and competitive reward structures. The framework allows for the continual coevolution of the players and the game: as players gain expertise through gameplay, the game changes to become a better tool. The framework is validated by being applied to proteomics problems with the video game Foldit. Foldit players have contributed to novel discoveries in protein structure prediction, protein design, and protein structure refinement algorithms. The coevolution of human problem solving and computer tools in an incentivized game framework is an exciting new scientific pathway that can lead to discoveries currently unreachable by other methods.
This interdisciplinary study participates in the ongoing critical conversation about postwar American poetry and visual culture, while advancing that field into the arena of the museum. Turning to contemporary poems about the visual arts that foreground and interrogate a museum setting, the book demonstrates the particular importance of the museum as a cultural site that is both inspiration and provocation for poets. The study uniquely bridges the "dual canon" in contemporary poetry (and calls the lyric/avant-garde distinction into question) by analyzing museum-sponsored anthologies as well as poems by John Ashbery, Richard Howard, Kenneth Koch, Kathleen Fraser, Cole Swensen, Anne Carson, and others. Through these case studies of poets with diverse affiliations, the author shows that the boom in ekphrasis in the past 20 years is not only an aesthetic but a critical phenomenon, a way that poets have come to terms with the critical dilemmas of our moment. Highlighting the importance of poets' "peripheral vision"-awareness of the institutional conditions that frame encounters with art-the author contends that a museum visit becomes a forum for questioning oppositions that have preoccupied literary criticism for the past 50 years: homage and innovation, modernism and postmodernism, subjectivity and collectivity. The study shows that ekphrasis becomes a strategy for negotiating these impasses-a mode of political inquiry, a meditation on canonization, a venue for comic appraisal of institutionalization, and a means of "site-specific" feminist revision-in a vital synthesis of critique, perspicacity, and pleasure.
The slime mould Physarum polycephalum is a large cell visible to the unaided eye. It behaves as an intelligent nonlinear spatially extended active medium encapsulated in an elastic membrane. The cell optimises its growth patterns in configurations of attractants and repellents. This behaviour is interpreted as computation. Numerous prototypes of slime mould computers were designed to solve problems of computational geometry, graphs and transport networks and to implement universal computing circuits. In this unique set of scientific photographs and micrographs, the leading experts in computer science, biology, chemistry and material science illustrate in superb detail the nature of the slime mould computers and hybrid devices. Every photograph or micrograph in this book is of real scientific, theoretical or technological interest. Each entry includes a self-contained description of how the visualised phenomenon is used in the relevant slime mould computer. This atlas is unique in providing the depth and breadth of knowledge in harnessing behaviour of the slime mould to perform computation. It will help readers to understand how exploitation of biological processes has sparked new ideas and spurred progress in many fields of science and engineering.
This book presents a selection of chapters, written by leading international researchers, related to the automatic analysis of gestures from still images and multi-modal RGB-Depth image sequences. It offers a comprehensive review of vision-based approaches for supervised gesture recognition methods that have been validated by various challenges. Several aspects of gesture recognition are reviewed, including data acquisition from different sources, feature extraction, learning, and recognition of gestures.
Background modeling and foreground detection are important steps in video processing, used to robustly detect moving objects in challenging environments. This requires effective methods for dealing with dynamic backgrounds and illumination changes as well as algorithms that must meet real-time and low memory requirements. Incorporating both established and new ideas, Background Modeling and Foreground Detection for Video Surveillance provides a complete overview of the concepts, algorithms, and applications related to background modeling and foreground detection. Leaders in the field address a wide range of challenges, including camera jitter and background subtraction. The book presents the top methods and algorithms for detecting moving objects in video surveillance. It covers statistical models, clustering models, neural networks, and fuzzy models. It also addresses sensors, hardware, and implementation issues and discusses the resources and datasets required for evaluating and comparing background subtraction algorithms. The datasets and codes used in the text, along with links to software demonstrations, are available on the book's website. A one-stop resource on up-to-date models, algorithms, implementations, and benchmarking techniques, this book helps researchers and industry developers understand how to apply background models and foreground detection methods to video surveillance and related areas, such as optical motion capture, multimedia applications, teleconferencing, video editing, and human-computer interfaces. It can also be used in graduate courses on computer vision, image processing, real-time architecture, machine learning, or data mining.
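The basic idea of background subtraction can be sketched with a running-average background model, a minimal baseline and not one of the statistical, clustering, neural, or fuzzy models surveyed in the book; the learning rate and threshold values are illustrative assumptions:

```python
import numpy as np

def foreground_masks(frames, alpha=0.05, thresh=0.15):
    # Running-average background model: the background estimate B is updated
    # as B <- (1 - alpha) * B + alpha * frame, and pixels whose intensity
    # deviates from B by more than thresh are flagged as foreground.
    bg = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        frame = frame.astype(float)
        masks.append(np.abs(frame - bg) > thresh)
        bg = (1 - alpha) * bg + alpha * frame
    return masks

# A static scene produces empty masks; a bright patch entering the last
# frame is flagged as foreground.
static = [np.zeros((8, 8)) for _ in range(5)]
moving = np.zeros((8, 8))
moving[2:4, 2:4] = 1.0
masks = foreground_masks(static + [moving])
print(masks[-1].sum())  # 4 foreground pixels
```

The slow update rate lets the model absorb gradual illumination changes while still flagging fast-moving objects, which is the trade-off the book's more sophisticated models are designed to handle better.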
Broad in scope, Semantic Multimedia Analysis and Processing provides a complete reference of techniques, algorithms, and solutions for the design and the implementation of contemporary multimedia systems. Offering a balanced, global look at the latest advances in semantic indexing, retrieval, analysis, and processing of multimedia, the book features the contributions of renowned researchers from around the world. Its contents are based on four fundamental thematic pillars: 1) information and content retrieval, 2) semantic knowledge exploitation paradigms, 3) multimedia personalization, and 4) human-computer affective multimedia interaction. Its 15 chapters cover key topics such as content creation, annotation and modeling for the semantic web, multimedia content understanding, and efficiency and scalability. Fostering a deeper understanding of a popular area of research, the text:
- Describes state-of-the-art schemes and applications
- Supplies authoritative guidance on research and deployment issues
- Presents novel methods and applications in an informative and reproducible way
- Contains numerous examples, illustrations, and tables summarizing results from quantitative studies
- Considers ongoing trends and designates future challenges and research perspectives
- Includes bibliographic links for further exploration
- Uses both SI and US units
Ideal for engineers and scientists specializing in the design of multimedia systems, software applications, and image/video analysis and processing technologies, Semantic Multimedia Analysis and Processing aids researchers, practitioners, and developers in finding innovative solutions to existing problems, opening up new avenues of research in uncharted waters.
The problem of dealing with missing or incomplete data in machine learning and computer vision arises in many applications. Recent strategies make use of generative models to impute missing or corrupted data. Advances in computer vision using deep generative models have found applications in image/video processing, such as denoising, restoration, super-resolution, or inpainting. Inpainting and Denoising Challenges comprises recent efforts dealing with image and video inpainting tasks. This includes winning solutions to the ChaLearn Looking at People inpainting and denoising challenges: human pose recovery, video de-captioning and fingerprint restoration. This volume starts with a broad review of image denoising, retracing and comparing various methods, from the pioneering signal processing methods, to machine learning approaches with sparse and low-rank models, to recent deep learning architectures with autoencoders and variants. The following chapters present results from the Challenge, including three competition tasks at WCCI and ECML 2018. The best approaches submitted by participants are described, showing interesting contributions and innovative methods. The last two chapters propose novel contributions and highlight new applications that benefit from image/video inpainting.
Synthetic aperture radar provides broad-area imaging at high resolutions, which is used in applications such as environmental monitoring, earth-resource mapping, and military systems.
This book analyzes techniques that use the direct and inverse fuzzy transform for image processing and data analysis. The book is divided into two parts, the first of which describes methods and techniques that use the bi-dimensional fuzzy transform method in image analysis. In turn, the second describes approaches that use the multidimensional fuzzy transform method in data analysis. An F-transform in one variable is defined as an operator which transforms a continuous function f on the real interval [a,b] into an n-dimensional vector by using n assigned fuzzy sets A1, ..., An which constitute a fuzzy partition of [a,b]. Then, an inverse F-transform is defined in order to convert the n-dimensional vector output into a continuous function that equals f up to an arbitrarily small quantity. We may limit this concept to the finite case by defining the discrete F-transform of a function f in one variable, even if it is not known a priori. A simple extension of this concept to functions in two variables allows it to be used for the coding/decoding and processing of images. Moreover, an extended version with multidimensional functions can be used to address a host of topics in data analysis, including the analysis of large and very large datasets. Over the past decade, many researchers have proposed applications of fuzzy transform techniques for various image processing topics, such as image coding/decoding, image reduction, image segmentation, image watermarking and image fusion; and for such data analysis problems as regression analysis, classification, association rule extraction, time series analysis, forecasting, and spatial data analysis. The robustness, ease of use, and low computational complexity of fuzzy transforms make them a powerful fuzzy approximation tool suitable for many computer science applications.
This book presents methods and techniques based on the use of fuzzy transforms in various applications of image processing and data analysis, including image segmentation, image tamper detection, forecasting, and classification, highlighting the benefits they offer compared with traditional methods. Emphasis is placed on applications of fuzzy transforms to innovative problems, such as massive data mining, and image and video security in social networks based on the application of advanced fragile watermarking systems. This book is aimed at researchers, students, computer scientists and IT developers who wish to acquire the knowledge and skills necessary to apply and implement fuzzy transform-based techniques in image and data analysis applications.
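The one-variable discrete F-transform described above can be sketched in a few lines. The uniform triangular fuzzy partition and the test function below are illustrative assumptions, not taken from the book:

```python
import numpy as np

def triangular_partition(n, a, b):
    # n triangular fuzzy sets A1, ..., An forming a fuzzy partition of [a, b]:
    # each Ak peaks at node x_k, and at every x the memberships sum to 1.
    nodes = np.linspace(a, b, n)
    h = nodes[1] - nodes[0]
    def A(k, x):
        return np.clip(1.0 - np.abs(x - nodes[k]) / h, 0.0, None)
    return A

def f_transform(f_vals, xs, n, a, b):
    # Direct discrete F-transform: component F_k is the A_k-weighted mean
    # of the sampled function values.
    A = triangular_partition(n, a, b)
    return np.array([np.sum(A(k, xs) * f_vals) / np.sum(A(k, xs))
                     for k in range(n)])

def inverse_f_transform(F, xs, n, a, b):
    # Inverse F-transform: blend the components back with the same fuzzy
    # sets to approximate the original function.
    A = triangular_partition(n, a, b)
    return sum(F[k] * A(k, xs) for k in range(len(F)))

xs = np.linspace(0.0, 1.0, 200)
f_vals = np.sin(2 * np.pi * xs)
F = f_transform(f_vals, xs, 16, 0.0, 1.0)        # 200 samples -> 16 components
f_rec = inverse_f_transform(F, xs, 16, 0.0, 1.0)
```

Refining the partition (increasing n) tightens the approximation, which is the sense in which the inverse F-transform equals f up to an arbitrarily small quantity; the same compression-then-reconstruction idea extended to two variables underlies the image coding/decoding applications mentioned above.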
This book provides a full representation of Inverse Synthetic Aperture Radar (ISAR) imagery, which is a popular and important radar signal processing tool. The book covers all possible aspects of ISAR imaging. The book offers a fair amount of signal processing techniques and radar basics before introducing the inverse problem of ISAR and the forward problem of Synthetic Aperture Radar (SAR). Important concepts of SAR such as resolution, pulse compression and image formation are given together with associated MATLAB codes. After providing the fundamentals for ISAR imaging, the book gives the detailed imaging procedures for ISAR imaging with associated MATLAB functions and codes. To enhance the image quality in ISAR imaging, several imaging tricks and fine-tuning procedures such as zero-padding and windowing are also presented. Finally, various real applications of ISAR imagery, like imaging the antenna-platform scattering, are given in a separate chapter. For all these algorithms, MATLAB codes and figures are included. The final chapter considers advanced concepts and trends in ISAR imaging.
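The zero-padding and windowing tricks mentioned in the blurb above can be sketched briefly. The book's own code is MATLAB; this is an illustrative Python equivalent, with a single-tone echo as an assumed test signal:

```python
import numpy as np

def range_profile(echo, pad_factor=4):
    # Hamming windowing suppresses sidelobes in the compressed profile;
    # zero-padding before the FFT interpolates it onto a finer grid.
    n = len(echo)
    padded = np.zeros(pad_factor * n, dtype=complex)
    padded[:n] = echo * np.hamming(n)
    return np.abs(np.fft.fft(padded))

# A single-tone echo peaks near the FFT bin matching its normalized
# frequency: 0.1 cycles/sample * 64 samples * pad factor 4 = bin 25.6.
n = 64
echo = np.exp(2j * np.pi * 0.1 * np.arange(n))
profile = range_profile(echo)
print(np.argmax(profile))  # near 25.6
```

Without zero-padding the peak location would be quantized to the coarse bin grid, and without the window the sidelobes of strong scatterers could mask weak ones, which is why both steps appear among the fine-tuning procedures for ISAR image quality.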
The scope and importance of colour image science have grown rapidly in recent years. In parallel with the proliferation of consumer imaging products, the capabilities of colour displays, printers and digital cameras continue to increase. New challenges for colour image science are emerging as cross-media image reproduction is applied in Internet and multimedia displays, motion pictures, digital television and augmented-reality systems.
In the last few years, biometric techniques have proven their ability to provide secure access to shared resources in various domains. Furthermore, software agents and multi-agent systems (MAS) have shown their efficiency in resolving critical network problems. Iris Biometric Model for Secured Network Access proposes a new model, the IrisCryptoAgentSystem (ICAS), which is based on a biometric method for authentication using the iris of the eyes and an asymmetric cryptography method using "Rivest-Shamir-Adleman" (RSA) in an agent-based architecture. It focuses on the development of new methods in biometric authentication in order to provide greater efficiency in the ICAS model. It also covers the pretopological aspects in the development of the indexed hierarchy to classify DRVA iris templates. The book introduces biometric systems, cryptography, and multi-agent systems (MAS) and explains how they can be used to solve security problems in complex systems. Examining the growing interest to exploit MAS across a range of fields through the integration of various features of agents, it also explains how the intersection of biometric systems, cryptography, and MAS can apply to iris recognition for secure network access. The book presents the various conventional methods for the localization of external and internal edges of the iris of the eye based on five simulations and details the effectiveness of each. It also improves upon existing methods for the localization of the external and internal edges of the iris and for removing the intrusive effects of the eyelids.
This book is devoted to the issue of image super-resolution: obtaining high-resolution images from single or multiple low-resolution images. Although there are numerous algorithms available for image interpolation and super-resolution, there has been a need for a book that establishes a common thread between the two processes. Filling this need, Image Super-Resolution and Applications presents image interpolation as a building block in the super-resolution reconstruction process. Instead of approaching image interpolation as either a polynomial-based problem or an inverse problem, this book breaks the mold and compares and contrasts the two approaches. It presents two directions for image super-resolution: super-resolution with a priori information and blind super-resolution reconstruction of images. It also devotes chapters to the two complementary steps used to obtain high-resolution images: image registration and image fusion.
- Details techniques for color image interpolation and interpolation for pattern recognition
- Analyzes image interpolation as an inverse problem
- Presents image registration methodologies
- Considers image fusion and its application in image super-resolution
- Includes simulation experiments along with the required MATLAB (R) code
Supplying complete coverage of image super-resolution and its applications, the book illustrates applications for image interpolation and super-resolution in medical and satellite image processing. It uses MATLAB (R) programs to present various techniques, including polynomial image interpolation and adaptive polynomial image interpolation. MATLAB codes for most of the simulation experiments supplied in the book are included in the appendix.
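Polynomial image interpolation, the building block the book starts from, can be sketched with the bilinear case. This is a minimal illustrative version in Python (the book's own programs are MATLAB, and cover adaptive variants as well):

```python
import numpy as np

def bilinear_upscale(img, s):
    # Bilinear interpolation: each output pixel is a weighted average of the
    # four nearest input pixels (a first-degree polynomial in each direction).
    H, W = img.shape
    ys = np.linspace(0, H - 1, H * s)
    xs = np.linspace(0, W - 1, W * s)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)   # clamp at the bottom/right border
    x1 = np.minimum(x0 + 1, W - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    img = img.astype(float)
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# A linear ramp is reproduced exactly by first-degree interpolation.
ramp = np.tile(np.arange(4.0), (4, 1))
out = bilinear_upscale(ramp, 2)
print(out.shape)  # (8, 8)
```

Interpolation alone cannot recover detail beyond the low-resolution sampling grid; that limitation is precisely why the super-resolution methods in the book combine it with registration, fusion, and prior information.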
This book gives a concise and comprehensive overview of non-cooperative target tracking, fusion and control. Focusing on algorithms rather than theories for non-cooperative targets, including air- and space-borne targets, this work explores a number of advanced techniques, including the Gaussian mixture cardinalized probability hypothesis density (CPHD) filter, optimization on manifolds, construction of filter banks and tight frames, structured sparse representation, and others. Containing a variety of illustrative and computational examples, Non-cooperative Target Tracking, Fusion and Control will be useful for students as well as engineers with an interest in information fusion, aerospace applications, radar data processing and remote sensing.
You may like...
- Cardiovascular and Coronary Artery… by Ayman S. El-Baz, Jasjit S. Suri (Paperback, R3,802)
- Next-Generation Applications and… by Filipe Portela, Ricardo Queiros (Hardcover, R6,648)
- Handbook of Pediatric Brain Imaging… by Hao Huang, Timothy Roberts (Paperback, R3,531)
- Cognitive Systems and Signal Processing… by Yudong Zhang, Arun Kumar Sangaiah (Paperback, R2,587)
- Handbook of Visual Communications by Hseuh-Ming Hang, John W. Woods (Hardcover, R1,227)
- Handbook of Medical Image Computing and… by S. Kevin Zhou, Daniel Rueckert, … (Hardcover, R4,574)