This book serves as the first guide to the integrative approach, aimed at our new and young generations. Recent technology advancements in computer vision, IoT sensors, and analytics open the door to highly impactful innovations and applications through the effective and efficient integration of these technologies. Such integration has brought scientists and engineers a new approach - the integrative approach - which offers far more rapid development and more scalable architecting than the traditional hardcore developmental approach. Featuring biomedical and healthcare challenges including COVID-19, we present a collection of carefully selected cases with significant added value resulting from integration, e.g., sensing with AI, analytics with different data sources, and comprehensive monitoring with many different sensors, while sustaining readability.
This book features selected papers presented at the 3rd International Conference on Recent Innovations in Computing (ICRIC 2020), held on 20-21 March 2020 at the Central University of Jammu, India, and organized by the university's Department of Computer Science & Information Technology. It includes the latest research in the areas of software engineering, cloud computing, computer networks and Internet technologies, artificial intelligence, information security, database and distributed computing, and Digital India.
This second edition of G. Winkler's successful book on random field approaches to image analysis, related Markov Chain Monte Carlo methods, and statistical inference with emphasis on Bayesian image analysis concentrates more on general principles and models and less on details of concrete applications. Addressed to students and scientists from mathematics, statistics, physics, engineering, and computer science, it will serve as an introduction to the mathematical aspects rather than a survey. Basically no prior knowledge of mathematics or statistics is required. The second edition is in many parts completely rewritten and improved, and most figures are new. The topics of exact sampling and global optimization of likelihood functions have been added. This second edition comes with a CD-ROM by F. Friedrich, containing a host of (live) illustrations for each chapter. In an interactive environment, readers can perform their own experiments to consolidate the subject.
The idea for this text emerged over several years as the authors participated in research projects related to analysis of data from NASA's RHESSI Small Explorer mission. The data produced over the operational lifetime of this mission inspired many investigations related to a specific science question: the when, where, and how of electron acceleration during solar flares in the stressed magnetic environment of the active Sun. A vital key to unlocking this science problem is the ability to produce high-quality images of hard X-rays produced by bremsstrahlung radiation from electrons accelerated during a solar flare. The only practical way to do this within the technological and budgetary limitations of the RHESSI era was to opt for indirect modalities in which imaging information is encoded as a set of two-dimensional spatial Fourier components. Radio astronomers had employed Fourier imaging for many years. However, unlike in radio astronomy, X-ray images produced by RHESSI had to be constructed from a very limited number of sparsely distributed and very noisy Fourier components. Further, Fourier imaging is hardly intuitive, and extensive validation of the methods was necessary to ensure that they produced images with sufficient accuracy and fidelity for scientific applications. This book summarizes the results of this development of imaging techniques specifically designed for this form of data. It covers a set of published works spanning over two decades, during which various imaging methods were introduced, validated, and applied to observations. Considering also that a new Fourier-based telescope, STIX, is now entering its nominal phase on board the ESA Solar Orbiter, it became more and more apparent to the authors that it would be a good idea to put together a compendium of these imaging methods and their applications. Hence the book you are now reading.
This book gives a start-to-finish overview of the whole Fish4Knowledge project, in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90,000 hours of video from ten camera locations, the project gives a three-year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system comprised a remote recording network, over 100 TB of storage, supercomputer processing, video target detection and tracking, fish species recognition and analysis, a large SQL database to record the results, and an efficient retrieval mechanism. Novel user interface mechanisms were developed to provide easy access for marine ecologists, who wanted to explore the dataset. The book is a useful resource for system builders, as it gives an overview of the many new methods that were created to build the Fish4Knowledge system in a manner that also allows readers to see how all the components fit together.
This book introduces the point cloud, its applications in industry, and the most frequently used datasets. It mainly focuses on three computer vision tasks - point cloud classification, segmentation, and registration - which are fundamental to any point cloud-based system. An overview of traditional point cloud processing methods helps readers build background knowledge quickly, while the chapters on deep learning for point clouds include a comprehensive analysis of the breakthroughs of the past few years. Brand-new explainable machine learning methods for point cloud learning, which are lightweight and easy to train, are then thoroughly introduced, with quantitative and qualitative performance evaluations. A comparison and analysis of the three types of methods is given to help readers reach a deeper understanding. With the rich deep learning literature in 2D vision, a natural inclination for 3D vision researchers is to develop deep learning methods for point cloud processing. Deep learning on point clouds has gained popularity since 2017, and the number of conference papers in this area continues to increase. Unlike 2D images, point clouds do not have a specific order, which makes point cloud processing by deep learning quite challenging. In addition, due to the geometric nature of point clouds, traditional methods are still widely used in industry. Therefore, this book aims to familiarize readers with this area by providing a comprehensive overview of both the traditional methods and the state-of-the-art deep learning methods. A major portion of the book focuses on explainable machine learning as an alternative to deep learning; these explainable methods offer a series of advantages over traditional and deep learning methods, and they are a main highlight and novelty of the book.
By tackling three research tasks - 3D object recognition, segmentation, and registration - using the authors' methodology, readers will get a sense of how to solve problems in a different way and can apply the frameworks to other 3D computer vision tasks, giving them inspiration for their own future research. Numerous experiments, analyses, and comparisons on these three tasks are provided so that readers can learn how to solve difficult computer vision problems.
This book constitutes the refereed post-conference proceedings of the First IFIP TC 5 International Conference on Computer Science Protecting Human Society Against Epidemics, ANTICOVID 2021, held virtually in June 2021. The 7 full and 4 short papers presented were carefully reviewed and selected from 20 submissions. The papers are concerned with a very large spectrum of problems, ranging from linguistics for the automatic translation of medical terms to a proposal for a worldwide system of fast reaction to emerging pandemics.
The material presented in this book originates from the first Eurographics Workshop on Graphics Hardware, held in Lisbon, Portugal, in August 1986. Leading experts in the field present the state of their ongoing graphics hardware projects and give their individual views of future developments. The final versions of the contributions were written in the light of in-depth discussions at the workshop. The book starts a series of EurographicSeminars volumes on the state of the art of graphics hardware. This volume presents a unique collection of material covering the following topics: workstation architectures, traffic simulators, hardware support for geometric modeling, and ray tracing. It will therefore be of interest to all computer graphics professionals wishing to gain a deeper knowledge of graphics hardware.
Offering a practical alternative to the conventional methods used in signal processing applications, this book discloses numerical techniques and explains how to evaluate the frequency-domain attributes of a waveform without resorting to actual transformation through Fourier methods. This book should prove of interest to practitioners in any field who may require the analysis, association, recognition or processing of signals, and undergraduate students of signal processing.
Markov models are extremely useful as a general, widely applicable tool for many areas in statistical pattern recognition. This unique text/reference places the formalism of Markov chain and hidden Markov models at the very center of its examination of current pattern recognition systems, demonstrating how the models can be used in a range of different applications. Thoroughly revised and expanded, this new edition now includes a more detailed treatment of the EM algorithm, a description of an efficient approximate Viterbi-training procedure, a theoretical derivation of the perplexity measure, and coverage of multi-pass decoding based on "n"-best search. Supporting the discussion of the theoretical foundations of Markov modeling, special emphasis is also placed on practical algorithmic solutions. Topics and features: introduces the formal framework for Markov models, describing hidden Markov models and Markov chain models, also known as n-gram models; covers the robust handling of probability quantities, which are omnipresent when dealing with these statistical methods; presents methods for the configuration of hidden Markov models for specific application areas, explaining the estimation of the model parameters; describes important methods for efficient processing of Markov models, and the adaptation of the models to different tasks; examines algorithms for searching within the complex solution spaces that result from the joint application of Markov chain and hidden Markov models; reviews key applications of Markov models in automatic speech recognition, character and handwriting recognition, and the analysis of biological sequences. Researchers, practitioners, and graduate students of pattern recognition will all find this book to be invaluable in aiding their understanding of the application of statistical methods in this area.
This book covers virtually all aspects of image formation in medical imaging, including systems based on ionizing radiation (x-rays, gamma rays) and non-ionizing techniques (ultrasound, optical, thermal, magnetic resonance, and magnetic particle imaging) alike. In addition, it discusses the development and application of computer-aided detection and diagnosis (CAD) systems in medical imaging. It also includes a special track on computer-aided diagnosis of COVID-19 based on CT and X-ray images. Given its coverage, the book provides both a forum and valuable resource for researchers involved in image formation, experimental methods, image performance, segmentation, pattern recognition, feature extraction, classifier design, machine learning / deep learning, radiomics, CAD workstation design, human-computer interaction, databases, and performance evaluation.
Computational geometry as an area of research in its own right emerged in the early seventies of this century. Right from the beginning, it was obvious that strong connections of various kinds exist to questions studied in the considerably older field of combinatorial geometry. For example, the combinatorial structure of a geometric problem usually decides which algorithmic method solves the problem most efficiently. Furthermore, the analysis of an algorithm often requires a great deal of combinatorial knowledge. As it turns out, however, the connection between the two research areas commonly referred to as computational geometry and combinatorial geometry is not as lop-sided as it appears. Indeed, the interest in computational issues in geometry gives a new and constructive direction to the combinatorial study of geometry. It is the intention of this book to demonstrate that computational and combinatorial investigations in geometry are doomed to profit from each other. To reach this goal, I designed this book to consist of three parts, a combinatorial part, a computational part, and one that presents applications of the results of the first two parts. The choice of the topics covered in this book was guided by my attempt to describe the most fundamental algorithms in computational geometry that have an interesting combinatorial structure. In this early stage geometric transforms played an important role as they reveal connections between seemingly unrelated problems and thus help to structure the field.
A long long time ago, echoing philosophical and aesthetic principles that existed since antiquity, William of Ockham enounced the principle of parsimony, better known today as Ockham's razor: "Entities should not be multiplied without necessity." This principle enabled scientists to select the "best" physical laws and theories to explain the workings of the Universe and continued to guide scientific research, leading to beautiful results like the minimal description length approach to statistical inference and the related Kolmogorov complexity approach to pattern recognition. However, notions of complexity and description length are subjective concepts and depend on the language "spoken" when presenting ideas and results. The field of sparse representations, which recently underwent a Big Bang like expansion, explicitly deals with the Yin Yang interplay between the parsimony of descriptions and the "language" or "dictionary" used in them, and it became an extremely exciting area of investigation. It already yielded a rich crop of mathematically pleasing, deep and beautiful results that quickly translated into a wealth of practical engineering applications. You are holding in your hands the first guide book to Sparseland, and I am sure you'll find in it both familiar and new landscapes to see and admire, as well as excellent pointers that will help you find further valuable treasures. Enjoy the journey to Sparseland! Haifa, Israel, December 2009, Alfred M. Bruckstein. This book was originally written to serve as the material for an advanced one-semester (fourteen 2-hour lectures) graduate course for engineering students at the Technion, Israel.
This book covers a large set of methods in the field of Artificial Intelligence - Deep Learning - applied to real-world problems. The fundamentals of the Deep Learning approach and different types of Deep Neural Networks (DNNs) are first summarized, offering a comprehensive preamble for the problem-oriented chapters that follow. The most interesting and open problems of machine learning in the framework of Deep Learning are discussed and solutions are proposed. The book illustrates how to implement zero-shot learning with Deep Neural Network classifiers, which normally require a large amount of training data; the lack of annotated training data naturally pushes researchers toward low-supervision algorithms. Metric learning is a long-standing research topic, but in the framework of Deep Learning approaches it gains freshness and originality. Fine-grained classification with low inter-class variability is a difficult problem for any classification task; the book presents how it is solved by using different modalities and attention mechanisms in 3D convolutional networks. Researchers focused on machine learning, deep learning, multimedia, and computer vision will want to buy this book. Advanced level students studying computer science within these topic areas will also find this book useful.
This volume gathers papers presented at the Workshop on Computational Diffusion MRI (CDMRI 2019), held under the auspices of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), which took place in Shenzhen, China on October 17, 2019. This book presents the latest advances in the rapidly expanding field of diffusion MRI. It shares new perspectives on the latest research challenges for those currently working in the field, but also offers a valuable starting point for anyone interested in learning about computational techniques in diffusion MRI. The book includes rigorous mathematical derivations, a wealth of rich, full-colour visualisations and extensive clinically relevant results. As such, it will be of interest to researchers and practitioners in the fields of computer science, MRI physics and applied mathematics. Readers will find contributions covering a broad range of topics, from the mathematical foundations of the diffusion process and signal generation, to new computational methods and estimation techniques for the in vivo recovery of microstructural and connectivity features, as well as diffusion-relaxometry and frontline applications in research and clinical practice. This edition includes invited works from high-profile researchers with a specific focus on three new and important topics that are gaining momentum within the diffusion MRI community, including diffusion MRI signal acquisition and processing strategies, machine learning for diffusion MRI, and diffusion MRI outside the brain and clinical applications.
Introduces the reader to the technical aspects of real-time visual effects. Built upon a career of over twenty years in the feature film visual effects and the real-time video game industries and tested on graduate and undergraduate students. Explores all real-time visual effects in four categories: in-camera effects, in-material effects, simulations and particles.
This book provides the tools to enhance the precision, automation and intelligence of modern CNC machining systems. Based on a detailed description of the technical foundations of the machining monitoring system, it develops the general idea of design and implementation of smart machining monitoring systems, focusing on the tool condition monitoring (TCM) system. The book is structured in two parts. Part I discusses the fundamentals of machining systems, including modeling of machining processes, mathematical basics of condition monitoring and the framework of TCM from a machine learning perspective. Part II is then focused on the applications of these theories. It explains sensory signal processing and feature extraction, as well as the cyber-physical system of the smart machining system. Its numerous illustrations and diagrams explain the ideas presented in a clear way, making this book a valuable reference for researchers, graduate students and engineers alike.
MPEG-4 is the multimedia standard for combining interactivity, natural and synthetic digital video, audio and computer graphics. Typical applications are: internet, video conferencing, mobile videophones, multimedia cooperative work, teleteaching and games. With MPEG-4 the next step from block-based video (ISO/IEC MPEG-1, MPEG-2, CCITT H.261, ITU-T H.263) to arbitrarily-shaped visual objects is taken. This significant step demands a new methodology for system analysis and design to meet the considerably higher flexibility of MPEG-4. Motion estimation is a central part of the MPEG-1/2/4 and H.261/H.263 video compression standards and has attracted much attention in research and industry, for the following reasons: it is computationally the most demanding algorithm of a video encoder (about 60-80% of the total computation time), it has a high impact on the visual quality of a video encoder, and it is not standardized, thus being open to competition. Algorithms, Complexity Analysis, and VLSI Architectures for MPEG-4 Motion Estimation covers in detail every single step in the design of an MPEG-1/2/4 or H.261/H.263 compliant video encoder: fast motion estimation algorithms; complexity analysis tools; detailed complexity analysis of a software implementation of MPEG-4 video; complexity and visual quality analysis of fast motion estimation algorithms within MPEG-4; the design space of motion estimation VLSI architectures; and detailed VLSI design examples of (1) a high-throughput and (2) a low-power MPEG-4 motion estimator. The book is an important introduction to numerous algorithmic, architectural and system design aspects of the multimedia standard MPEG-4. As such, all researchers, students and practitioners working in image processing, video coding or system and VLSI design will find this book of interest.
Adobe Photoshop Elements Advanced Editing Techniques and Tricks: The Essential Guide to Going Beyond Guided Edits is a must for those who want to go beyond automated features and Guided Edits and delve into the many advanced techniques that are possible using Adobe Photoshop Elements. Readers will learn how to: perfect portrait edits with techniques such as skin tone correction, frequency separation, skin smoothing, and enhancing facial features; properly edit and enhance a subject's eyes, lips, eyebrows, and facial lighting; apply advanced photo compositing techniques, utilizing rules for controlling perspective; use color grading techniques similar to those used by professional motion picture film editors; and delve into advanced tools not included in Photoshop Elements, such as curves, color range, selective color, working with color LUTs, and more. With detailed step-by-step instructions, this book is targeted to intermediate and advanced users who want to take their photography to the next level. Additional tips using Photoshop Elements can be found on Ted's YouTube channel at www.YouTube/tedpadova.
This book covers virtually all aspects of image formation in medical imaging, including systems based on ionizing radiation (x-rays, gamma rays) and non-ionizing techniques (ultrasound, optical, thermal, magnetic resonance, and magnetic particle imaging) alike. In addition, it discusses the development and application of computer-aided detection and diagnosis (CAD) systems in medical imaging. Given its coverage, the book provides both a forum and valuable resource for researchers involved in image formation, experimental methods, image performance, segmentation, pattern recognition, feature extraction, classifier design, machine learning / deep learning, radiomics, CAD workstation design, human-computer interaction, databases, and performance evaluation.
This book is a collection of selected papers presented at the First Congress on Intelligent Systems (CIS 2020), held in New Delhi, India, during September 5-6, 2020. It includes novel and innovative work from experts, practitioners, scientists, and decision-makers from academia and industry, covering selected papers in the area of computer vision. The book covers new tools and technologies in some of the important areas of medical science, such as histopathological image analysis, cancer taxonomy, and the use of deep learning architectures in dental care. Furthermore, it reviews and discusses the use of intelligent learning-based algorithms for increasing productivity in the agricultural domain.
This is the third edition of the first ever book to explore the exciting field of augmented reality art and its enabling technologies. The new edition has been thoroughly revised and updated, with 9 new chapters included. As well as investigating augmented reality as a novel artistic medium, the book covers cultural, social, spatial and cognitive facets of augmented reality art. It has been written by a virtual team of 33 researchers and artists from 11 countries who are pioneering in the new form of art, and contains numerous colour illustrations showing both classic and recent augmented reality artworks. Intended as a starting point for exploring this new fascinating area of research and creative practice, it will be essential reading not only for artists, researchers and technology developers, but also for students (graduates and undergraduates) and all those interested in emerging augmented reality technology and its current and future applications in art.
The goal of this volume is to summarize the state of the art in the utilization of computer vision techniques in the diagnosis of skin cancer. Malignant melanoma is one of the most rapidly increasing cancers in the world. Early diagnosis is particularly important since melanoma can be cured with a simple excision if detected early. In recent years, dermoscopy has proved valuable in visualizing the morphological structures in pigmented lesions. However, it has also been shown that dermoscopy is difficult to learn and subjective. Newer technologies such as infrared imaging, multispectral imaging, and confocal microscopy have recently come to the forefront in providing greater diagnostic accuracy. The imaging technologies presented in this book can serve as an adjunct to physicians and provide automated skin cancer screening. Although computerized techniques cannot as yet provide a definitive diagnosis, they can be used to improve biopsy decision-making as well as early melanoma detection, especially for patients with multiple atypical nevi.
This fully revised and updated third edition offers students and artists valuable insights into traditional color theory and its practical application using today's cutting-edge technology. The text is lavishly illustrated, stressing issues of contemporary color use and examining how today's artists and designers are using color in a multitude of mediums in their work. It is the only book that has parity between the male and female artists and designers represented, while containing more multicultural and global examples of art and design than any other text. The book begins with how we see color and its biological basis, progressing to the various theories about color and delving into the psychological meaning of color and its use. There are individual chapters on color use in art and design, as well as global and multicultural color use. One chapter investigates cross-cultural life events such as marriages and funerals, while examining the six major religions' conceptual and psychological underpinnings of color use. The final chapter explores the future of color. Contemporary Color is the ideal text for color theory courses, but also for beginning art and design students, no matter what their future major discipline or emphasis may be. It provides the foundation on which to build their career and develop their own personal artistic voice and vision.