This book is dedicated to engineers and researchers who would like to deepen their knowledge of mobile mapping systems. The flow of the derived information is therefore divided into subproblems corresponding to particular mobile mapping data and the related observation equations. The proposed methodology does not cover every SLAM aspect evident in the literature; rather, it is grounded in experience with pragmatic, realistic applications. It can thus serve as supporting material for those who are already familiar with SLAM and would like a broader overview of the subject. The novelty is a complete, interdisciplinary methodology for large-scale mobile mapping applications. The contribution is a set of programming examples available as complementary material for this book. All observation equations are implemented, and a programming example is provided for each. The examples are simple C++ implementations that can be extended by students or engineers, so prior coding experience is not mandatory. Moreover, since the implementation requires few external programming libraries, it can easily be integrated with any mobile mapping framework. Finally, the purpose of this book is to collect all the observation equations and solvers necessary to build a computational system capable of providing large-scale maps.
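The book's own examples are in C++; purely as an illustration of what "observation equations plus a solver" amount to, here is a minimal Python sketch (not taken from the book) that solves a toy 1D pose-graph problem. Two odometry constraints and one loop-closure constraint are stacked as linear observation equations and reconciled by least squares; all measurement values are invented for the example.

```python
import numpy as np

# Toy 1D pose graph: poses x0, x1, x2, with x0 anchored at 0.
# Hypothetical observation equations:
#   x1 - x0 = 1.0   (odometry)
#   x2 - x1 = 1.2   (odometry)
#   x2 - x0 = 2.1   (loop closure)
# With x0 fixed to 0, stack them as J @ [x1, x2] = z.
J = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, 1.0]])
z = np.array([1.0, 1.2, 2.1])

# Least squares reconciles the slightly inconsistent measurements.
x, *_ = np.linalg.lstsq(J, z, rcond=None)
print(x)  # x1 ~ 0.967, x2 ~ 2.133
```

Real mobile mapping systems solve the same kind of stacked system, only with millions of (usually nonlinear) observation equations linearized at each iteration.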
This book presents selected high-quality research papers from the International Conference on Intelligent Manufacturing and Energy Sustainability (ICIMES 2021), held at the Department of Mechanical Engineering, Malla Reddy College of Engineering & Technology (MRCET), Maisammaguda, Hyderabad, India, during June 18-19, 2021. It covers topics in automation, manufacturing technology and energy sustainability, and also includes original work on intelligent systems, manufacturing, mechanical, electrical, aeronautical, materials, automobile, bioenergy and energy sustainability.
Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, applying the majority of these techniques is complex and requires a huge computational effort to yield useful, practical results. Dedicated hardware for evolutionary, neural and fuzzy computation is therefore a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product, and to introduce successful applications of soft computing techniques to the many hard problems encountered in the design of embedded hardware. Reconfigurable embedded designs for GAs, ANNs, FCs and PSO are presented and evaluated. Quantum-based evolutionary computation, multi-objective evolutionary computation and ACO are also applied to hard problems related to circuit synthesis, IP assignment, and the mapping and routing of applications on Network-on-Chip infrastructures.
Adaptive filtering is a classical branch of digital signal processing (DSP). Industrial interest in adaptive filtering grows continuously as increasing computer performance allows ever more complex algorithms to be run in real time. Change detection is a type of adaptive filtering for non-stationary signals and is also the basic tool in fault detection and diagnosis. Often treated as separate subjects, adaptive filtering and change detection receive a unified treatment in Adaptive Filtering and Change Detection, which bridges a gap in the literature by emphasizing that change detection is a natural extension of adaptive filters, and adaptive filters are the basic building blocks of all change detectors.
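As a hint of how the two subjects connect, here is a minimal one-sided CUSUM change detector in Python. This is an illustrative sketch, not code from the book; the drift and threshold values are arbitrary choices for the example.

```python
def cusum(signal, drift=0.5, threshold=2.0):
    """One-sided CUSUM test: accumulate evidence that the signal mean has
    increased, and raise an alarm once the statistic exceeds the threshold.
    Returns the index of the first alarm, or None if no change is detected."""
    g = 0.0
    for k, x in enumerate(signal):
        g = max(0.0, g + x - drift)  # the drift term keeps noise from accumulating
        if g > threshold:
            return k
    return None

# Noise-free toy signal: the mean jumps from 0 to 1 at sample 50.
signal = [0.0] * 50 + [1.0] * 20
print(cusum(signal))  # alarm a few samples after the change, at index 54
```

The detector is itself a tiny adaptive filter: its state g tracks deviations from the assumed nominal level, which is exactly the "change detection as an extension of adaptive filtering" viewpoint the book develops.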
The sixth edition has been revised and extended. The whole textbook is now clearly partitioned into basic and advanced material in order to cope with the ever-increasing field of digital image processing. In this way, you can first work your way through the basic principles of digital image processing without being overwhelmed by the wealth of material, and then extend your studies to selected topics of interest. Each chapter now includes exercises that help you test your understanding, train your skills, and introduce you to real-world image processing tasks. An important part of the exercises is a wealth of interactive computer exercises covering all topics of this textbook. These exercises are performed with the image processing software heurisko, which is included on the accompanying CD-ROM. In this way you can gain your own practical experience with almost all topics and algorithms covered by this book. The complete hyperlinked text of the book is now also available on the accompanying CD-ROM.
This book serves as a first guide to the integrative approach, well suited to new and young generations of researchers. Recent technology advancements in computer vision, IoT sensors, and analytics open the door to highly impactful innovations and applications when these technologies are integrated effectively and efficiently. Such integration has brought scientists and engineers a new approach, the integrative approach, which offers far more rapid development and scalable architecting compared with the traditional hardcore developmental approach. Featuring biomedical and healthcare challenges including COVID-19, we present a collection of carefully selected cases with significant added value resulting from integration, e.g., sensing with AI, analytics over different data sources, and comprehensive monitoring with many different sensors, while sustaining readability.
This book features selected papers presented at the 3rd International Conference on Recent Innovations in Computing (ICRIC 2020), held on 20-21 March 2020 at the Central University of Jammu, India, and organized by the university's Department of Computer Science & Information Technology. It includes the latest research in the areas of software engineering, cloud computing, computer networks and Internet technologies, artificial intelligence, information security, database and distributed computing, and digital India.
This second edition of G. Winkler's successful book on random field approaches to image analysis, related Markov Chain Monte Carlo methods, and statistical inference with emphasis on Bayesian image analysis concentrates more on general principles and models and less on details of concrete applications. Addressed to students and scientists from mathematics, statistics, physics, engineering, and computer science, it will serve as an introduction to the mathematical aspects rather than a survey. Basically no prior knowledge of mathematics or statistics is required. The second edition is in many parts completely rewritten and improved, and most figures are new. The topics of exact sampling and global optimization of likelihood functions have been added. This second edition comes with a CD-ROM by F. Friedrich, containing a host of (live) illustrations for each chapter. In an interactive environment, readers can perform their own experiments to consolidate the subject.
The idea for this text emerged over several years as the authors participated in research projects related to the analysis of data from NASA's RHESSI Small Explorer mission. The data produced over the operational lifetime of this mission inspired many investigations related to a specific science question: the when, where, and how of electron acceleration during solar flares in the stressed magnetic environment of the active Sun. A vital key to unlocking this science problem is the ability to produce high-quality images of the hard X-rays produced by bremsstrahlung radiation from electrons accelerated during a solar flare. The only practical way to do this within the technological and budgetary limitations of the RHESSI era was to opt for indirect modalities in which imaging information is encoded as a set of two-dimensional spatial Fourier components. Radio astronomers had employed Fourier imaging for many years. However, unlike in radio astronomy, the X-ray images produced by RHESSI had to be constructed from a very limited number of sparsely distributed and very noisy Fourier components. Further, Fourier imaging is hardly intuitive, and extensive validation of the methods was necessary to ensure that they produced images with sufficient accuracy and fidelity for scientific applications. This book summarizes the results of this development of imaging techniques specifically designed for this form of data. It covers a set of published works spanning over two decades, during which various imaging methods were introduced, validated, and applied to observations. Considering also that a new Fourier-based telescope, STIX, is now entering its nominal phase on board the ESA Solar Orbiter, it became increasingly apparent to the authors that it would be a good idea to put together a compendium of these imaging methods and their applications. Hence the book you are now reading.
This book constitutes the refereed post-conference proceedings of the First IFIP TC 5 International Conference on Computer Science Protecting Human Society Against Epidemics, ANTICOVID 2021, held virtually in June 2021. The 7 full and 4 short papers presented were carefully reviewed and selected from 20 submissions. The papers address a very wide spectrum of problems, ranging from linguistics for the automatic translation of medical terms to a proposal for a worldwide system of fast reaction to emerging pandemics.
The material presented in this book originates from the first Eurographics Workshop on Graphics Hardware, held in Lisbon, Portugal, in August 1986. Leading experts in the field present the state of their ongoing graphics hardware projects and give their individual views of future developments. The final versions of the contributions were written in the light of in-depth discussions at the workshop. The book starts a series of EurographicSeminars volumes on the state of the art of graphics hardware. This volume presents a unique collection of material covering the following topics: workstation architectures, traffic simulators, hardware support for geometric modeling, and ray tracing. It will therefore be of interest to all computer graphics professionals wishing to gain a deeper knowledge of graphics hardware.
Offering a practical alternative to the conventional methods used in signal processing applications, this book discloses numerical techniques and explains how to evaluate the frequency-domain attributes of a waveform without resorting to actual transformation through Fourier methods. This book should prove of interest to practitioners in any field who may require the analysis, association, recognition or processing of signals, and undergraduate students of signal processing.
Markov models are extremely useful as a general, widely applicable tool for many areas in statistical pattern recognition. This unique text/reference places the formalism of Markov chain and hidden Markov models at the very center of its examination of current pattern recognition systems, demonstrating how the models can be used in a range of different applications. Thoroughly revised and expanded, this new edition now includes a more detailed treatment of the EM algorithm, a description of an efficient approximate Viterbi-training procedure, a theoretical derivation of the perplexity measure, and coverage of multi-pass decoding based on "n"-best search. Supporting the discussion of the theoretical foundations of Markov modeling, special emphasis is also placed on practical algorithmic solutions. Topics and features: introduces the formal framework for Markov models, describing hidden Markov models and Markov chain models, also known as n-gram models; covers the robust handling of probability quantities, which are omnipresent when dealing with these statistical methods; presents methods for the configuration of hidden Markov models for specific application areas, explaining the estimation of the model parameters; describes important methods for efficient processing of Markov models, and the adaptation of the models to different tasks; examines algorithms for searching within the complex solution spaces that result from the joint application of Markov chain and hidden Markov models; reviews key applications of Markov models in automatic speech recognition, character and handwriting recognition, and the analysis of biological sequences. Researchers, practitioners, and graduate students of pattern recognition will all find this book to be invaluable in aiding their understanding of the application of statistical methods in this area.
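For a flavor of the kind of algorithm such a book covers, here is a compact Viterbi decoder in Python for the classic Healthy/Fever toy HMM. The states, observations and probabilities are a standard textbook illustration, not an example taken from this book.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for the observations."""
    # V[t][s] = (probability of the best path ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return path[::-1]

states = ("Healthy", "Fever")
start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
           "Fever": {"Healthy": 0.4, "Fever": 0.6}}
emit_p = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
          "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}
print(viterbi(["normal", "cold", "dizzy"], states, start_p, trans_p, emit_p))
# -> ['Healthy', 'Healthy', 'Fever']
```

Production systems work in log-probabilities to avoid underflow on long sequences, which is exactly the "robust handling of probability quantities" the blurb mentions.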
This book covers virtually all aspects of image formation in medical imaging, including systems based on ionizing radiation (x-rays, gamma rays) and non-ionizing techniques (ultrasound, optical, thermal, magnetic resonance, and magnetic particle imaging) alike. In addition, it discusses the development and application of computer-aided detection and diagnosis (CAD) systems in medical imaging. A special track is also devoted to computer-aided diagnosis of COVID-19 from CT and X-ray images. Given its coverage, the book provides both a forum and a valuable resource for researchers involved in image formation, experimental methods, image performance, segmentation, pattern recognition, feature extraction, classifier design, machine learning / deep learning, radiomics, CAD workstation design, human-computer interaction, databases, and performance evaluation.
Computational geometry as an area of research in its own right emerged in the early seventies of this century. Right from the beginning, it was obvious that strong connections of various kinds exist to questions studied in the considerably older field of combinatorial geometry. For example, the combinatorial structure of a geometric problem usually decides which algorithmic method solves the problem most efficiently. Furthermore, the analysis of an algorithm often requires a great deal of combinatorial knowledge. As it turns out, however, the connection between the two research areas commonly referred to as computational geometry and combinatorial geometry is not as lop-sided as it appears. Indeed, the interest in computational issues in geometry gives a new and constructive direction to the combinatorial study of geometry. It is the intention of this book to demonstrate that computational and combinatorial investigations in geometry are bound to profit from each other. To reach this goal, I designed this book to consist of three parts: a combinatorial part, a computational part, and one that presents applications of the results of the first two parts. The choice of the topics covered in this book was guided by my attempt to describe the most fundamental algorithms in computational geometry that have an interesting combinatorial structure. In this early stage geometric transforms played an important role as they reveal connections between seemingly unrelated problems and thus help to structure the field.
A long, long time ago, echoing philosophical and aesthetic principles that had existed since antiquity, William of Ockham enounced the principle of parsimony, better known today as Ockham's razor: "Entities should not be multiplied without necessity." This principle enabled scientists to select the "best" physical laws and theories to explain the workings of the Universe, and continued to guide scientific research, leading to beautiful results like the minimal description length approach to statistical inference and the related Kolmogorov complexity approach to pattern recognition. However, notions of complexity and description length are subjective concepts and depend on the language "spoken" when presenting ideas and results. The field of sparse representations, which recently underwent a Big Bang-like expansion, explicitly deals with the Yin-Yang interplay between the parsimony of descriptions and the "language" or "dictionary" used in them, and it has become an extremely exciting area of investigation. It has already yielded a rich crop of mathematically pleasing, deep and beautiful results that quickly translated into a wealth of practical engineering applications. You are holding in your hands the first guidebook to Sparseland, and I am sure you'll find in it both familiar and new landscapes to see and admire, as well as excellent pointers that will help you find further valuable treasures. Enjoy the journey to Sparseland! Haifa, Israel, December 2009. Alfred M. Bruckstein. Preface: This book was originally written to serve as the material for an advanced one-semester (fourteen 2-hour lectures) graduate course for engineering students at the Technion, Israel.
This book covers a large set of methods in the field of Artificial Intelligence, namely Deep Learning, applied to real-world problems. The fundamentals of the Deep Learning approach and different types of Deep Neural Networks (DNNs) are first summarized, offering a comprehensive preamble to the problem-oriented chapters that follow. The most interesting open problems of machine learning in the Deep Learning framework are discussed and solutions are proposed. The book illustrates how zero-shot learning can be implemented for Deep Neural Network classifiers, which otherwise require large amounts of training data; the lack of annotated training data naturally pushes researchers towards low-supervision algorithms. Metric learning is a long-standing research topic, but within Deep Learning approaches it gains freshness and originality. Fine-grained classification with low inter-class variability is a difficult problem for any classification task; this book shows how it can be solved using different modalities and attention mechanisms in 3D convolutional networks. Researchers working on Machine Learning, Deep Learning, Multimedia and Computer Vision will want to buy this book. Advanced-level students studying computer science within these topic areas will also find it useful.
This volume gathers papers presented at the Workshop on Computational Diffusion MRI (CDMRI 2019), held under the auspices of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), which took place in Shenzhen, China on October 17, 2019. This book presents the latest advances in the rapidly expanding field of diffusion MRI. It shares new perspectives on the latest research challenges for those currently working in the field, but also offers a valuable starting point for anyone interested in learning about computational techniques in diffusion MRI. The book includes rigorous mathematical derivations, a wealth of rich, full-colour visualisations and extensive clinically relevant results. As such, it will be of interest to researchers and practitioners in the fields of computer science, MRI physics and applied mathematics. Readers will find contributions covering a broad range of topics, from the mathematical foundations of the diffusion process and signal generation, to new computational methods and estimation techniques for the in vivo recovery of microstructural and connectivity features, as well as diffusion-relaxometry and frontline applications in research and clinical practice. This edition includes invited works from high-profile researchers with a specific focus on three new and important topics that are gaining momentum within the diffusion MRI community, including diffusion MRI signal acquisition and processing strategies, machine learning for diffusion MRI, and diffusion MRI outside the brain and clinical applications.
This book provides the tools to enhance the precision, automation and intelligence of modern CNC machining systems. Based on a detailed description of the technical foundations of the machining monitoring system, it develops the general idea of the design and implementation of smart machining monitoring systems, focusing on the tool condition monitoring (TCM) system. The book is structured in two parts. Part I discusses the fundamentals of machining systems, including modeling of machining processes, the mathematical basics of condition monitoring and the framework of TCM from a machine learning perspective. Part II is then focused on the applications of these theories. It explains sensory signal processing and feature extraction, as well as the cyber-physical system of the smart machining system. Numerous illustrations and diagrams explain the ideas clearly, making this book a valuable reference for researchers, graduate students and engineers alike.
MPEG-4 is the multimedia standard for combining interactivity, natural and synthetic digital video, audio and computer graphics. Typical applications are: internet, video conferencing, mobile videophones, multimedia cooperative work, teleteaching and games. With MPEG-4, the next step is taken from block-based video (ISO/IEC MPEG-1, MPEG-2, CCITT H.261, ITU-T H.263) to arbitrarily shaped visual objects. This significant step demands a new methodology for system analysis and design to meet the considerably higher flexibility of MPEG-4. Motion estimation is a central part of the MPEG-1/2/4 and H.261/H.263 video compression standards and has attracted much attention in research and industry, for the following reasons: it is computationally the most demanding algorithm of a video encoder (about 60-80% of the total computation time), it has a high impact on the visual quality of a video encoder, and it is not standardized, thus being open to competition. Algorithms, Complexity Analysis, and VLSI Architectures for MPEG-4 Motion Estimation covers in detail every single step in the design of an MPEG-1/2/4 or H.261/H.263 compliant video encoder: fast motion estimation algorithms; complexity analysis tools; a detailed complexity analysis of a software implementation of MPEG-4 video; complexity and visual quality analysis of fast motion estimation algorithms within MPEG-4; the design space of motion estimation VLSI architectures; and detailed VLSI design examples of (1) a high-throughput and (2) a low-power MPEG-4 motion estimator. The book is an important introduction to numerous algorithmic, architectural and system design aspects of the multimedia standard MPEG-4. As such, all researchers, students and practitioners working in image processing, video coding or system and VLSI design will find this book of interest.
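To make the central operation concrete, here is a minimal full-search block-matching sketch in Python using the sum of absolute differences (SAD) criterion. The frame contents and search range are invented for illustration; real encoders rely on the fast algorithms and dedicated VLSI architectures the book analyzes precisely because this exhaustive search dominates encoding time.

```python
def block_match(cur_block, ref, top, left, search=2):
    """Full-search block matching: find the motion vector (dy, dx) within
    +/-search pixels of (top, left) that minimizes the SAD against ref."""
    bh, bw = len(cur_block), len(cur_block[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > len(ref) or x + bw > len(ref[0]):
                continue  # candidate block falls outside the reference frame
            sad = sum(abs(cur_block[i][j] - ref[y + i][x + j])
                      for i in range(bh) for j in range(bw))
            if best is None or sad < best[0]:
                best = (sad, (dy, dx))
    return best[1]

# Synthetic example: the current block is the reference region at (3, 5),
# while its nominal position in the current frame is (2, 3) -> MV = (1, 2).
ref = [[y * 16 + x for x in range(16)] for y in range(16)]
cur_block = [row[5:9] for row in ref[3:7]]
mv = block_match(cur_block, ref, top=2, left=3)
print(mv)  # -> (1, 2)
```

Even this tiny example performs bh*bw comparisons for each of (2*search+1)^2 candidates, which is why motion estimation accounts for the bulk of encoder computation.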
This book is a collection of selected papers presented at the First Congress on Intelligent Systems (CIS 2020), held in New Delhi, India, during September 5-6, 2020. It includes novel and innovative work from experts, practitioners, scientists, and decision-makers from academia and industry, with selected papers in the area of computer vision. The book covers new tools and technologies in important areas of medical science such as histopathological image analysis, cancer taxonomy, and the use of deep learning architectures in dental care. Furthermore, it reviews and discusses the use of intelligent learning-based algorithms for increasing productivity in the agricultural domain.
This book introduces the point cloud, its applications in industry, and the most frequently used datasets. It mainly focuses on three computer vision tasks -- point cloud classification, segmentation, and registration -- which are fundamental to any point cloud-based system. An overview of traditional point cloud processing methods helps readers build background knowledge quickly, while the chapters on deep learning for point clouds include a comprehensive analysis of the breakthroughs of the past few years. Brand-new explainable machine learning methods for point cloud learning, which are lightweight and easy to train, are then thoroughly introduced, together with quantitative and qualitative performance evaluations. A comparison and analysis of the three types of methods helps readers gain a deeper understanding. With the rich deep learning literature in 2D vision, a natural inclination for 3D vision researchers is to develop deep learning methods for point cloud processing. Deep learning on point clouds has gained popularity since 2017, and the number of conference papers in this area continues to increase. Unlike 2D images, point clouds do not have a specific order, which makes point cloud processing by deep learning quite challenging. In addition, due to the geometric nature of point clouds, traditional methods are still widely used in industry. This book therefore aims to familiarize readers with the area by providing a comprehensive overview of both the traditional methods and the state-of-the-art deep learning methods. A major portion of the book focuses on explainable machine learning as a different approach to deep learning; these explainable methods offer a series of advantages over both traditional and deep learning methods, which is a main highlight and novelty of the book.
By tackling three research tasks -- 3D object recognition, segmentation, and registration -- using this methodology, readers will get a sense of how to solve problems in a different way and can apply the frameworks to other 3D computer vision tasks, giving them inspiration for their own future research. Numerous experiments, analyses and comparisons on the three tasks are provided so that readers can learn how to solve difficult computer vision problems.
This is the third edition of the first ever book to explore the exciting field of augmented reality art and its enabling technologies. The new edition has been thoroughly revised and updated, with 9 new chapters included. As well as investigating augmented reality as a novel artistic medium, the book covers cultural, social, spatial and cognitive facets of augmented reality art. It has been written by a virtual team of 33 researchers and artists from 11 countries who are pioneering in the new form of art, and contains numerous colour illustrations showing both classic and recent augmented reality artworks. Intended as a starting point for exploring this new fascinating area of research and creative practice, it will be essential reading not only for artists, researchers and technology developers, but also for students (graduates and undergraduates) and all those interested in emerging augmented reality technology and its current and future applications in art.
The goal of this volume is to summarize the state of the art in the utilization of computer vision techniques in the diagnosis of skin cancer. Malignant melanoma is one of the most rapidly increasing cancers in the world. Early diagnosis is particularly important since melanoma can be cured with a simple excision if detected early. In recent years, dermoscopy has proved valuable in visualizing the morphological structures in pigmented lesions. However, it has also been shown that dermoscopy is difficult to learn and subjective. Newer technologies such as infrared imaging, multispectral imaging, and confocal microscopy have recently come to the forefront in providing greater diagnostic accuracy. The imaging technologies presented in this book can serve as an adjunct for physicians and provide automated skin cancer screening. Although computerized techniques cannot yet provide a definitive diagnosis, they can be used to improve biopsy decision-making as well as early melanoma detection, especially for patients with multiple atypical nevi.