Welcome to Loot.co.za!
Image processing algorithms based on the mammalian visual cortex are powerful tools for extracting information from and manipulating images. This book reviews the neural theory and translates it into digital models. Applications are given in the areas of image recognition, foveation, image fusion and information extraction. The third edition reflects renewed international interest in pulse image processing, with updated sections presenting several newly developed applications. This edition also introduces a suite of Python scripts that assist readers in replicating the results presented in the text and in developing their own applications.
This book presents an introduction to new and important research in the image processing and analysis area. It is hoped that it will be useful to scientists and students involved in many aspects of image analysis. The book does not attempt to cover all aspects of computer vision, but the chapters do present some state-of-the-art examples.
Pattern recognition basically deals with the recognition of patterns, shapes, objects and things in images. Document image analysis was one of the very first applications of pattern recognition, and even of computing. But until the 1980s, research in this field mainly dealt with text-based documents, including OCR (Optical Character Recognition) and page layout analysis. Only a few people were looking at more specific documents such as music sheets, bank cheques or forms. The community of graphics recognition became visible in the late 1980s. Its specific interest was to recognize high-level objects represented by line drawings and graphics. The specific pattern recognition problems it had to deal with were raster-to-graphics conversion (i.e., recognizing graphical primitives in a cluttered pixel image), text-graphics separation, and symbol recognition. The specific problem of symbol recognition in graphical documents has received a lot of attention. The symbols to be recognized can be musical notation, electrical symbols, architectural objects, pictograms in maps, etc. At first glance, the symbol recognition problem seems very similar to that of character recognition; after all, characters are basically a subset of symbols. Therefore, the extensive know-how in OCR has been widely used in graphical symbol recognition: starting with segmenting the document to extract the symbols, extracting features from the symbols, and then recognizing them through classification or matching, with respect to a training/learning set.
The problem of robotic and virtual interaction with physical objects has been the subject of research for many years in both the robotic manipulation and haptics communities. Both communities have focused much attention on human touch-based perception and manipulation, modelling contact between real or virtual hands and objects, and mechanism design. However, as a whole, these problems have not yet been addressed from a unified perspective. This edited book is the outcome of a well-attended workshop which brought together leading scholars from various branches of the robotics, virtual-reality, and human studies communities during the 2004 IEEE International Conference on Robotics and Automation. It covers some of the most challenging problems at the forefront of today's research on physical interaction with real and virtual objects, with special emphasis on modelling contacts between objects, grasp planning algorithms, haptic perception, and advanced design of hands, devices and interfaces.
The sampling lattice used to digitize continuous image data is a significant determinant of the quality of the resulting digital image, and therefore of the efficacy of its processing. The nature of sampling lattices is intimately tied to the tessellations of the underlying continuous image plane. To allow uniform sampling of images of arbitrary size, the lattice needs to correspond to a regular, spatially repeatable, tessellation. Although drawings and paintings from many ancient civilisations made ample use of regular triangular, square and hexagonal tessellations, and Euler later proved that these three are indeed the only regular planar tessellations possible, only sampling along the square lattice has found use in forming digital images. The reasons for this are varied, including extensibility to higher dimensions, but the literature on the ramifications of this commitment to the square lattice for the dominant case of planar data is relatively limited. There seems to be neither a book nor a survey paper on the subject of alternatives. This book on hexagonal image processing is therefore quite appropriate. Lee Middleton and Jayanthi Sivaswamy motivate well the need for a concerted study of the hexagonal lattice and image processing in terms of their known uses in biological systems, as well as the computational and other theoretical and practical advantages that accrue from this approach. They present the state of the art of hexagonal image processing and a comparative study of processing images sampled using hexagonal and square grids.
Twenty-one years ago, it was a joint idea with Hans Rottenkolber to organize a workshop dedicated to the discussion of the latest results in the automatic processing of fringe patterns. This idea was prompted by the insight that automatic, high-precision phase measurement techniques would play a key role in all future industrial and scientific applications of optical metrology. A couple of months later, more than 50 specialists from East and West met in East Berlin, the capital of the former GDR, to spend three days discussing new principles of fringe processing. In this stimulating atmosphere the idea was born to repeat the workshop and to organize the meeting on an Olympic schedule. Thus, meanwhile, 20 years have passed and we have today Fringe number six. However, such a workshop takes place in a dynamic environment, so the main topics of the previous events were always adapted to the most interesting subjects of the new period. In 1993 the workshop took place in Bremen and was dedicated to new principles of optical shape measurement, setup calibration, phase unwrapping and nondestructive testing, while in 1997 new approaches in multi-sensor metrology, active measurement strategies and hybrid processing technologies played a central role. The 2001 meeting, the first of the 21st century, focused on optical methods for micromeasurements, hybrid measurement technologies and new sensor solutions for industrial inspection.
This, the 26th issue of the Transactions on Computational Science journal, comprises ten extended versions of selected papers from the International Conference on Cyberworlds 2014, held in Santander, Spain, in June 2014. The topics covered include virtual reality, games, social networks, haptic modeling, cybersecurity, and applications in education and the arts.
This volume consists of a number of selected papers that were presented at the 9th International Conference on Knowledge, Information and Creativity Support Systems (KICSS 2014) in Limassol, Cyprus, after they were substantially revised and extended. The 26 regular papers and 19 short papers included in these proceedings cover all aspects of knowledge management, knowledge engineering, intelligent information systems, and creativity in an information technology context, including computational creativity and its cognitive and collaborative aspects.
The two-volume set CCIS 483 and CCIS 484 constitutes the refereed proceedings of the 6th Chinese Conference on Pattern Recognition, CCPR 2014, held in Changsha, China, in November 2014. The 112 revised full papers presented in two volumes were carefully reviewed and selected from 225 submissions. The papers are organized in topical sections on fundamentals of pattern recognition; feature extraction and classification; computer vision; image processing and analysis; video processing and analysis; biometric and action recognition; biomedical image analysis; document and speech analysis; pattern recognition applications.
This text reviews the evolution of the field of visualization, providing innovative examples from various disciplines, highlighting the important role that visualization plays in extracting and organizing the concepts found in complex data. Features: presents a thorough introduction to the discipline of knowledge visualization, its current state of affairs and possible future developments; examines how tables have been used for information visualization in historical textual documents; discusses the application of visualization techniques for knowledge transfer in business relationships, and for the linguistic exploration and analysis of sensory descriptions; investigates the use of visualization to understand orchestral music scores, the optical theory behind Renaissance art, and to assist in the reconstruction of an historic church; describes immersive 360 degree stereographic visualization, knowledge-embedded embodied interaction, and a novel methodology for the analysis of architectural forms.
3D Surface Reconstruction: Multi-Scale Hierarchical Approaches presents methods to model 3D objects incrementally so as to capture finer details at each step. The configuration of the model parameters, the rationale and the solutions are described and discussed in detail, so that the reader gains a strong understanding of the methodology. Modeling starts from data captured by 3D digitizers, which makes the process clear and engaging. Innovative approaches based on two popular machine learning paradigms, namely Radial Basis Functions and Support Vector Machines, are also introduced. These paradigms are innovatively extended to a multi-scale incremental structure based on a hierarchical scheme. The resulting approaches allow readers to achieve high accuracy with limited computational complexity, making them appropriate for online, real-time operation. Applications can be found in any domain in which regression is required. 3D Surface Reconstruction: Multi-Scale Hierarchical Approaches is designed as a secondary textbook or reference for advanced-level students and researchers in computer science. This book also targets practitioners working in computer vision or machine learning related fields.
Welcome to the Second International IFIP Entertainment Computing Symposium on Cultural Computing (ECS 2010), which was part of the 21st IFIP World Computer Congress, held in Brisbane, Australia, during September 21-23, 2010. On behalf of the people who made this conference happen, we wish to welcome you to this international event. The IFIP World Computer Congress has offered an opportunity for researchers and practitioners to present their findings and research results in several prominent areas of computer science and engineering. At the last World Computer Congress, WCC 2008, held in Milan, Italy, in September 2008, IFIP launched a new initiative focused on all the relevant issues concerning computing and entertainment. As a result, the two-day technical program of the First Entertainment Computing Symposium (ECS 2008) provided a forum to address, explore and exchange information on the state of the art of computer-based entertainment and allied technologies, their design and use, and their impact on society. Based on the success of ECS 2008, at this Second IFIP Entertainment Computing Symposium (ECS 2010) our challenge was to focus on a new area in entertainment computing: cultural computing.
The book covers all aspects of computing, communication, general sciences and educational research presented at the Second International Conference on Computer & Communication Technologies, held during 24-26 July 2015 at Hyderabad and hosted by CMR Technical Campus in association with Division V (Education & Research) of CSI, India. After a rigorous review, only quality papers were selected and included in this book. The entire work is divided into three volumes, which cover a variety of topics including medical imaging, networks, data mining, intelligent computing, software design, image processing, mobile computing, digital signals and speech processing, video surveillance and processing, web mining, wireless sensor networks, circuit analysis, fuzzy systems, antenna and communication systems, biomedical signal processing and applications, cloud computing, embedded systems applications, and cyber security and digital forensics. Readers of these volumes will benefit greatly from the technical content of these topics.
Soft Computing Approach to Pattern Classification and Object Recognition establishes an innovative, unified approach to supervised pattern classification and model-based occluded object recognition. The book also surveys various soft computing tools, including fuzzy relational calculus (FRC), genetic algorithms (GA) and multilayer perceptrons (MLP), to provide a strong foundation for the reader. The supervised approach to pattern classification and the model-based approach to occluded object recognition are treated in one framework, one based on either a conventional interpretation or a new interpretation of multidimensional fuzzy implication (MFI) and a novel notion of the fuzzy pattern vector (FPV). By combining practice and theory, a completely independent design methodology was developed in conjunction with this supervised approach on a unified framework, and then tested thoroughly against both synthetic and real-life data. In the field of soft computing, such an application-oriented design study is unique in nature. The monograph essentially mimics the cognitive process of human decision making, and carries a message of perceptual integrity in representational diversity. Soft Computing Approach to Pattern Classification and Object Recognition is intended for researchers in the area of pattern classification and computer vision. Other academics and practitioners will also find the book valuable.
"Foundations of Large-Scale Multimedia Information Management and Retrieval: Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data, and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I, Knowledge Representation and Semantic Analysis, focuses on the key components of the mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and multimodal fusion. Part II, Scalability Issues, presents indexing and distributed methods for scaling up these components for high-dimensional data and Web-scale datasets. The book presents some real-world applications and remarks on future research and development directions. The book is designed for researchers, graduate students, and practitioners in the fields of computer vision, machine learning, large-scale data mining, databases, and multimedia information retrieval. Dr. Edward Y. Chang was a professor at the Department of Electrical & Computer Engineering, University of California at Santa Barbara, before he joined Google as a research director in 2006. Dr. Chang received his M.S. degree in Computer Science and Ph.D. degree in Electrical Engineering, both from Stanford University.
Biometrics and Kansei Engineering is the first book to bring together the principles and applications of each discipline. The future of biometrics is in need of new technologies that can depend on people's emotions and the prediction of their intention to take an action. Behavioral biometrics studies the way people walk, talk, and express their emotions, while Kansei Engineering focuses on interactions between users, products/services and product psychology. The two are becoming quite complementary. This book also introduces biometric applications in our environment, which further illustrate the close relationship between biometrics and Kansei Engineering. Examples and case studies are provided throughout. Biometrics and Kansei Engineering is designed as a reference book for professionals working in these related fields. Advanced-level students and researchers studying computer science and engineering will find this book useful as a reference or secondary textbook as well.
The three-volume set LNCS 8834, LNCS 8835, and LNCS 8836 constitutes the proceedings of the 21st International Conference on Neural Information Processing, ICONIP 2014, held in Kuching, Malaysia, in November 2014. The 231 full papers presented were carefully reviewed and selected from 375 submissions. The selected papers cover major topics of theoretical research, empirical study, and applications of neural information processing research. The three volumes contain topical sections with articles on cognitive science, neural networks and learning systems, theory and design, applications, kernel and statistical methods, evolutionary computation and hybrid intelligent systems, signal and image processing, and special sessions on intelligent systems for supporting decision-making processes, theories and applications, cognitive robotics, and learning systems for social network and web mining.
The two-volume set LNCS 8887 and 8888 constitutes the refereed proceedings of the 10th International Symposium on Visual Computing, ISVC 2014, held in Las Vegas, NV, USA. The 74 revised full papers and 55 poster papers presented together with 39 special track papers were carefully reviewed and selected from more than 280 submissions. The papers are organized in topical sections: Part I (LNCS 8887) comprises computational bioimaging; computer graphics; motion, tracking, feature extraction and matching; segmentation; visualization; mapping, modeling and surface reconstruction; unmanned autonomous systems; medical imaging; tracking for human activity monitoring; intelligent transportation systems; and visual perception and robotic systems. Part II (LNCS 8888) comprises topics such as computational bioimaging, recognition, computer vision, applications, face processing and recognition, virtual reality, and the poster sessions.
Geometric Modeling and Algebraic Geometry, though closely related, are traditionally represented by two almost disjoint scientific communities. Both fields deal with objects defined by algebraic equations, but the objects are studied in different ways. In 12 chapters written by leading experts, this book presents recent results which rely on the interaction of both fields. Some of these results have been obtained from a major European project in geometric modeling.
Throughout much of machine vision's early years, infrared imagery suffered from a poor return on investment despite its advantages over visual counterparts. Recently, the fiscal momentum has switched in favor of both manufacturers and practitioners of infrared technology as a result of today's rising security and safety challenges, advances in thermographic sensors, and their continuous drop in cost. This has yielded a great impetus toward ever better performance in remote surveillance, object recognition, guidance, noncontact medical measurements, and more. The purpose of this book is to draw attention to recent successful efforts to merge computer vision applications (nonmilitary only) and nonvisual imagery, as well as to fill the need in the literature for an up-to-date, convenient reference on machine vision and infrared technologies. Augmented Perception in Infrared provides a comprehensive review of the recent deployment of infrared sensors in modern applications of computer vision, along with an in-depth description of the world's best machine vision algorithms and intelligent analytics. Its topics encompass many disciplines of machine vision, including remote sensing, automatic target detection and recognition, background modeling and image segmentation, object tracking, face and facial expression recognition, invariant shape characterization, disparate sensor fusion, noncontact physiological measurements, night vision, and target classification. Its application scope includes homeland security, public transportation, surveillance, medical, and military uses. Moreover, this book emphasizes the merging of the aforementioned machine perception applications and nonvisual imaging in the intensified, near infrared, thermal infrared, laser, polarimetric, and hyperspectral bands.
This book constitutes the refereed proceedings of the 36th German Conference on Pattern Recognition, GCPR 2014, held in Münster, Germany, in September 2014. The 58 revised full papers and 8 short papers were carefully reviewed and selected from 153 submissions. The papers are organized in topical sections on variational models for depth and flow, reconstruction, bio-informatics, deep learning and segmentation, feature computation, video interpretation, segmentation and labeling, image processing and analysis, human pose and people tracking, and interpolation and inpainting.
This unique book explores the important issues in the study of active visual perception. The book's eleven chapters draw on important work in robot vision from the past ten years, particularly in the use of new concepts. Implementation examples are provided together with theoretical methods for testing in a real robot system. With these optimal sensor planning strategies, this book will give the robot vision system the adaptability needed in many practical applications.
David Stevens
Space-based information, which includes earth observation data, is increasingly becoming an integral part of our lives. We have been relying for decades on data obtained from meteorological satellites for updates on the weather and to monitor weather-related natural disasters such as hurricanes. We now count on our personal satellite-based navigation systems to guide us to the nearest Starbucks Coffee, and use web-based applications such as Google Earth and Microsoft Virtual Earth to study the areas of places we will, or would like to, visit. At the same time, satellite-based technologies have experienced impressive growth in recent years, with an increase in the number of available sensors; an increase in spatial, temporal and spectral resolutions; an increase in the availability of radar satellites such as TerraSAR-X and ALOS; and the launching of specific constellations such as the Disaster Monitoring Constellation (DMC), COSMO-SkyMed (COnstellation of small Satellites for the Mediterranean basin Observation) and RapidEye. Even more recent are the initiatives being set up to ensure that space-based information is accessed and used by decision makers, such as Sentinel Asia for the Asia and Pacific region and SERVIR for the Latin America and Caribbean region.
Implicit objects have gained increasing importance in geometric modeling, visualisation, animation, and computer graphics, because their geometric properties provide a good alternative to traditional parametric objects. This book presents the mathematics, computational methods and data structures, as well as the algorithms needed to render implicit curves and surfaces, and shows how implicit objects can easily describe smooth, intricate, and articulatable shapes, and hence why they are being increasingly used in graphical applications. Divided into two parts, the first introduces the mathematics of implicit curves and surfaces, as well as the data structures suited to store their sampled or discrete approximations, and the second deals with different computational methods for sampling implicit curves and surfaces, with particular reference to how these are applied to functions in 2D and 3D spaces.