This book constitutes the thoroughly refereed post-proceedings of the International Workshop on Coordination, Organization, Institutions and Norms in Agent Systems, COIN@AAMAS 2012, held in Valencia, Spain, in June 2012. The 13 revised full papers presented together with 1 invited talk went through several rounds of reviewing and revision and were carefully selected for presentation. The papers are organized in topical sections on compliance and enforcement; norm emergence and social strategies; and refinement, contextualisation and adaptation.
This book features research papers presented at the International Conference on Emerging Technologies in Data Mining and Information Security (IEMIS 2020), held at the University of Engineering & Management, Kolkata, India, in July 2020. The book is organized in three volumes and includes high-quality research work by academicians and industrial experts in the field of computing and communication, including full-length papers, research-in-progress papers, and case studies across the areas of data mining, machine learning, Internet of Things (IoT), and information security.
One of the most intriguing questions in image processing is the problem of recovering the desired or perfect image from a degraded version. In many instances one has the feeling that the degradations in the image are such that relevant information is close to being recognizable, if only the image could be sharpened just a little. This monograph discusses the two essential steps by which this can be achieved, namely the topics of image identification and restoration. More specifically, the goal of image identification is to estimate the properties of the imperfect imaging system (blur) from the observed degraded image, together with some (statistical) characteristics of the noise and the original (uncorrupted) image. On the basis of these properties the image restoration process computes an estimate of the original image. Although there are many textbooks addressing the image identification and restoration problem in a general image processing setting, there are hardly any texts which give an in-depth treatment of the state of the art in this field. This monograph discusses iterative procedures for identifying and restoring images which have been degraded by a linear spatially invariant blur and additive white observation noise. As opposed to non-iterative methods, iterative schemes are able to solve the image restoration problem when formulated as a constrained and spatially variant optimization problem. In this way restoration results can be obtained which outperform the results of conventional restoration filters.
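To make the iterative idea concrete, here is a minimal sketch of one classic scheme from this family, the Landweber (successive-approximation) iteration, written with NumPy/SciPy. It assumes the blur kernel is already known (the identification step is done) and uses positivity as a simple constraint; the function name and parameters are illustrative, not taken from the monograph.

```python
import numpy as np
from scipy.signal import fftconvolve

def landweber_restore(y, psf, beta=1.0, iters=100):
    """Iteratively restore an image y degraded by a known linear,
    spatially invariant blur (psf) plus additive noise, using the
    Landweber iteration: x_{k+1} = x_k + beta * H^T (y - H x_k)."""
    psf_adj = psf[::-1, ::-1]             # adjoint of convolution = correlation
    x = y.copy()                          # start from the observed image
    for _ in range(iters):
        residual = y - fftconvolve(x, psf, mode="same")
        x = x + beta * fftconvolve(residual, psf_adj, mode="same")
        x = np.clip(x, 0.0, None)         # constrained restoration: positivity
    return x

# Toy example: a 5x5 uniform blur with mild additive white noise.
rng = np.random.default_rng(0)
psf = np.full((5, 5), 1.0 / 25.0)
truth = rng.random((64, 64))
observed = fftconvolve(truth, psf, mode="same") + 0.01 * rng.standard_normal((64, 64))
restored = landweber_restore(observed, psf)
```

For a normalized averaging kernel the operator norm of the blur is at most 1, so the step size beta=1.0 keeps the iteration convergent.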
Although synthetic environments were traditionally used in military settings for mission rehearsal and simulations, their use is rapidly spreading to a variety of applications in the commercial, research and industrial sectors, such as flight training for commercial aircraft, city planning, car safety research in real-time traffic simulations, and video games. 3D Synthetic Environment Reconstruction contains seven invited chapters from leading experts in the field, bringing together a coherent body of recent knowledge on 3D geospatial data collection, design issues, and techniques used in synthetic environment design, implementation and interoperability. In particular, this book describes new techniques for the generation of synthetic environments with increased resolution and rich attribution, both essential for accurate modeling and simulation. This book also deals with interoperability of models and simulations, which is necessary for facilitating the reuse of modeling and simulation components. 3D Synthetic Environment Reconstruction is an excellent reference for researchers and practitioners in the field.
The book deals with the development of a methodology to estimate the motion field between two frames for video coding applications. It proposes an exhaustive study of the motion estimation process in the framework of a general video coder. The conceptual explanations are presented in simple language with suitable figures. The book will serve as a guide for new researchers working in the field of motion estimation techniques.
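As an illustration of the kind of motion estimation used in video coders (not necessarily the book's own methodology), here is a minimal full-search block-matching sketch: each block of the current frame is matched against displaced blocks of the reference frame by minimizing the sum of absolute differences (SAD). Names and parameters are illustrative.

```python
import numpy as np

def block_matching(ref, cur, block=8, search=7):
    """Full-search block matching: for each block of the current frame,
    find the integer displacement into the reference frame minimizing
    the sum of absolute differences (SAD). Returns a per-block motion field."""
    ref = ref.astype(np.float64)          # avoid uint8 wrap-around in the SAD
    cur = cur.astype(np.float64)
    h, w = cur.shape
    motion = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(ref[y:y + block, x:x + block] - target).sum()
                        if sad < best_sad:
                            best, best_sad = (dy, dx), sad
            motion[by // block, bx // block] = best
    return motion
```

Full search is the brute-force baseline; practical coders prune the search window with faster strategies, but the cost function and the per-block motion field are the same.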
This volume constitutes the refereed proceedings of the 6th International Conference on Multimedia Communications, Services and Security, MCSS 2013, held in Krakow, Poland, in June 2013. The 27 full papers included in the volume were selected from numerous submissions. The papers cover various topics related to multimedia technology and its application to public safety problems.
Bayesian Approach to Image Interpretation will interest anyone working in image interpretation. It is complete in itself and includes background material, which makes it useful for a novice as well as for an expert. It reviews some of the existing probabilistic methods for image interpretation and presents some new results. Additionally, there is an extensive bibliography covering references in varied areas. For a researcher in this field, the material on the synergistic integration of segmentation and interpretation modules and the Bayesian approach to image interpretation will be beneficial. For a practicing engineer, the procedure for generating the knowledge base, selecting the initial temperature for the simulated annealing algorithm, and some implementation issues will be valuable. New ideas introduced in the book include: a new approach to image interpretation using synergism between the segmentation and interpretation modules; a new segmentation algorithm based on multiresolution analysis; novel use of Bayesian networks (causal networks) for image interpretation; and an emphasis on making the interpretation approach less dependent on the knowledge base, and hence more reliable, by modeling the knowledge base in a probabilistic framework. Useful in both the academic and industrial research worlds, Bayesian Approach to Image Interpretation may also be used as a textbook for a semester course in computer vision or pattern recognition.
Current research in Visual Database Systems can be characterized by scalability, multi-modality of interaction, and higher semantic levels of data. Visual interfaces that allow users to interact with large databases must scale to web and distributed applications. Interaction with databases must employ multiple and more diversified interaction modalities, such as speech and gesture, in addition to visual exploitation. Finally, the basic elements managed in modern databases are rapidly evolving, from text, images, sound, and video, to compositions and now annotations of these media, thus incorporating ever-higher levels and different facets of semantics. In addition to visual interfaces and multimedia databases, Visual and Multimedia Information Management includes research in the following areas: * Speech and aural interfaces to databases; * Visualization of web applications and database structure; * Annotation and retrieval of image databases; * Visual querying in geographical information systems; * Video databases; and * Virtual environment and modeling of complex shapes. Visual and Multimedia Information Management comprises the proceedings of the sixth International Conference on Visual Database Systems, which was sponsored by the International Federation for Information Processing (IFIP), and held in Brisbane, Australia, in May 2002. This volume will be essential for researchers in the field of management of visual and multimedia information, as well as for industrial practitioners concerned with building IT products for managing visual and multimedia information.
This book is an introduction to the fundamental concepts and tools needed for solving problems of a geometric nature using a computer. It attempts to fill the gap between standard geometry books, which are primarily theoretical, and applied books on computer graphics, computer vision, robotics, or machine learning. This book covers the following topics: affine geometry, projective geometry, Euclidean geometry, convex sets, SVD and principal component analysis, manifolds and Lie groups, quadratic optimization, basics of differential geometry, and a glimpse of computational geometry (Voronoi diagrams and Delaunay triangulations). Some practical applications of the concepts presented in this book include computer vision, more specifically contour grouping, motion interpolation, and robot kinematics. In this extensively updated second edition, more material on convex sets, Farkas's lemma, quadratic optimization and the Schur complement has been added. The chapter on SVD has been greatly expanded and now includes a presentation of PCA. The book is well illustrated and has chapter summaries and a large number of exercises throughout. It will be of interest to a wide audience including computer scientists, mathematicians, and engineers. Reviews of the first edition: "Gallier's book will be a useful source for anyone interested in applications of geometrical methods to solve problems that arise in various branches of engineering. It may help to develop the sophisticated concepts from the more advanced parts of geometry into useful tools for applications." (Mathematical Reviews, 2001) "...it will be useful as a reference book for postgraduates wishing to find the connection between their current problem and the underlying geometry." (The Australian Mathematical Society, 2001)
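Since the blurb highlights the expanded SVD chapter and its presentation of PCA, the following short sketch shows the standard construction of PCA via the SVD; it is a generic illustration, not code from the book.

```python
import numpy as np

def pca_via_svd(X, k):
    """PCA through the SVD: rows of X are samples, columns are features.
    Returns the top-k principal directions, the projected data, and the
    variance captured along each direction."""
    Xc = X - X.mean(axis=0)                   # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                        # top-k principal directions
    scores = Xc @ components.T                 # data in the PCA coordinate system
    variances = (S**2) / (len(X) - 1)          # variance along each direction
    return components, scores, variances[:k]

# Example: project 200 five-dimensional samples onto 2 principal axes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
components, scores, variances = pca_via_svd(X, k=2)
```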
This book constitutes the thoroughly refereed post-conference proceedings of the 5th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2012, held in Vilamoura, Portugal, in February 2012. The 26 revised full papers presented together with one invited lecture were carefully reviewed and selected from a total of 522 submissions. The papers cover a wide range of topics and are organized in four general topical sections on biomedical electronics and devices; bioinformatics models, methods and algorithms; bio-inspired systems and signal processing; health informatics.
This book constitutes the refereed proceedings of the Chinese Conference on Image and Graphics Technologies and Applications, IGTA 2013, held in Beijing, China, in April 2013. The 40 papers and posters presented were carefully reviewed and selected from 89 submissions. The papers present new ideas, approaches, techniques, applications and evaluations in the field of image processing and graphics.
This book constitutes the thoroughly refereed proceedings of the 2012 ICSOC Workshops, consisting of 6 scientific satellite events organized in 3 main tracks: a workshop track (ASC, DISA, PAASC, SCEB, SeMaPS and WESOA 2012), a PhD symposium track, and a demonstration track, held in conjunction with the 10th International Conference on Service-Oriented Computing (ICSOC) in Shanghai, China, in November 2012. The 53 revised papers present a wide range of topics that fall into the general area of service computing, such as business process management, distributed systems, computer networks, wireless and mobile computing, grid computing, networking, service science, management science, and software engineering.
This book constitutes the thoroughly refereed post-conference proceedings of the 25th International Workshop on Languages and Compilers for Parallel Computing, LCPC 2012, held in Tokyo, Japan, in September 2012. The 16 revised full papers and 5 poster papers presented together with 1 invited talk were carefully reviewed and selected from 39 submissions. The papers focus on the following topics: compiling for parallelism, automatic parallelization, optimization of parallel programs, formal analysis and verification of parallel programs, parallel runtime systems, task-parallel libraries, parallel application frameworks, performance analysis tools, debugging tools for parallel programs, and parallel algorithms and applications.
Biometrics: Personal Identification in Networked Society is a comprehensive and accessible source of state-of-the-art information on all existing and emerging biometrics: the science of automatically identifying individuals based on their physiological or behavioral characteristics. In particular, the book covers: *General principles and ideas of designing biometric-based systems and their underlying tradeoffs *Identification of important issues in the evaluation of biometrics-based systems *Integration of biometric cues, and the integration of biometrics with other existing technologies *Assessment of the capabilities and limitations of different biometrics *The comprehensive examination of biometric methods in commercial use and in research development *Exploration of some of the numerous privacy and security implications of biometrics. Also included are chapters on face and eye identification, speaker recognition, networking, and other timely technology-related issues. All chapters are written by leading internationally recognized experts from academia and industry. Biometrics: Personal Identification in Networked Society is an invaluable work for scientists, engineers, application developers, systems integrators, and others working in biometrics.
This Festschrift volume, published in memory of Harald Ganzinger, contains 17 papers from colleagues all over the world and covers all the fields to which Harald Ganzinger dedicated his work during his academic career. The volume begins with a complete account of Harald Ganzinger's work and then turns its focus to the research of his former colleagues, students, and friends who pay tribute to him through their writing. Their individual papers span a broad range of topics, including programming language semantics, analysis and verification, first-order and higher-order theorem proving, unification theory, non-classical logics, reasoning modulo theories, and applications of automated reasoning in biology.
This book constitutes the carefully refereed and revised selected papers of the 5th Canada-France ETS Symposium on Foundations and Practice of Security, FPS 2012, held in Montreal, QC, Canada, in October 2012. The book contains revised versions of 21 full papers, accompanied by 3 short papers. The papers were carefully reviewed and selected from 62 submissions. The papers are organized in topical sections on cryptography and information theory, key management and cryptographic protocols, privacy and trust, policies and applications security, and network and adaptive security.
The two-volume set LNAI 7629 and LNAI 7630 constitutes the refereed proceedings of the 11th Mexican International Conference on Artificial Intelligence, MICAI 2012, held in San Luis Potosi, Mexico, in October/November 2012. The 80 revised papers presented were carefully reviewed and selected from 224 submissions. The second volume includes 40 papers focusing on soft computing. The papers are organized in the following topical sections: natural language processing; evolutionary and nature-inspired metaheuristic algorithms; neural networks and hybrid intelligent systems; fuzzy systems and probabilistic models in decision making.
This book constitutes the thoroughly refereed proceedings of the 17th International Conference on Discrete Geometry for Computer Imagery, DGCI 2013, held in Seville, Spain, in March 2013. The 34 revised full papers presented were carefully selected from 56 submissions and focus on geometric transforms, discrete and combinatorial tools for image segmentation and analysis, discrete and combinatorial topology, discrete shape representation, recognition and analysis, models for discrete geometry, morphological analysis and discrete tomography.
Deformable avatars are virtual humans that deform themselves during motion. This implies facial deformations, body deformations at joints, and global deformations. Simulating deformable avatars ensures a more realistic simulation of virtual humans. The research requires models for the capture of geometric and kinematic data, the synthesis of realistic human shape and motion, parametrisation and motion retargeting, and several appropriate deformation models. Once a deformable avatar has been created and animated, the researcher must model high-level behavior and introduce agent technology. The book can be divided into 5 subtopics: 1. Motion capture and 3D reconstruction; 2. Parametric motion and retargeting; 3. Muscles and deformation models; 4. Facial animation and communication; 5. High-level behaviors and autonomous agents. Most of the papers were presented during the IFIP workshop "DEFORM '2000", held at the University of Geneva in December 2000, followed by "AVATARS 2000", held at EPFL, Lausanne. The two workshops were sponsored by the "Troisième Cycle Romand d'Informatique" and allowed participants to discuss the state of research in these important areas. We would like to thank IFIP for its support and Yana Lambert from Kluwer Academic Publishers for her advice. Finally, we are very grateful to Zerrin Celebi, who has prepared the edited version of this book, and to Dr. Laurent Moccozet for his collaboration.
Video Object Extraction and Representation: Theory and Applications is an essential reference for electrical engineers working in video; computer scientists researching or building multimedia databases; video system designers; students of video processing; video technicians; and designers working in the graphic arts. In the coming years, the explosion of computer technology will enable a new form of digital media. Along with broadband Internet access and MPEG standards, this new media requires a computational infrastructure to allow users to grab and manipulate content. The book reviews relevant technologies and standards for content-based processing and their interrelations. Within this overview, the book focuses upon two problems at the heart of the algorithmic/computational infrastructure: video object extraction, or how to automatically package raw visual information by content; and video object representation, or how to automatically index and catalogue extracted content for browsing and retrieval. The book analyzes the designs of two novel, working systems for content-based extraction and representation in the support of MPEG-4 and MPEG-7 video standards, respectively. Features of the book include: * Overview of MPEG standards; * A working system for automatic video object segmentation; * A working system for video object query by shape; * Novel technology for a wide range of recognition problems; * Overview of neural network and vision technologies. Video Object Extraction and Representation: Theory and Applications will be of interest to research scientists and practitioners working in fields related to the topic. It may also be used as an advanced-level graduate text.
The Language of Mathematics was awarded the E.W. Beth Dissertation Prize for outstanding dissertations in the fields of logic, language, and information. It innovatively combines techniques from linguistics, philosophy of mathematics, and computation to give the first wide-ranging analysis of mathematical language. It focuses particularly on a method for determining the complete meaning of mathematical texts and on resolving technical deficiencies in all standard accounts of the foundations of mathematics. "The thesis does far more than is required for a PhD: it is more like a lifetime's work packed into three years, and is a truly exceptional achievement." Timothy Gowers
A color time-varying image can be described as a three-dimensional vector (representing the colors in an appropriate color space) defined on a three-dimensional spatiotemporal space. In conventional analog television a one-dimensional signal suitable for transmission over a communication channel is obtained by sampling the scene in the vertical and temporal directions and by frequency-multiplexing the luminance and chrominance information. In digital processing and transmission systems, sampling is applied in the horizontal direction, too, on a signal which has already been scanned in the vertical and temporal directions, or directly in three dimensions when using some solid-state sensor. As a consequence, in recent years it has been considered quite natural to assess the potential advantages arising from an entirely multidimensional approach to the processing of video signals. As a simple but significant example, a composite color video signal, such as the conventional PAL or NTSC signal, possesses a three-dimensional spectrum which, by using suitable three-dimensional filters, permits horizontal sampling at a rate which is less than that required for correctly sampling the equivalent one-dimensional signal. More recently it has been widely recognized that the improvement of picture quality in current and advanced television systems requires well-chosen signal processing algorithms which are multidimensional in nature, within the demanding constraints of a real-time implementation.
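A minimal sketch of the three-dimensional view the passage describes: treating a clip as a (time, height, width) array and computing its spatiotemporal spectrum with a 3-D FFT. The array shapes and variable names are illustrative.

```python
import numpy as np

# A video clip as a 3-D array: (time, height, width).
rng = np.random.default_rng(0)
clip = rng.random((16, 64, 64))

# Its spectrum is likewise three-dimensional, with one temporal and two
# spatial frequency axes; concentration of energy in this cube is what
# well-chosen 3-D filters exploit to lower the required sampling rate.
spectrum = np.fft.fftshift(np.fft.fftn(clip))
power = np.abs(spectrum) ** 2

# Frequency coordinates (cycles per sample) along each axis.
f_t = np.fft.fftshift(np.fft.fftfreq(clip.shape[0]))
f_y = np.fft.fftshift(np.fft.fftfreq(clip.shape[1]))
f_x = np.fft.fftshift(np.fft.fftfreq(clip.shape[2]))
```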
Super-Resolution Imaging serves as an essential reference for both academicians and practicing engineers. It can be used both as a text for advanced courses in imaging and as a desk reference for those working in multimedia, electrical engineering, computer science, and mathematics. The first book to cover the new research area of super-resolution imaging, this text includes work on the following groundbreaking topics: * Image zooming based on wavelets and generalized interpolation; * Super-resolution from sub-pixel shifts; * Use of blur as a cue; * Use of warping in super-resolution; * Resolution enhancement using multiple apertures; * Super-resolution from motion data; * Super-resolution from compressed video; * Limits in super-resolution imaging. Written by the leading experts in the field, Super-Resolution Imaging presents a comprehensive analysis of current technology, along with new research findings and directions for future work.
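As a hedged illustration of one listed topic, super-resolution from sub-pixel shifts, here is a naive shift-and-add sketch: each low-resolution frame is pasted onto a finer grid at its known sub-pixel offset and overlapping contributions are averaged. This is a toy version (integer placement on the fine grid, periodic wrap-around), not an implementation from the book.

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive super-resolution from sub-pixel shifts: paste each
    low-resolution frame onto a grid `scale` times finer at its known
    offset, then average overlapping contributions.
    frames: list of (h, w) arrays; shifts: (dy, dx) in low-res pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * scale))                  # offset on the fine grid
        ox = int(round(dx * scale))
        ys = (np.arange(h) * scale + oy) % (h * scale)   # toy periodic placement
        xs = (np.arange(w) * scale + ox) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1
    hits[hits == 0] = 1                              # unobserved cells stay 0
    return acc / hits

# e.g. four frames offset by (0,0), (0,.5), (.5,0), (.5,.5) low-res pixels
# fill a 2x finer grid exactly: shift_and_add(frames, shifts, scale=2)
```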
Video segmentation is the most fundamental process for appropriate indexing and retrieval of video intervals. In general, video streams are composed of shots delimited by physical shot boundaries. Substantial work has been done on how to detect such shot boundaries automatically (Arman et al., 1993) (Zhang et al., 1993) (Zhang et al., 1995) (Kobla et al., 1997). Through the integration of technologies such as image processing, speech/character recognition and natural language understanding, keywords can be extracted and associated with these shots for indexing (Wactlar et al., 1996). A single shot, however, rarely carries enough information to be meaningful by itself. Usually, it is a semantically meaningful interval that most users are interested in retrieving. Generally, such meaningful intervals span several consecutive shots. There hardly exists any efficient and reliable technique, either automatic or manual, to identify all semantically meaningful intervals within a video stream. Works by (Smith and Davenport, 1992) (Oomoto and Tanaka, 1993) (Weiss et al., 1995) (Hjelsvold et al., 1996) suggest manually defining all such intervals in the database in advance. However, even an hour-long video may have an indefinite number of meaningful intervals. Moreover, video data is multi-interpretative. Therefore, given a query, what is a meaningful interval to an annotator may not be meaningful to the user who issues the query. In practice, manual indexing of meaningful intervals is labour-intensive and inadequate.
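To make the shot-boundary detection step concrete, here is a minimal histogram-differencing sketch of the kind surveyed in the work cited above: a cut is declared wherever the grey-level histograms of consecutive frames differ by more than a threshold. The threshold and bin count are illustrative defaults, not values from the book.

```python
import numpy as np

def detect_shot_boundaries(frames, bins=64, threshold=0.4):
    """Declare a cut between frames i-1 and i whenever their grey-level
    histograms differ sharply. frames: iterable of 2-D arrays scaled to
    [0, 1]. Returns the indices i at which a new shot starts."""
    cuts, prev_hist = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
        hist = hist / hist.sum()               # normalize so frames compare fairly
        if prev_hist is not None:
            # L1 distance between normalized histograms lies in [0, 2]
            if np.abs(hist - prev_hist).sum() > threshold:
                cuts.append(i)
        prev_hist = hist
    return cuts
```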
Optical character recognition (OCR) is the most prominent and successful example of pattern recognition to date. There are thousands of research papers and dozens of OCR products. Optical Character Recognition: An Illustrated Guide to the Frontier offers a perspective on the performance of current OCR systems by illustrating and explaining actual OCR errors. The pictures and analysis provide insight into the strengths and weaknesses of current OCR systems, and a road map to future progress. Optical Character Recognition: An Illustrated Guide to the Frontier will pique the interest of users and developers of OCR products and desktop scanners, as well as teachers and students of pattern recognition, artificial intelligence, and information retrieval. The first chapter compares the character recognition abilities of humans and computers. The next four chapters present 280 illustrated examples of recognition errors, in a taxonomy consisting of Imaging Defects, Similar Symbols, Punctuation, and Typography. These examples were drawn from large-scale tests conducted by the authors. The final chapter discusses possible approaches for improving the accuracy of today's systems, and is followed by an annotated bibliography. Optical Character Recognition: An Illustrated Guide to the Frontier is suitable as a secondary text for a graduate level course on pattern recognition, artificial intelligence, and information retrieval, and as a reference for researchers and practitioners in industry.
You may like...
Advanced Signal Processing for Industry… by Irshad Ahmad Ansari, Varun Bajaj (Hardcover, R3,230)
Machine Learning Techniques for Pattern… by Mohit Dua, Ankit Kumar Jain (Hardcover, R8,415)
Cybernetics, Cognition and Machine… by Vinit Kumar Gunjan, P.N Suganthan, … (Hardcover, R5,503)