Scale is a concept the antiquity of which can hardly be traced. Certainly the familiar phenomena that accompany scale changes in optical patterns are mentioned in the earliest written records. The most obvious topological changes, such as the creation or annihilation of details, have been a topic of fascination to philosophers, artists and later scientists. This appears to be the case for all cultures from which extensive written records exist. For instance, Chinese 17th-century artist manuals remark that "distant faces have no eyes". The merging of details is also obvious to many authors; e.g., Lucretius mentions the fact that distant islands look like a single one. The one topological event that is (to the best of my knowledge) mentioned only late (by John Ruskin in his "Elements of Drawing" of the mid 19th century) is the splitting of a blob on blurring. The change of images on a gradual increase of resolution has been a recurring theme in the arts (e.g., the poetic description of the distant armada in Calderon's The Constant Prince), and this "mystery" (as Ruskin calls it) is constantly exploited by painters.
After 20 years of pursuing rough set theory and its applications, a look at its present state and further prospects is badly needed. The monograph Rough Set Theory and Granular Computing, edited by Masahiro Inuiguchi, Shoji Hirano and Shusaku Tsumoto, meets this demand. It presents the newest developments in this area and gives a fair picture of the state of the art in this domain. Firstly, in the keynote papers by Zdzislaw Pawlak, Andrzej Skowron and Sankar K. Pal, the relationship of rough sets with other important methods of data analysis - Bayes' theorem, neurocomputing and pattern recognition - is thoroughly examined. Next, several interesting generalizations of the theory and new directions of research are presented. Furthermore, the application of rough sets in data mining, in particular rule induction methods based on rough set theory, is presented and discussed. A further important issue discussed in the monograph is rough set based data analysis, including the study of decision making in conflict situations. Last but not least, some recent engineering applications of rough set theory are given. They include a proposal of a rough set processor architecture for fast implementation of basic rough set operations, and a discussion of results concerning advanced image processing for unmanned aerial vehicles. Thus the monograph, besides presenting a wide spectrum of ongoing research in this area, also points out new emerging areas of study and applications, which makes it a valuable source of information for all interested in this domain.
3D Face Processing: Modeling, Analysis and Synthesis introduces the frontiers of 3D face processing techniques. It reviews existing 3D face processing techniques, including techniques for 3D face geometry modeling, 3D face motion modeling, and 3D face motion tracking and animation. It then discusses a unified framework for face modeling, analysis and synthesis. In this framework, the authors present new methods for modeling complex natural facial motion, as well as face appearance variations due to illumination and subtle motion. The authors then apply the framework to face tracking, expression recognition and face avatars for HCI. They conclude the book with comments on future work in the 3D face processing framework.
Assembled in this volume is a collection of some of the state-of-the-art methods that are using computer vision and machine learning techniques as applied in robotic applications. Currently there is a gap between research conducted in the computer vision and robotics communities. This volume discusses contrasting viewpoints of computer vision vs. robotics, and provides current and future challenges discussed from a research perspective.
This book covers the MPEG H.264 and MS VC-1 video coding standards as well as issues in broadband video delivery over IP networks. This professional reference is designed for industry practitioners, including video engineers, and professionals in consumer electronics, telecommunications and media compression industries. The book is also suitable as a secondary text for advanced-level students in computer science and electrical engineering.
In recent years 3D geo-information has become an important research area due to the increased complexity of tasks in many geo-scientific applications, such as sustainable urban planning and development, civil engineering, risk and disaster management and environmental monitoring. Moreover, a paradigm of cross-application merging and integrating of 3D data is observed. The problems and challenges facing today's 3D software, generally application-oriented, focus almost exclusively on 3D data transportability issues - the ability to use data originally developed in one modelling/visualisation system in another, and vice versa. Tools for elaborated 3D analysis, simulation and prediction are either missing or, when available, dedicated to specific tasks. In order to respond to this increased demand, a new type of system has to be developed. A fully developed 3D geo-information system should be able to manage 3D geometry and topology, to integrate 3D geometry and thematic information, to analyze both spatial and topological relationships, and to present the data in a suitable form. In addition to the simple geometry types like point, line and polygon, a large variety of parametric representations, freeform curves and surfaces or sweep shapes have to be supported. Approaches for seamless conversion between 3D raster and 3D vector representations should be available, allowing analysis in the representation most suitable for a specific application.
In this book, research and development trends of physics, engineering, mathematics and computer sciences in biomedical engineering are presented. Contributions from industry, clinics, universities and research labs with foci on medical imaging (CT, MRT, US, PET, SPECT etc.), medical image processing (segmentation, registration, visualization etc.), computer-assisted surgery (medical robotics, navigation), biomechanics (motion analysis, accident research, computers in sports, ergonomics etc.), biomedical optics (OCT, soft-tissue optics, optical monitoring etc.) and laser medicine (tissue ablation, gas analytics, topometry etc.) give insight into recent engineering, clinical and mathematical studies.
Although computer graphics games and animations have been popular for more than a decade, only recently have personal computers evolved to support real-time, realistic-looking interactive games. OpenGL, a technology standard for developing CG applications, has had incredible momentum in both the professional and consumer markets. Once only the domain of production houses, OpenGL has grown to be the standard for graphics programming on all platforms, personal computers, and workstations. Now more than ever, people are eager to learn about what it takes to make such productions, and how they can be a part of them. Current literature on how to make movies/games focuses more on the technology (OpenGL, DirectX, etc.) and their APIs rather than on the principles of computer graphics. However, understanding these principles is the key to dealing with any technology API. The aim of "Principles of Computer Graphics and OpenGL" is to teach readers the principles of computer graphics. Hands-on examples developed in OpenGL illustrate the key concepts, and readers develop a professional animation, following traditional processes used in production houses. By the end of the book, readers will be experts in the principles of computer graphics and OpenGL. They will be able to develop their own professional-quality games via the same approach used in production houses.
This book traces progress in photography since the first pinhole, or camera obscura, architecture. The authors describe innovations such as photogrammetry, and omnidirectional vision for robotic navigation. The text shows how new camera architectures create a need to master related projective geometries for calibration, binocular stereo, static or dynamic scene understanding. Written by leading researchers in the field, this book also explores applications of alternative camera architectures.
The application of geometric algebra to the engineering sciences is a young, active subject of research. The promise of this field is that the mathematical structure of geometric algebra together with its descriptive power will result in intuitive and more robust algorithms. This book examines all aspects essential for a successful application of geometric algebra: the theoretical foundations, the representation of geometric constraints, and the numerical estimation from uncertain data. Formally, the book consists of two parts: theoretical foundations and applications. The first part includes chapters on random variables in geometric algebra, linear estimation methods that incorporate the uncertainty of algebraic elements, and the representation of geometry in Euclidean, projective, conformal and conic space. The second part is dedicated to applications of geometric algebra, which include uncertain geometry and transformations, a generalized camera model, and pose estimation. Graduate students, scientists, researchers and practitioners will benefit from this book. The examples given in the text are mostly recent research results, so practitioners can see how to apply geometric algebra to real tasks, while researchers note starting points for future investigations. Students will profit from the detailed introduction to geometric algebra, while the text is supported by the author's visualization software, CLUCalc, freely available online, and a website that includes downloadable exercises, slides and tutorials.
Multimodal Video Characterization and Summarization is a valuable research tool for both professionals and academicians working in the video field. This book describes the methodology for using multimodal audio, image, and text technology to characterize video content. This new and groundbreaking science has led to many advances in video understanding, such as the development of a video summary. Applications and methodology for creating video summaries are described, as well as user-studies for evaluation and testing.
The two-volume proceedings, LNCS 6927 and LNCS 6928, constitute the papers presented at the 13th International Conference on Computer Aided Systems Theory, EUROCAST 2011, held in February 2011 in Las Palmas de Gran Canaria, Spain. The total of 160 papers presented were carefully reviewed and selected for inclusion in the books. The contributions are organized in topical sections on concepts and formal tools; software applications; computation and simulation in modelling biological systems; intelligent information processing; heuristic problem solving; computer aided systems optimization; model-based system design, simulation, and verification; computer vision and image processing; modelling and control of mechatronic systems; biomimetic software systems; computer-based methods for clinical and academic medicine; modeling and design of complex digital systems; mobile and autonomous transportation systems; traffic behaviour, modelling and optimization; mobile computing platforms and technologies; and engineering systems applications.
This book constitutes the thoroughly refereed post-conference proceedings of the 18th Annual International Workshop on Selected Areas in Cryptography, SAC 2011, held in Toronto, Canada, in August 2011. The 23 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 92 submissions. The papers are organized in topical sections on cryptanalysis of hash functions, security in clouds, bits and randomness, cryptanalysis of ciphers, cryptanalysis of public-key cryptography, cipher implementation, new designs and mathematical aspects of applied cryptography.
Ambient Intelligence is a vision of the future where the world will be surrounded by electronic environments sensitive and responsive to people, wherein devices work in concert to support people in carrying out their everyday life activities in an easy and natural way. This edited volume is based on the workshop Multimedia Techniques for Ambient Intelligence (MTDAI08), held in Mogliano Veneto, Italy, in March 2008. Contributed by world-renowned leaders in the field from academia and industry, this volume is dedicated to research on technologies used to improve the intelligence capability of multimedia devices for imaging, image processing and computer vision. It focuses on recent developments in digital signal processing, including evolutions in audiovisual signal processing, analysis, coding and authentication, and retrieval techniques. Designed for researchers and professionals, this book is also suitable for advanced-level students in computer science and electrical engineering.
This book contains the carefully selected and reviewed papers presented at three satellite events that were held in conjunction with the 11th International Conference on Web Information Systems Engineering, WISE 2010, in Hong Kong, China, in December 2010. The collection comprises a total of 40 contributions that originate from the First International Symposium on Web Intelligent Systems and Services (WISS 2010), from the First International Workshop on Cloud Information Systems Engineering (CISE 2010) and from the Second International Workshop on Mobile Business Collaboration (MBC 2010). The papers address a wide range of hot topics and are organized in topical sections on: decision and e-markets; rules and XML; web service intelligence; semantics and services; analyzing web resources; engineering web systems; intelligent web applications; web communities and personalization; cloud information system engineering; mobile business collaboration.
Biological vision is a rather fascinating domain of research. Scientists of various origins - biology, medicine, neurophysiology, engineering, mathematics, etc. - aim to understand the processes leading to visual perception and to reproduce such systems. Understanding the environment is most of the time done through visual perception, which appears to be one of the most fundamental sensory abilities in humans, and therefore a significant amount of research effort has been dedicated towards modelling and reproducing human visual abilities. Mathematical methods play a central role in this endeavour. David Marr's theory was a pioneering step towards understanding visual perception. In his view, human vision was based on a complete surface reconstruction of the environment that was then used to address visual subtasks. This approach was proven to be insufficient by neuro-biologists, and complementary ideas from statistical pattern recognition and artificial intelligence were introduced to better address the visual perception problem. In this framework, visual perception is represented by a set of actions and rules connecting these actions. The emerging concept of active vision consists of a selective visual perception paradigm that is basically equivalent to recovering from the environment the minimal piece of information required to address a particular task of interest.
This book presents the most recent achievements in some rapidly developing fields within Computer Science. This includes the very latest research in biometrics and computer security systems, and descriptions of the latest inroads in artificial intelligence applications. The book contains over 30 articles by well-known scientists and engineers. The articles are extended versions of works introduced at the ACS-CISIM 2005 conference.
As our heritage deteriorates through erosion, human error or natural disasters, it has become more important than ever to preserve our past - even if it is in digital form only. This highly relevant work describes thorough research and methods for preserving cultural heritage objects through the use of 3D digital data. These methods were developed via computer vision and computer graphics technologies. They offer a way of passing our heritage down to future generations.
Biomolecular sequence comparison is the origin of bioinformatics. This book gives a complete, in-depth treatment of the study of sequence comparison. A comprehensive introduction is followed by a focus on alignment algorithms and techniques, and then by a discussion of the theory. The book examines alignment methods and techniques, features a new topic in sequence comparison - the spaced seed technique - addresses several new flexible strategies for coping with various scoring schemes, and covers the theory on the significance of high-scoring segment pairs between two unaligned sequences. Useful appendices on basic concepts in molecular biology, a primer in statistics, and software for sequence alignment are included in this reader-friendly text, as well as chapter-ending exercises and research questions. A state-of-the-art study of sequence alignment and homology search, this is an ideal reference for advanced students studying bioinformatics, and will appeal to biologists who wish to know how to use homology search tools.
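As a rough illustration of the kind of alignment algorithm such a book surveys, here is a minimal Needleman-Wunsch global alignment scorer in Python. This is a generic textbook sketch, not the book's own code; the scoring values (match +1, mismatch -1, gap -2) are arbitrary choices for the example.

```python
# Minimal Needleman-Wunsch global alignment (score only, no traceback).
def align(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # dp[i][j] = best score for aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap          # a[:i] aligned against all gaps
    for j in range(1, m + 1):
        dp[0][j] = j * gap          # b[:j] aligned against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # match or substitution
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]

print(align("GATTACA", "GATCA"))   # best score: 5 matches, 2 gaps -> 1
```

Scoring schemes in practice are more elaborate (substitution matrices, affine gap penalties), which is exactly the variety of schemes the book's strategies address.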
Gaussian scale-space is one of the best understood multi-resolution techniques available to the computer vision and image analysis community. It is the purpose of this book to guide the reader through some of its main aspects. During an intensive weekend in May 1996 a workshop on Gaussian scale-space theory was held in Copenhagen, which was attended by many of the leading experts in the field. The bulk of this book originates from this workshop. Presently there exist only two books on the subject. In contrast to Lindeberg's monograph (Lindeberg, 1994e) this book collects contributions from several scale-space researchers, whereas it complements the book edited by ter Haar Romeny (Haar Romeny, 1994) on non-linear techniques by focusing on linear diffusion. This book is divided into four parts. The reader not so familiar with scale-space will find it instructive to first consider some potential applications described in Part I. Parts II and III both address fundamental aspects of scale-space. Whereas scale is treated as an essentially arbitrary constant in the former, the latter emphasizes the deep structure, i.e. the structure that is revealed by varying scale. Finally, Part IV is devoted to non-linear extensions, notably non-linear diffusion techniques and morphological scale-spaces, and their relation to the linear case. The Danish National Science Research Council is gratefully acknowledged for providing financial support for the workshop under grant no. 9502164.
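The central scale-space phenomenon, the merging of nearby features as scale increases, can be sketched in a few lines of Python. The 1-D signal, the kernel truncation radius, and the helper names below are illustrative choices of my own, not taken from the book.

```python
import math

def gaussian_kernel(sigma):
    # Sampled, normalized Gaussian, truncated at about 3 sigma.
    radius = max(1, int(3 * sigma))
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    # 1-D convolution with replicated borders.
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def local_maxima(signal):
    # Count strict interior local maxima.
    return sum(1 for i in range(1, len(signal) - 1)
               if signal[i] > signal[i - 1] and signal[i] > signal[i + 1])

# Two nearby bumps: at a coarse enough scale they merge into one blob.
sig = [0] * 10 + [1, 3, 1] + [0] * 3 + [1, 3, 1] + [0] * 10
print(local_maxima(sig), local_maxima(smooth(sig, 4.0)))   # 2 maxima become 1
```

Tracking how such extrema appear and disappear across scales is one version of the "deep structure" studied in Part III.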
This is the first edited book that deals with the special topic of signals and images within Case-Based Reasoning (CBR). Signal-interpreting systems are becoming increasingly popular in medical, industrial, ecological, biotechnological and many other applications. Existing statistical and knowledge-based techniques lack robustness, accuracy and flexibility. New strategies are needed that can adapt to changing environmental conditions, signal variation, user needs and process requirements. Introducing CBR strategies into signal-interpreting systems can satisfy these requirements.
The area of content-based video retrieval is a very hot area both for research and for commercial applications. In order to design effective video databases for applications such as digital libraries, video production, and a variety of Internet applications, there is a great need to develop effective techniques for content-based video retrieval. One of the main issues in this area of research is how to bridge the semantic gap between low-level features extracted from a video (such as color, texture, shape, motion, and others) and semantics that describe video concepts on a higher level. In this book, Dr. Milan Petković and Prof. Dr. Willem Jonker have addressed this issue by developing and describing several innovative techniques to bridge the semantic gap. The main contribution of their research, which is the core of the book, is the development of three techniques for bridging the semantic gap: (1) a technique that uses the spatio-temporal extension of the Cobra framework, (2) a technique based on hidden Markov models, and (3) a technique based on Bayesian belief networks. To evaluate the performance of these techniques, the authors have conducted a number of experiments using real video data. The book also discusses domain-specific solutions versus a general solution of the problem. Petković and Jonker propose a solution that allows a system to be applied in multiple domains with minimal adjustments. They also designed and described a prototype video database management system, which is based on the techniques they propose in the book.
Invariant, or coordinate-free, methods provide a natural framework for many geometric questions. Invariant Methods in Discrete and Computational Geometry provides a basic introduction to several aspects of invariant theory, including the supersymmetric algebra, the Grassmann-Cayley algebra, and Chow forms. It also presents a number of current research papers on invariant theory and its applications to problems in geometry, such as automated theorem proving and computer vision. Audience: Researchers studying mathematics, computers and robotics.
As a graduate student at Ohio State in the mid-1970s, I inherited a unique computer vision laboratory from the doctoral research of previous students. They had designed and built an early frame-grabber to deliver digitized color video from a (very large) electronic video camera on a tripod to a mini-computer (sic) with a (huge) disk drive - about the size of four washing machines. They had also designed a binary image array processor and programming language, complete with a user's guide, to facilitate designing software for this one-of-a-kind processor. The overall system enabled programmable real-time image processing at video rate for many operations. I had the whole lab to myself. I designed software that detected an object in the field of view, tracked its movements in real time, and displayed a running description of the events in English. For example: "An object has appeared in the upper right corner... It is moving down and to the left... Now the object is getting closer... The object moved out of sight to the left" - about like that. The algorithms were simple, relying on a sufficient image intensity difference to separate the object from the background (a plain wall). From computer vision papers I had read, I knew that vision in general imaging conditions is much more sophisticated. But it worked, it was great fun, and I was hooked.
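The intensity-difference scheme the author describes can be sketched as follows. This is a toy reconstruction under stated assumptions, not the original lab software: the frames, threshold value, and helper names are all hypothetical.

```python
# Toy version of the lab setup described above: separate a bright object from
# a plain background by intensity difference, then report its motion direction.
def detect(frame, background, threshold=50):
    """Bounding box (rmin, cmin, rmax, cmax) of changed pixels, or None."""
    changed = [(r, c) for r, row in enumerate(frame)
                      for c, v in enumerate(row)
                      if abs(v - background[r][c]) > threshold]
    if not changed:
        return None
    rs = [r for r, _ in changed]
    cs = [c for _, c in changed]
    return (min(rs), min(cs), max(rs), max(cs))

def describe_motion(box_a, box_b):
    # Compare box centers across frames, as in "moving down and to the left".
    ca = ((box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2)
    cb = ((box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2)
    vert = "down" if cb[0] > ca[0] else "up" if cb[0] < ca[0] else ""
    horz = "right" if cb[1] > ca[1] else "left" if cb[1] < ca[1] else ""
    return " and ".join(w for w in (vert, horz) if w) or "stationary"

bg = [[10] * 6 for _ in range(6)]                 # plain wall
f1 = [row[:] for row in bg]; f1[1][4] = 200       # bright object, upper right
f2 = [row[:] for row in bg]; f2[3][2] = 200       # same object, lower left
print(describe_motion(detect(f1, bg), detect(f2, bg)))   # -> down and left
```

As the anecdote notes, this only works against a plain wall: any textured or changing background defeats a bare intensity threshold, which is where general-purpose vision gets hard.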