A state-of-the-art research monograph providing consistent treatment of supervisory control, by one of the world's leading groups in the area of Bayesian identification, control, and decision making.
This comprehensive and authoritative text/reference presents a unique, multidisciplinary perspective on Shape Perception in Human and Computer Vision. Rather than focusing purely on the state of the art, the book provides viewpoints from world-class researchers reflecting broadly on the issues that have shaped the field. Drawing upon many years of experience, each contributor discusses the trends followed and the progress made, in addition to identifying the major challenges that still lie ahead. Topics and features: examines each topic from a range of viewpoints, rather than promoting a specific paradigm; discusses topics on contours, shape hierarchies, shape grammars, shape priors, and 3D shape inference; reviews issues relating to surfaces, invariants, parts, multiple views, learning, simplicity, shape constancy and shape illusions; addresses concepts from the historically separate disciplines of computer vision and human vision using the same "language" and methods.
The new computing environment enabled by advances in service-oriented architectures, mashups, and cloud computing will consist of service spaces comprising data, applications, and infrastructure resources distributed over the Web. This environment embraces a holistic paradigm in which users, services, and resources establish on-demand interactions, possibly in real time, to realise useful experiences. Such interactions obtain relevant services that are targeted to the time and place of the user requesting the service and to the device used to access it. The benefit of such an environment originates from the added value generated by the possible interactions at large scale rather than by the capabilities of its individual components separately. This offers tremendous automation opportunities in a variety of application domains including forecasting, office tasks, travel support, intelligent information gathering and analysis, environment monitoring, healthcare, e-business, community-based systems, e-science and e-government. A key feature of this environment is the ability to dynamically compose services to realise user tasks. While recent advances in service discovery, composition and Semantic Web technologies contribute the necessary first steps to facilitate this task, the benefits of composition are still too limited to take advantage of large-scale ubiquitous environments. The mainstream composition techniques and technologies rely on human understanding and manual programming to compose and aggregate services. Recent advances improve composition by leveraging search technologies and flow-based composition languages, as in mashups and process-centric service composition.
This book constitutes the first part of the refereed proceedings of the International Conference on Life System Modeling and Simulation, LSMS 2014, and of the International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2014, held in Shanghai, China, in September 2014. The 159 revised full papers presented in the three volumes of CCIS 461-463 were carefully reviewed and selected from 572 submissions. The papers of this volume are organized in topical sections on biomedical signal processing, imaging, and visualization; computational methods and intelligence in modeling genetic and chemical networks and regulation; computational methods and intelligence in organism modeling; computational methods and intelligence in modeling and design of synthetic biological systems; computational methods and intelligence in biomechanical systems, tissue engineering and clinical bioengineering; intelligent medical apparatus and clinical applications; modeling and simulation of societies and collective behaviour; innovative education in systems modeling and simulation; data analysis and data mining of biosignals; feature selection; robust optimization and data analysis.
This book constitutes the proceedings of the 11th International Conference on Modeling Decisions for Artificial Intelligence, MDAI 2014, held in Tokyo, Japan, in October 2014. The 19 revised full papers presented together with an invited paper were carefully reviewed and selected from 38 submissions. They deal with the theory and tools for modeling decisions, as well as applications that encompass decision making processes and information fusion techniques and are organized in topical sections on aggregation operators and decision making, optimization, clustering and similarity, and data mining and data privacy.
Due to the fast growth of the Web and the difficulties in finding desired information, efficient and effective information retrieval systems have become more important than ever, and the search engine has become an essential tool for many people. The ranker, a central component in every search engine, is responsible for the matching between processed queries and indexed documents. Because of its central role, great attention has been paid to the research and development of ranking technologies. In addition, ranking is also pivotal for many other information retrieval applications, such as collaborative filtering, definition ranking, question answering, multimedia retrieval, text summarization, and online advertisement. Leveraging machine learning technologies in the ranking process has led to innovative and more effective ranking models, and eventually to a completely new research area called "learning to rank". Liu first gives a comprehensive review of the major approaches to learning to rank. For each approach he presents the basic framework, with example algorithms, and he discusses its advantages and disadvantages. He continues with some recent advances in learning to rank that cannot be simply categorized into the three major approaches - these include relational ranking, query-dependent ranking, transfer ranking, and semisupervised ranking. His presentation is completed by several examples that apply these technologies to solve real information retrieval problems, and by theoretical discussions on guarantees for ranking performance. This book is written for researchers and graduate students in both information retrieval and machine learning. They will find here the only comprehensive description of the state of the art in a field that has driven the recent advances in search engine development.
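The pairwise flavour of learning to rank mentioned above can be sketched in a few lines: train a linear scoring function so that, for each (better, worse) document pair, the better document gets the higher score. This is a minimal illustrative sketch with made-up feature vectors, not the book's algorithms.

```python
# Minimal sketch of pairwise learning to rank: learn a linear
# scoring function w so that documents ranked above others in the
# training pairs also score higher. All data here is hypothetical.

def train_pairwise_ranker(pairs, n_features, epochs=100, lr=0.1):
    """pairs: list of (better, worse) feature-vector tuples."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            # Perceptron-style update when the pair is mis-ordered.
            margin = sum(wi * (b - c) for wi, b, c in zip(w, better, worse))
            if margin <= 0:
                for i in range(n_features):
                    w[i] += lr * (better[i] - worse[i])
    return w

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Toy example: feature 0 is "query-term match", feature 1 is noise.
pairs = [((1.0, 0.2), (0.1, 0.9)),
         ((0.8, 0.5), (0.2, 0.4))]
w = train_pairwise_ranker(pairs, 2)
assert score(w, (1.0, 0.2)) > score(w, (0.1, 0.9))
```

Real rankers use richer features and losses (e.g. gradient-boosted trees with pairwise or listwise objectives), but the ordering constraint above is the core idea.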
This book constitutes the proceedings of the 25th International Conference on Algorithmic Learning Theory, ALT 2014, held in Bled, Slovenia, in October 2014, and co-located with the 17th International Conference on Discovery Science, DS 2014. The 21 papers presented in this volume were carefully reviewed and selected from 50 submissions. In addition the book contains 4 full papers summarizing the invited talks. The papers are organized in topical sections named: inductive inference; exact learning from queries; reinforcement learning; online learning and learning with bandit information; statistical learning theory; privacy, clustering, MDL, and Kolmogorov complexity.
This three-volume set LNAI 8724, 8725 and 8726 constitutes the refereed proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: ECML PKDD 2014, held in Nancy, France, in September 2014. The 115 revised research papers presented together with 13 demo track papers, 10 nectar track papers, 8 PhD track papers, and 9 invited talks were carefully reviewed and selected from 550 submissions. The papers cover the latest high-quality interdisciplinary research results in all areas related to machine learning and knowledge discovery in databases.
Information theory has proved to be effective for solving many computer vision and pattern recognition (CVPR) problems (such as image matching, clustering and segmentation, saliency detection, feature selection, optimal classifier design and many others). Nowadays, researchers are widely bringing information theory elements to the CVPR arena. Among these elements are measures (entropy, mutual information...), principles (maximum entropy, minimax entropy...) and theories (rate distortion theory, method of types...). This book introduces these elements through an incremental-complexity approach, while CVPR problems are formulated and the most representative algorithms are presented. Interesting connections between information theory principles as applied to different problems are highlighted, seeking a comprehensive research roadmap. The result is a novel tool both for CVPR and machine learning researchers, and contributes to a cross-fertilization of both areas.
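Two of the measures named above, entropy and mutual information, can be computed directly for discrete distributions. A small illustrative sketch (hand-written probability tables, not the book's estimators, which work on image statistics):

```python
# Shannon entropy and mutual information for discrete distributions.
from math import log2

def entropy(p):
    """Shannon entropy H(X) in bits of a probability vector p."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution given as a 2D table."""
    px = [sum(row) for row in joint]          # marginal of X
    py = [sum(col) for col in zip(*joint)]    # marginal of Y
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pij in enumerate(row):
            if pij > 0:
                mi += pij * log2(pij / (px[i] * py[j]))
    return mi

# A fair coin has exactly one bit of entropy.
assert abs(entropy([0.5, 0.5]) - 1.0) < 1e-9
# Independent variables carry zero mutual information.
assert abs(mutual_information([[0.25, 0.25], [0.25, 0.25]])) < 1e-9
# If Y is a copy of X, I(X;Y) equals H(X).
assert abs(mutual_information([[0.5, 0.0], [0.0, 0.5]]) - 1.0) < 1e-9
```

In CVPR these quantities are typically estimated from histograms of pixel intensities or features, e.g. mutual information as a similarity score in image registration.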
This application-oriented book includes debugged and efficient C implementations of real-world algorithms, in a variety of languages and environments, offering unique coverage of embedded image processing. It covers TI technologies and applies them to an important market, featuring the C6416 DSK in particular; the EVM is also covered, though the much more recent C6416 DSK receives the most attention. Algorithms treated here are frequently missing from other image processing texts, notably the wavelet material of Chapter 6; efficient fixed-point implementations of wavelet-based algorithms are also treated. The book provides numerous Visual Studio .NET 2003 C/C++ examples that show how to use MFC, GDI+, and the Intel IPP library to prototype image processing applications.
This book constitutes the proceedings of the International Conference on Adaptive and Intelligent Systems, ICAIS 2014, held in Bournemouth, UK, in September 2014. The 19 full papers included in these proceedings together with the abstracts of 4 invited talks, were carefully reviewed and selected from 32 submissions. The contributions are organized under the following topical sections: advances in feature selection; clustering and classification; adaptive optimization; advances in time series analysis.
This book constitutes the refereed proceedings of the 13th International Conference on Parallel Problem Solving from Nature, PPSN 2014, held in Ljubljana, Slovenia, in September 2014. A total of 90 revised full papers were carefully reviewed and selected from 217 submissions. The meeting began with 7 workshops, which offered an ideal opportunity to explore specific topics in evolutionary computation, bio-inspired computing and metaheuristics. PPSN XIII also included 9 tutorials. The papers are organized in topical sections on adaption, self-adaption and parameter tuning; classifier systems, differential evolution and swarm intelligence; coevolution and artificial immune systems; constraint handling; dynamic and uncertain environments; estimation of distribution algorithms and metamodelling; genetic programming; multi-objective optimisation; parallel algorithms and hardware implementations; real world applications; and theory.
This book constitutes the refereed proceedings of the 7th International Conference on Similarity Search and Applications, SISAP 2014, held in A Coruna, Spain, in October 2014. The 21 full papers and 6 short papers presented were carefully reviewed and selected from 45 submissions. The papers are organized in topical sections on Improving Similarity Search Methods and Techniques; Indexing and Applications; Metrics and Evaluation; New Scenarios and Approaches; Applications and Specific Domains.
The LNCS journal Transactions on Rough Sets is devoted to the entire spectrum of rough sets related issues, from logical and mathematical foundations, through all aspects of rough set theory and its applications, such as data mining, knowledge discovery, and intelligent information processing, to relations between rough sets and other approaches to uncertainty, vagueness, and incompleteness, such as fuzzy sets and theory of evidence. Volume XVIII includes extensions of papers from the Joint Rough Set Symposium (JRS 2012), which was held in Chengdu, China, in August 2012. The seven papers that constitute this volume deal with topics such as: rough fuzzy sets, intuitionistic fuzzy sets, multi-granulation rough sets, decision-theoretic rough sets, three-way decisions and their applications in attribute reduction, feature selection, overlapping clustering, data mining, cost-sensitive learning, face recognition, and spam filtering.
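The lower and upper approximations mentioned in the blurb above are simple to state concretely: given a partition of the universe into equivalence classes (granules), the lower approximation collects the granules fully inside the target set, and the upper approximation collects those that merely touch it. A minimal sketch with made-up data:

```python
# Minimal sketch of rough-set lower/upper approximations.

def approximations(granules, target):
    """Return (lower, upper) approximations of `target`.

    granules: partition of the universe into equivalence classes.
    lower: union of classes fully contained in target (certain members).
    upper: union of classes intersecting target (possible members).
    """
    lower, upper = set(), set()
    for g in granules:
        g = set(g)
        if g <= target:
            lower |= g
        if g & target:
            upper |= g
    return lower, upper

# Universe {1..6} partitioned by some attribute; target set {1, 2, 3}.
granules = [{1, 2}, {3, 4}, {5, 6}]
lower, upper = approximations(granules, {1, 2, 3})
assert lower == {1, 2}          # certainly in the set
assert upper == {1, 2, 3, 4}    # possibly in the set
```

The gap between the two approximations (here {3, 4}) is the boundary region: elements the available attributes cannot classify with certainty, which is exactly what makes rough sets useful on incomplete data.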
This book constitutes the refereed proceedings of the 15th IFIP TC 6/TC 11 International Conference on Communications and Multimedia Security, CMS 2014, held in Aveiro, Portugal, in September 2014. The 4 revised full papers presented together with 6 short papers, 3 extended abstracts describing the posters that were discussed at the conference, and 2 keynote talks were carefully reviewed and selected from 22 submissions. The papers are organized in topical sections on vulnerabilities and threats, identification and authentication, and applied security.
Automatic personal authentication using biometric information is becoming more essential in applications of public security, access control, forensics, banking, etc. Many kinds of biometric authentication techniques have been developed based on different biometric characteristics. However, most of the physical biometric recognition techniques are based on two-dimensional (2D) images, despite the fact that human characteristics are three-dimensional (3D) surfaces. Recently, 3D techniques have been applied to biometric applications such as 3D face, 3D palmprint, 3D fingerprint, and 3D ear recognition. This book introduces four typical 3D imaging methods, and presents some case studies in the field of 3D biometrics. It also includes many efficient 3D feature extraction, matching, and fusion algorithms. The 3D imaging methods and their applications are as follows:
- Single view imaging with line structured-light: 3D ear identification
- Single view imaging with multi-line structured-light: 3D palmprint authentication
- Single view imaging using only a 3D camera: 3D hand verification
- Multi-view imaging: 3D fingerprint recognition
3D Biometrics: Systems and Applications is a comprehensive introduction to both theoretical issues and practical implementation in 3D biometric authentication. It will serve as a textbook or as a useful reference for graduate students and researchers in the fields of computer science, electrical engineering, systems science, and information technology. Researchers and practitioners in industry and R&D laboratories working on security system design, biometrics, immigration, law enforcement, control, and pattern recognition will also find much of interest in this book.
Micromechanical manufacturing based on microequipment creates new possibilities in goods production. If microequipment sizes are comparable to the sizes of the microdevices to be produced, it is possible to decrease the cost of production drastically. The main components of the production cost - material, energy, space consumption, equipment, and maintenance - decrease with the scaling down of equipment sizes. To obtain really inexpensive production, labor costs must be reduced to almost zero. For this purpose, fully automated microfactories will be developed. To create fully automated microfactories, we propose using artificial neural networks having different structures. The simplest perceptron-like neural network can be used at the lowest levels of microfactory control systems. Adaptive Critic Design, based on neural network models of the microfactory objects, can be used for manufacturing process optimization, while associative-projective neural networks and networks like ART could be used for the highest levels of control systems. We have examined the performance of different neural networks in traditional image recognition tasks and in problems that appear in micromechanical manufacturing. We and our colleagues also have developed an approach to microequipment creation in the form of sequential generations. Each subsequent generation must be of a smaller size than the previous ones and must be made by previous generations. Prototypes of first-generation microequipment have been developed and assessed.
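The "simplest perceptron-like neural network" referred to above is the classic threshold unit with an error-driven weight update. A self-contained sketch on a toy linearly separable task (a logical AND, chosen here purely for illustration):

```python
# Classic perceptron: a threshold unit trained with error-driven
# weight updates. The logical-AND training data is illustrative only.

def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (inputs, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            if err:
                # Shift the decision boundary toward the mistake.
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
assert all(predict(w, b, x) == y for x, y in data)
```

On linearly separable data the perceptron is guaranteed to converge, which is why such units are plausible building blocks for simple low-level controllers.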
This book considers biometric technology in a broad light, integrating the concept seamlessly into mainstream IT, while discussing the cultural attitudes and the societal impact of identity management. Features: summarizes the material covered at the beginning of every chapter, and provides chapter-ending review questions and discussion points; reviews identity verification in nature, and early historical interest in anatomical measurement; provides an overview of biometric technology, presents a focus on biometric systems and true systems integration, examines the concept of identity management, and predicts future trends; investigates performance issues in biometric systems, the management and security of biometric data, and the impact of mobile devices on biometrics technology; explains the equivalence of performance across operational nodes, introducing the APEX system; considers the legal, political and societal factors of biometric technology, in addition to user psychology and other human factors.
A natural evolution of statistical signal processing, in connection with the progressive increase in computational power, has been the exploitation of higher-order information. Thus, high-order spectral analysis and nonlinear adaptive filtering have received the attention of many researchers. One of the most successful techniques for nonlinear processing of data with complex non-Gaussian distributions is independent component analysis mixture modelling (ICAMM). This thesis defines a novel formalism for pattern recognition and classification based on ICAMM, which unifies a number of pattern recognition tasks and allows generalization. The versatile and powerful framework developed in this work can deal with data obtained from quite different areas, such as image processing, impact-echo testing, cultural heritage, hypnogram analysis, and web mining, and might therefore be employed to solve many different real-world problems.
This book constitutes the refereed proceedings of the 7th International Conference on Artificial General Intelligence, AGI 2014, held in Quebec City, QC, Canada, in August 2014. The 22 papers and 8 posters were carefully reviewed and selected from 65 submissions. Researchers have recognized the necessity of returning to the original goals of the field by treating intelligence as a whole. Increasingly, there is a call for a transition back to confronting the more difficult issues of "human-level intelligence" and, more broadly, artificial general intelligence. AGI research differs from ordinary AI research by stressing the versatility and wholeness of intelligence, and by carrying out the engineering practice according to an outline of a system comparable to the human mind in a certain sense. The AGI conference series has played, and continues to play, a significant role in this resurgence of research on artificial intelligence in the deeper, original sense of the term. The conferences encourage interdisciplinary research based on different understandings of intelligence and explore different approaches.
This book constitutes the refereed proceedings of the 10th International Symposium on Bioinformatics Research and Applications, ISBRA 2014, held in Zhangjiajie, China, in June 2014. The 33 revised full papers and 31 one-page abstracts included in this volume were carefully reviewed and selected from 119 submissions. The papers cover a wide range of topics in bioinformatics and computational biology and their applications including the development of experimental or commercial systems.
This book comprises chapters on key problems in the machine learning and signal processing arenas. The contents of the book are a result of the 2014 Workshop on Machine Intelligence and Signal Processing held at the Indraprastha Institute of Information Technology. Traditionally, signal processing and machine learning were considered to be separate areas of research. In recent times, however, the two communities have been getting closer. In a very abstract fashion, signal processing is the study of operator design: its contribution has been to devise operators for restoration, compression, etc., while applied mathematicians were more interested in operator analysis. Nowadays signal processing research is gravitating towards operator learning - instead of designing operators based on heuristics (for example wavelets), the trend is to learn these operators (for example dictionary learning). Thus, the gap between signal processing and machine learning is fast closing. The 2014 Workshop on Machine Intelligence and Signal Processing was one of the few events focused on the convergence of the two fields, and this book comprises chapters based on the top presentations at the workshop. It has three chapters on various topics in biometrics - two on face detection and one on iris recognition, all from top researchers in their field. There are four chapters on different biomedical signal/image processing problems: two on retinal vessel classification and extraction, one on biomedical signal acquisition, and one on region detection. There are three chapters on data analysis - a topic gaining immense popularity in industry and academia. One of these shows a novel use of compressed sensing in interpolating missing sales data; another is on spam detection; and the third is on simple one-shot movie rating prediction.
Four other chapters cover various cutting-edge topics: character recognition, software effort prediction, speech recognition and non-linear sparse recovery. The contents of this book will prove useful to researchers, professionals and students in the domains of machine learning and signal processing.
Fuzzy classifiers are important tools in exploratory data analysis, which is a vital set of methods used in various engineering, scientific and business applications. Fuzzy classifiers use fuzzy rules and do not require assumptions common to statistical classification. Rough set theory is useful when data sets are incomplete. It defines a formal approximation of crisp sets by providing the lower and the upper approximation of the original set. Systems based on rough sets have a natural ability to work on such data, and incomplete vectors do not have to be preprocessed before classification. To achieve better performance than existing machine learning systems, fuzzy classifiers and rough sets can be combined in ensembles. Such ensembles consist of a finite set of learning models, usually weak learners. The present book discusses the three aforementioned fields - fuzzy systems, rough sets and ensemble techniques. As the trained ensemble should represent a single hypothesis, a lot of attention is placed on the possibility of combining fuzzy rules from the fuzzy systems that are members of a classification ensemble. Furthermore, an emphasis is placed on ensembles that can work on incomplete data, thanks to rough set theory.
This groundbreaking text examines the problem of user authentication from a completely new viewpoint. Rather than describing the requirements, technologies and implementation issues of designing point-of-entry authentication, the book introduces and investigates the technological requirements of implementing transparent user authentication - where authentication credentials are captured during a user's normal interaction with a system. This approach would transform user authentication from a binary point-of-entry decision to a continuous identity confidence measure. Topics and features: discusses the need for user authentication; reviews existing authentication approaches; introduces novel behavioural biometrics techniques; examines the wider system-specific issues with designing large-scale multimodal authentication systems; concludes with a look to the future of user authentication.
Biometric recognition, or simply biometrics, is the science of establishing the identity of a person based on physical or behavioral attributes. It is a rapidly evolving field with applications ranging from securely accessing one's computer to gaining entry into a country. While the deployment of large-scale biometric systems in both commercial and government applications has increased the public awareness of this technology, "Introduction to Biometrics" is the first textbook to introduce the fundamentals of biometrics to undergraduate/graduate students. The three commonly used modalities in the biometrics field, namely fingerprint, face, and iris, are covered in detail in this book. A few other modalities, such as hand geometry, ear, and gait, are also discussed briefly, along with advanced topics such as multibiometric systems and the security of biometric systems. Exercises for each chapter will be available on the book website to help students gain a better understanding of the topics and obtain practical experience in designing computer programs for biometric applications. These can be found at: http://www.csee.wvu.edu/~ross/BiometricsTextBook/. Designed for undergraduate and graduate students in computer science and electrical engineering, "Introduction to Biometrics" is also suitable for researchers and biometric and computer security professionals.