This book constitutes the proceedings of the 11th International Conference on Modeling Decisions for Artificial Intelligence, MDAI 2014, held in Tokyo, Japan, in October 2014. The 19 revised full papers presented together with an invited paper were carefully reviewed and selected from 38 submissions. They deal with the theory and tools for modeling decisions, as well as applications that encompass decision making processes and information fusion techniques and are organized in topical sections on aggregation operators and decision making, optimization, clustering and similarity, and data mining and data privacy.
Due to the fast growth of the Web and the difficulties in finding desired information, efficient and effective information retrieval systems have become more important than ever, and the search engine has become an essential tool for many people. The ranker, a central component in every search engine, is responsible for the matching between processed queries and indexed documents. Because of its central role, great attention has been paid to the research and development of ranking technologies. In addition, ranking is also pivotal for many other information retrieval applications, such as collaborative filtering, definition ranking, question answering, multimedia retrieval, text summarization, and online advertisement. Leveraging machine learning technologies in the ranking process has led to innovative and more effective ranking models, and eventually to a completely new research area called "learning to rank". Liu first gives a comprehensive review of the major approaches to learning to rank. For each approach he presents the basic framework, with example algorithms, and he discusses its advantages and disadvantages. He continues with some recent advances in learning to rank that cannot be simply categorized into the three major approaches - these include relational ranking, query-dependent ranking, transfer ranking, and semisupervised ranking. His presentation is completed by several examples that apply these technologies to solve real information retrieval problems, and by theoretical discussions on guarantees for ranking performance. This book is written for researchers and graduate students in both information retrieval and machine learning. They will find here the only comprehensive description of the state of the art in a field that has driven the recent advances in search engine development.
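One of the major approaches the blurb alludes to is the pairwise one, which recasts ranking as classifying document pairs. As a loose, hedged illustration of that idea (the hinge-style loss and the score values below are illustrative assumptions, not algorithms taken from the book):

```python
# Pairwise learning-to-rank idea in miniature: a relevant document
# should score higher than an irrelevant one by some margin.
# Scores here are made-up numbers, not from any trained model.

def pairwise_hinge_loss(score_relevant, score_irrelevant, margin=1.0):
    """Hinge loss penalizing a relevant document that fails to
    outscore an irrelevant one by at least `margin`."""
    return max(0.0, margin - (score_relevant - score_irrelevant))

# A correctly ordered pair with a wide margin incurs no loss,
# while an inverted pair is penalized in proportion to the error.
no_loss = pairwise_hinge_loss(2.0, 0.5)   # ordered: loss 0.0
penalty = pairwise_hinge_loss(0.2, 1.0)   # inverted: positive loss
```

Training then amounts to adjusting the scoring function to drive such pairwise losses down across all labeled query-document pairs.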
This book constitutes the proceedings of the 25th International Conference on Algorithmic Learning Theory, ALT 2014, held in Bled, Slovenia, in October 2014, and co-located with the 17th International Conference on Discovery Science, DS 2014. The 21 papers presented in this volume were carefully reviewed and selected from 50 submissions. In addition the book contains 4 full papers summarizing the invited talks. The papers are organized in topical sections named: inductive inference; exact learning from queries; reinforcement learning; online learning and learning with bandit information; statistical learning theory; privacy, clustering, MDL, and Kolmogorov complexity.
This three-volume set LNAI 8724, 8725 and 8726 constitutes the refereed proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: ECML PKDD 2014, held in Nancy, France, in September 2014. The 115 revised research papers presented together with 13 demo track papers, 10 nectar track papers, 8 PhD track papers, and 9 invited talks were carefully reviewed and selected from 550 submissions. The papers cover the latest high-quality interdisciplinary research results in all areas related to machine learning and knowledge discovery in databases.
Information theory has proved effective for solving many computer vision and pattern recognition (CVPR) problems, such as image matching, clustering and segmentation, saliency detection, feature selection, optimal classifier design and many others. Nowadays, researchers are widely bringing information theory elements into the CVPR arena. Among these elements are measures (entropy, mutual information...), principles (maximum entropy, minimax entropy...) and theories (rate distortion theory, the method of types...). This book introduces these elements through an approach of incremental complexity, formulating CVPR problems and presenting the most representative algorithms along the way. Interesting connections between information theory principles applied to different problems are highlighted, seeking a comprehensive research roadmap. The result is a novel tool for both CVPR and machine learning researchers, and contributes to a cross-fertilization of the two areas.
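The measures named above are simple to compute on discrete distributions. A minimal sketch (the coin distributions are illustrative values, not examples from the book):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution,
    given as a list of probabilities summing to 1."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# A fair coin is maximally unpredictable for two outcomes: 1 bit.
h_fair = entropy([0.5, 0.5])
# A biased coin is more predictable, hence lower entropy (~0.47 bits).
h_biased = entropy([0.9, 0.1])
# A certain outcome carries no information: 0 bits.
h_certain = entropy([1.0])
```

Mutual information, the other measure mentioned, is built from the same quantity: I(X;Y) = H(X) + H(Y) - H(X,Y).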
This application-oriented book includes debugged and efficient C implementations of real-world algorithms, in a variety of languages and environments, offering unique coverage of embedded image processing. It covers TI technologies and applies them to an important market, featuring the C6416 DSK, a much more recent DSP, and also covers the EVM. Algorithms treated here are frequently missing from other image processing texts, in particular wavelets (Chapter 6), including efficient fixed-point implementations of wavelet-based algorithms. The book provides numerous Visual Studio .NET 2003 C/C++ code examples that show how to use MFC, GDI+, and the Intel IPP library to prototype image processing applications.
This book constitutes the proceedings of the International Conference on Adaptive and Intelligent Systems, ICAIS 2014, held in Bournemouth, UK, in September 2014. The 19 full papers included in these proceedings, together with the abstracts of 4 invited talks, were carefully reviewed and selected from 32 submissions. The contributions are organized under the following topical sections: advances in feature selection; clustering and classification; adaptive optimization; advances in time series analysis.
This book constitutes the refereed proceedings of the 13th International Conference on Parallel Problem Solving from Nature, PPSN 2014, held in Ljubljana, Slovenia, in September 2014. A total of 90 revised full papers were carefully reviewed and selected from 217 submissions. The meeting began with 7 workshops, which offered an ideal opportunity to explore specific topics in evolutionary computation, bio-inspired computing and metaheuristics. PPSN XIII also included 9 tutorials. The papers are organized in topical sections on adaption, self-adaption and parameter tuning; classifier systems, differential evolution and swarm intelligence; coevolution and artificial immune systems; constraint handling; dynamic and uncertain environments; estimation of distribution algorithms and metamodelling; genetic programming; multi-objective optimisation; parallel algorithms and hardware implementations; real world applications; and theory.
This book constitutes the refereed proceedings of the 7th International Conference on Similarity Search and Applications, SISAP 2014, held in A Coruna, Spain, in October 2014. The 21 full papers and 6 short papers presented were carefully reviewed and selected from 45 submissions. The papers are organized in topical sections on Improving Similarity Search Methods and Techniques; Indexing and Applications; Metrics and Evaluation; New Scenarios and Approaches; Applications and Specific Domains.
The LNCS journal Transactions on Rough Sets is devoted to the entire spectrum of rough sets related issues, from logical and mathematical foundations, through all aspects of rough set theory and its applications, such as data mining, knowledge discovery, and intelligent information processing, to relations between rough sets and other approaches to uncertainty, vagueness, and incompleteness, such as fuzzy sets and theory of evidence. Volume XVIII includes extensions of papers from the Joint Rough Set Symposium (JRS 2012), which was held in Chengdu, China, in August 2012. The seven papers that constitute this volume deal with topics such as: rough fuzzy sets, intuitionistic fuzzy sets, multi-granulation rough sets, decision-theoretic rough sets, three-way decisions and their applications in attribute reduction, feature selection, overlapping clustering, data mining, cost-sensitive learning, face recognition, and spam filtering.
This book constitutes the refereed proceedings of the 15th IFIP TC 6/TC 11 International Conference on Communications and Multimedia Security, CMS 2014, held in Aveiro, Portugal, in September 2014. The 4 revised full papers presented together with 6 short papers, 3 extended abstracts describing the posters that were discussed at the conference, and 2 keynote talks were carefully reviewed and selected from 22 submissions. The papers are organized in topical sections on vulnerabilities and threats, identification and authentication, and applied security.
Automatic personal authentication using biometric information is becoming more essential in applications of public security, access control, forensics, banking, etc. Many kinds of biometric authentication techniques have been developed based on different biometric characteristics. However, most physical biometric recognition techniques are based on two-dimensional (2D) images, despite the fact that human characteristics are three-dimensional (3D) surfaces. Recently, 3D techniques have been applied to biometric applications such as 3D face, 3D palmprint, 3D fingerprint, and 3D ear recognition. This book introduces four typical 3D imaging methods, presents case studies in the field of 3D biometrics, and includes many efficient 3D feature extraction, matching, and fusion algorithms. The 3D imaging methods and their applications are:
- Single-view imaging with line structured light: 3D ear identification
- Single-view imaging with multi-line structured light: 3D palmprint authentication
- Single-view imaging using only a 3D camera: 3D hand verification
- Multi-view imaging: 3D fingerprint recognition
3D Biometrics: Systems and Applications is a comprehensive introduction to both theoretical issues and practical implementation in 3D biometric authentication. It will serve as a textbook or as a useful reference for graduate students and researchers in the fields of computer science, electrical engineering, systems science, and information technology. Researchers and practitioners in industry and R&D laboratories working on security system design, biometrics, immigration, law enforcement, control, and pattern recognition will also find much of interest in this book.
Micromechanical manufacturing based on microequipment creates new possibilities in goods production. If microequipment sizes are comparable to the sizes of the microdevices to be produced, it is possible to decrease the cost of production drastically. The main components of the production cost - material, energy, space consumption, equipment, and maintenance - decrease with the scaling down of equipment sizes. To obtain really inexpensive production, labor costs must be reduced to almost zero. For this purpose, fully automated microfactories will be developed. To create fully automated microfactories, we propose using artificial neural networks having different structures. The simplest perceptron-like neural network can be used at the lowest levels of microfactory control systems. Adaptive Critic Design, based on neural network models of the microfactory objects, can be used for manufacturing process optimization, while associative-projective neural networks and networks like ART could be used for the highest levels of control systems. We have examined the performance of different neural networks in traditional image recognition tasks and in problems that appear in micromechanical manufacturing. We and our colleagues also have developed an approach to microequipment creation in the form of sequential generations. Each subsequent generation must be of a smaller size than the previous ones and must be made by previous generations. Prototypes of first-generation microequipment have been developed and assessed.
This book considers biometric technology in a broad light, integrating the concept seamlessly into mainstream IT, while discussing the cultural attitudes and the societal impact of identity management. Features: summarizes the material covered at the beginning of every chapter, and provides chapter-ending review questions and discussion points; reviews identity verification in nature, and early historical interest in anatomical measurement; provides an overview of biometric technology, presents a focus on biometric systems and true systems integration, examines the concept of identity management, and predicts future trends; investigates performance issues in biometric systems, the management and security of biometric data, and the impact of mobile devices on biometrics technology; explains the equivalence of performance across operational nodes, introducing the APEX system; considers the legal, political and societal factors of biometric technology, in addition to user psychology and other human factors.
A natural evolution of statistical signal processing, in connection with the progressive increase in computational power, has been the exploitation of higher-order information. Thus, high-order spectral analysis and nonlinear adaptive filtering have received the attention of many researchers. One of the most successful techniques for non-linear processing of data with complex non-Gaussian distributions is independent component analysis mixture modelling (ICAMM). This thesis defines a novel formalism for pattern recognition and classification based on ICAMM, which unifies a number of pattern recognition tasks and allows generalization. The versatile and powerful framework developed in this work can deal with data obtained from quite different areas, such as image processing, impact-echo testing, cultural heritage, hypnogram analysis, and web mining, and might therefore be employed to solve many different real-world problems.
This book constitutes the refereed proceedings of the 7th International Conference on Artificial General Intelligence, AGI 2014, held in Quebec City, QC, Canada, in August 2014. The 22 papers and 8 posters were carefully reviewed and selected from 65 submissions. Researchers have recognized the necessity of returning to the original goals of the field by treating intelligence as a whole. Increasingly, there is a call for a transition back to confronting the more difficult issues of "human-level intelligence" and, more broadly, artificial general intelligence. AGI research differs from ordinary AI research by stressing the versatility and wholeness of intelligence, and by carrying out the engineering practice according to an outline of a system comparable to the human mind in a certain sense. The AGI conference series has played, and continues to play, a significant role in this resurgence of research on artificial intelligence in the deeper, original sense of the term. The conferences encourage interdisciplinary research based on different understandings of intelligence, and explore different approaches.
This book constitutes the refereed proceedings of the 10th International Symposium on Bioinformatics Research and Applications, ISBRA 2014, held in Zhangjiajie, China, in June 2014. The 33 revised full papers and 31 one-page abstracts included in this volume were carefully reviewed and selected from 119 submissions. The papers cover a wide range of topics in bioinformatics and computational biology and their applications including the development of experimental or commercial systems.
This book comprises chapters on key problems in the machine learning and signal processing arenas. Its contents are the result of the 2014 Workshop on Machine Intelligence and Signal Processing held at the Indraprastha Institute of Information Technology. Traditionally, signal processing and machine learning were considered separate areas of research. In recent times, however, the two communities have been getting closer. In a very abstract fashion, signal processing is the study of operator design: the contribution of signal processing has been to devise operators for restoration, compression, and so on, while applied mathematicians were more interested in operator analysis. Nowadays signal processing research is gravitating towards operator learning - instead of designing operators based on heuristics (for example wavelets), the trend is to learn these operators (for example dictionary learning). Thus, the gap between signal processing and machine learning is fast closing. The 2014 Workshop on Machine Intelligence and Signal Processing was one of the few events focused on the convergence of the two fields, and the book comprises chapters based on the top presentations at the workshop. It has three chapters on various topics in biometrics - two on face detection and one on iris recognition, all from top researchers in their field. There are four chapters on different biomedical signal and image processing problems: two on retinal vessel classification and extraction, one on biomedical signal acquisition, and a fourth on region detection. There are three chapters on data analysis - a topic gaining immense popularity in industry and academia. One of these shows a novel use of compressed sensing in interpolating missing sales data; another is on spam detection; and the third is on simple one-shot movie rating prediction.
Four other chapters cover miscellaneous cutting-edge topics: character recognition, software effort prediction, speech recognition and non-linear sparse recovery. The contents of this book will prove useful to researchers, professionals and students in the domains of machine learning and signal processing.
Fuzzy classifiers are important tools in exploratory data analysis, which is a vital set of methods used in various engineering, scientific and business applications. Fuzzy classifiers use fuzzy rules and do not require the assumptions common to statistical classification. Rough set theory is useful when data sets are incomplete: it defines a formal approximation of crisp sets by providing the lower and upper approximations of the original set. Systems based on rough sets have a natural ability to work on such data, and incomplete vectors do not have to be preprocessed before classification. To achieve better performance than existing machine learning systems, fuzzy classifiers and rough sets can be combined in ensembles. Such ensembles consist of a finite set of learning models, usually weak learners. The present book discusses the three aforementioned fields - fuzzy systems, rough sets and ensemble techniques. As the trained ensemble should represent a single hypothesis, considerable attention is given to the possibility of combining fuzzy rules from the fuzzy systems that are members of a classification ensemble. Furthermore, emphasis is placed on ensembles that can work on incomplete data, thanks to rough set theory.
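The lower and upper approximations of rough set theory can be sketched in a few lines of Python. The universe, partition into equivalence classes, and target set below are made-up illustrative data, not an example from the book:

```python
# Rough-set approximations: given a partition of the universe into
# equivalence classes (objects indiscernible from one another),
# approximate a crisp target set from below and above.

def approximations(partition, target):
    """Return (lower, upper) approximations of `target` with respect
    to a partition of the universe into equivalence classes."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:      # block lies entirely inside the target
            lower |= block
        if block & target:       # block overlaps the target at all
            upper |= block
    return lower, upper

# Universe {1..6}, partitioned by some indiscernibility relation.
partition = [{1, 2}, {3, 4}, {5, 6}]
target = {1, 2, 3}               # the crisp set to approximate

lower, upper = approximations(partition, target)
# lower: objects certainly in the set; upper: objects possibly in it.
```

Objects in the upper but not the lower approximation form the boundary region - exactly the uncertainty that rough-set-based classifiers exploit when data are incomplete.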
This groundbreaking text examines the problem of user authentication from a completely new viewpoint. Rather than describing the requirements, technologies and implementation issues of designing point-of-entry authentication, the book introduces and investigates the technological requirements of implementing transparent user authentication - where authentication credentials are captured during a user's normal interaction with a system. This approach would transform user authentication from a binary point-of-entry decision to a continuous identity confidence measure. Topics and features: discusses the need for user authentication; reviews existing authentication approaches; introduces novel behavioural biometrics techniques; examines the wider system-specific issues with designing large-scale multimodal authentication systems; concludes with a look to the future of user authentication.
Biometric recognition, or simply biometrics, is the science of establishing the identity of a person based on physical or behavioral attributes. It is a rapidly evolving field with applications ranging from securely accessing one's computer to gaining entry into a country. While the deployment of large-scale biometric systems in both commercial and government applications has increased the public awareness of this technology, "Introduction to Biometrics" is the first textbook to introduce the fundamentals of biometrics to undergraduate/graduate students. The three commonly used modalities in the biometrics field, namely fingerprint, face, and iris, are covered in detail in this book. A few other modalities, such as hand geometry, ear, and gait, are also discussed briefly, along with advanced topics such as multibiometric systems and the security of biometric systems. Exercises for each chapter will be available on the book website to help students gain a better understanding of the topics and obtain practical experience in designing computer programs for biometric applications. These can be found at: http://www.csee.wvu.edu/~ross/BiometricsTextBook/. Designed for undergraduate and graduate students in computer science and electrical engineering, "Introduction to Biometrics" is also suitable for researchers and biometric and computer security professionals.
The amount of data in medical databases doubles every 20 months, and physicians are at a loss to analyze it. Traditional data analysis also has difficulty identifying outliers and patterns in big data and in data with multiple exposure/outcome variables, while analysis rules for surveys and questionnaires, currently common methods of data collection, are essentially missing. Consequently, proper data-based health decisions will soon be impossible. It is clearly time that medical and health professionals overcame their reluctance to use machine learning methods, and this was the main incentive for the authors to complete a series of three textbooks entitled "Machine Learning in Medicine Part One, Two and Three" (Springer, Heidelberg, Germany, 2012-2013), describing in a nonmathematical way over sixty machine learning methodologies as available in SPSS statistical software and other major software programs. Although the series was well received, it came to our attention that physicians and students often lacked the time to read the entire books, and requested a small book without background information and theoretical discussions, highlighting the technical details. For this reason we produced a 100-page cookbook, entitled "Machine Learning in Medicine - Cookbook One", with data examples available at extras.springer.com for self-assessment and with references to the above textbooks for background information. Already at the completion of that cookbook we came to realize that many essential methods were not covered. The current volume, entitled "Machine Learning in Medicine - Cookbook Two", is complementary to the first and is also intended to provide a more balanced view of the field; it is thus a must-read not only for physicians and students, but also for anyone involved in the process and progress of health and health care.
Like Machine Learning in Medicine - Cookbook One, the current work describes stepwise analyses of over twenty machine learning methods, likewise based on the three major machine learning methodologies: cluster methodologies (Chaps. 1-3), linear methodologies (Chaps. 4-11), and rules methodologies (Chaps. 12-20). The data files of the examples are given at extras.springer.com, as well as XML (Extensible Markup Language), SPS (Syntax) and ZIP (compressed) files for outcome predictions in future patients. In addition to condensed versions of the methods, fully described in the above three textbooks, an introduction is given to SPSS Modeler (SPSS' data mining workbench) in Chaps. 15, 18 and 19, while improved statistical methods, such as various automated analyses and Monte Carlo simulation models, appear in Chaps. 1, 5, 7 and 8. We should emphasize that all of the methods described have been successfully applied in practice by the authors, both of them professors in applied statistics and machine learning at the European Community College of Pharmaceutical Medicine in Lyon, France. We recommend the current work not only as a training companion for investigators and students, because of the many step-by-step analyses given, but also as a brief introductory text for jaded clinicians new to the methods. For the latter purpose, background and theoretical information have been replaced with appropriate references to the above textbooks, while single sections addressing "general purposes", "main scientific questions" and "conclusions" are given in their place. Finally, we will demonstrate that modern machine learning sometimes performs better than traditional statistics does. Machine learning may have few options for adjusting for confounding and interaction, but you can add propensity scores and interaction variables to almost any machine learning method.
It has been traditional in phonetic research to characterize monophthongs using a set of static formant frequencies, i.e., formant frequencies taken from a single time-point in the vowel or averaged over the time-course of the vowel. However, over the last twenty years a growing body of research has demonstrated that, at least for a number of dialects of North American English, vowels which are traditionally described as monophthongs often have substantial spectral change. Vowel inherent spectral change has been observed in speakers' productions, and has also been found to have a substantial effect on listeners' perception. In terms of acoustics, the traditional categorical distinction between monophthongs and diphthongs can be replaced by a gradient description of dynamic spectral patterns. This book includes chapters addressing various aspects of vowel inherent spectral change (VISC), including theoretical and experimental studies of the perceptually relevant aspects of VISC; the relationship between articulation (vocal-tract trajectories) and VISC; historical changes related to VISC; cross-dialect, cross-language, and cross-age-group comparisons of VISC; the effects of VISC on second-language speech learning; and the use of VISC in forensic voice comparison.
Consumer electronics (CE) devices, providing multimedia entertainment and enabling communication, have become ubiquitous in daily life. However, consumer interaction with such equipment currently requires the use of devices such as remote controls and keyboards, which are often inconvenient, ambiguous and non-interactive. An important challenge for the modern CE industry is the design of user interfaces for CE products that enable interactions which are natural, intuitive and fun. As many CE products are supplied with microphones and cameras, the exploitation of both audio and visual information for interactive multimedia is a growing field of research. Collecting together contributions from an international selection of experts, including leading researchers in industry, this unique text presents the latest advances in applications of multimedia interaction and user interfaces for consumer electronics. Covering issues of both multimedia content analysis and human-machine interaction, the book examines a wide range of techniques from computer vision, machine learning, audio and speech processing, communications, artificial intelligence and media technology. Topics and features: introduces novel computationally efficient algorithms to extract semantically meaningful audio-visual events; investigates modality allocation in intelligent multimodal presentation systems, taking into account the cognitive impacts of modality on human information processing; provides an overview on gesture control technologies for CE; presents systems for natural human-computer interaction, virtual content insertion, and human action retrieval; examines techniques for 3D face pose estimation, physical activity recognition, and video summary quality evaluation; discusses the features that characterize the new generation of CE and examines how web services can be integrated with CE products for improved user experience. 
This book is an essential resource for researchers and practitioners from both academia and industry working in areas of multimedia analysis, human-computer interaction and interactive user interfaces. Graduate students studying computer vision, pattern recognition and multimedia will also find this a useful reference.
Cross-disciplinary biometric systems help boost the performance of conventional systems. Not only is recognition accuracy significantly improved, but the robustness of the systems is also greatly enhanced in challenging environments, such as varying illumination conditions. By leveraging cross-disciplinary technologies, face recognition systems, fingerprint recognition systems, iris recognition systems, and image search systems all benefit in terms of recognition performance. Take face recognition as an example: it is not only the most natural way human beings recognize each other's identity, but also the least privacy-intrusive means, because people show their face publicly every day. Face recognition systems display superb performance when they capitalize on innovative ideas across color science, mathematics, and computer science (e.g., pattern recognition, machine learning, and image processing). These novel ideas lead to the development of new color models and effective color features in color science; innovative features from wavelets and statistics, and new kernel methods and novel kernel models in mathematics; and, in computer science, new discriminant analysis frameworks, novel similarity measures, and new image analysis methods, such as fusing multiple image features from the frequency, spatial, and color domains, as well as system design, new strategies for system integration, and different fusion strategies, such as feature-level fusion, decision-level fusion, and new fusion strategies with novel similarity measures.