A thought-provoking look at statistical learning theory and its role in understanding human learning and inductive reasoning. A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, "An Elementary Introduction to Statistical Learning Theory" is a comprehensive and accessible primer on the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Explaining these areas at a level and in a way that is not often found in other books on the topic, the authors present the basic theory behind contemporary machine learning and uniquely utilize its foundations as a framework for philosophical thinking about inductive inference. Promoting the fundamental goal of statistical learning, namely knowing what is achievable and what is not, this book demonstrates the value of a systematic methodology when used along with the needed techniques for evaluating the performance of a learning system. First, an introduction to machine learning is presented that includes brief discussions of applications such as image recognition, speech recognition, medical diagnostics, and statistical arbitrage. To enhance accessibility, two chapters on relevant aspects of probability theory are provided. Subsequent chapters feature coverage of topics such as the pattern recognition problem, the optimal Bayes decision rule, the nearest neighbor rule, kernel rules, neural networks, support vector machines, and boosting. Appendices throughout the book explore the relationship between the discussed material and related topics from mathematics, philosophy, psychology, and statistics, drawing insightful connections between problems in these areas and statistical learning theory. All chapters conclude with a summary section, a set of practice questions, and a reference section that supplies historical notes and additional resources for further study. "An Elementary Introduction to Statistical Learning Theory" is an excellent book for courses on statistical learning theory, pattern recognition, and machine learning at the upper-undergraduate and graduate levels. It also serves as an introductory reference for researchers and practitioners in the fields of engineering, computer science, philosophy, and cognitive science who would like to further their knowledge of the topic.
The ability to learn from experience is a fundamental requirement for intelligence. One of the most basic characteristics of human intelligence is that people can learn from problem solving, so that they become more adept at solving problems in a given domain as they gain experience. This book investigates how computers may be programmed so that they too can learn from experience. Specifically, the aim is to take a very general, but inefficient, problem solving system and train it on a set of problems from a given domain, so that it can transform itself into a specialized, efficient problem solver for that domain. Recently there has been considerable progress made on a knowledge-intensive learning approach, explanation-based learning (EBL), that brings us closer to this possibility. As demonstrated in this book, EBL can be used to analyze a problem solving episode in order to acquire control knowledge. Control knowledge guides the problem solver's search by indicating the best alternatives to pursue at each choice point. An EBL system can produce domain-specific control knowledge by explaining why the choices made during a problem solving episode were, or were not, appropriate.
Digital Image Enhancement and Reconstruction: Techniques and Applications explores different concepts and techniques used for the enhancement as well as reconstruction of low-quality images. Most real-life applications require good-quality images to achieve maximum performance; however, the quality of images captured in real-world scenarios is often very unsatisfactory. Most commonly, images are noisy, blurry, hazy, or of very low resolution, and hence need to pass through image enhancement and/or reconstruction algorithms before they can be processed by image analysis applications. This book comprehensively explores application-specific enhancement and reconstruction techniques including satellite image enhancement, face hallucination, low-resolution face recognition, medical image enhancement and reconstruction, reconstruction of underwater images, text image enhancement, biometrics, etc. Chapters present a detailed discussion of the challenges faced in handling each particular kind of image, an analysis of the best available solutions, and an exploration of applications and future directions. The book provides readers with a deep dive into denoising, dehazing, super-resolution, and the use of soft computing across a range of engineering applications.
Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation is devoted to a new paradigm for evolutionary computation, named estimation of distribution algorithms (EDAs). This new class of algorithms generalizes genetic algorithms by replacing the crossover and mutation operators with learning and sampling from the probability distribution of the best individuals of the population at each iteration of the algorithm. In this way, the relationships between the variables involved in the problem domain are explicitly and effectively captured and exploited. This text constitutes the first compilation and review of the techniques and applications of this new tool for performing evolutionary computation. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation is clearly divided into three parts. Part I is dedicated to the foundations of EDAs. In this part, after introducing some probabilistic graphical models (Bayesian and Gaussian networks), a review of existing EDA approaches is presented, as well as some new methods based on more flexible probabilistic graphical models. A mathematical modeling of discrete EDAs is also presented. Part II covers several applications of EDAs to some classical optimization problems: the travelling salesman problem, the job scheduling problem, and the knapsack problem. EDAs are also applied to the optimization of some well-known combinatorial and continuous functions. Part III presents the application of EDAs to solve some problems that arise in the machine learning field: feature subset selection, feature weighting in K-NN classifiers, rule induction, partial abductive inference in Bayesian networks, partitional clustering, and the search for optimal weights in artificial neural networks. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation is a useful and interesting tool for researchers working in the field of evolutionary computation and for engineers who face real-world optimization problems. This book may also be used by graduate students and researchers in computer science. '...I urge those who are interested in EDAs to study this well-crafted book today.' (David E. Goldberg, University of Illinois at Urbana-Champaign)
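The loop described in the blurb (sample a population from a probability model, select the best individuals, re-estimate the model from them) can be made concrete with a minimal univariate sketch. The following Python snippet is an illustrative UMDA-style example on the toy OneMax problem; it is not taken from the book, and all function names and parameter values are assumptions for exposition.

```python
import numpy as np

def umda(n_bits=40, pop_size=100, n_select=30, n_iters=50, seed=0):
    """Univariate EDA sketch for OneMax (maximize the number of ones in a bit string)."""
    rng = np.random.default_rng(seed)
    probs = np.full(n_bits, 0.5)  # independent Bernoulli model, one parameter per bit
    for _ in range(n_iters):
        pop = (rng.random((pop_size, n_bits)) < probs).astype(int)  # sample from the model
        fitness = pop.sum(axis=1)                                   # OneMax fitness
        best = pop[np.argsort(fitness)[-n_select:]]                 # select the best individuals
        probs = best.mean(axis=0).clip(0.05, 0.95)                  # re-estimate the model, keep some exploration
    return probs

if __name__ == "__main__":
    print(umda().round(2))  # the learned marginals drift toward 1, the OneMax optimum
```

Real EDAs covered in the book go further by learning dependencies between variables (e.g., via Bayesian or Gaussian networks) rather than a fully factorized model as in this sketch.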
Two key words for mechanical engineering in the future are Micro and Intelligence. It is well known that leadership in intelligence technology is a matter of vital importance for the future status of industrial society, and thus national research projects for intelligent materials, structures and machines have started not only in advanced countries, but also in developing countries. Materials and structures which have self-sensing, diagnosis and actuating systems are called intelligent or smart, and are of growing research interest in the world. In this situation, the IUTAM symposium on Dynamics of Advanced Materials and Smart Structures was a timely one. Smart materials and structures are those equipped with sensors and actuators to achieve their designed performance in a changing environment. They have complex structural properties and mechanical responses. Many engineering problems, such as interface and edge phenomena, mechanical and electro-magnetic interaction/coupling and sensing, actuating and control techniques, arise in the development of intelligent structures. Due to the multi-disciplinary nature of these problems, all of the classical sciences and technologies, such as applied mathematics, material science, solid and fluid mechanics, control techniques and others, must be assembled and used to solve them. IUTAM well understands the importance of this emerging technology. An IUTAM symposium on Smart Structures and Structronic Systems (Chaired by U.
This is the first study of Boko Haram to bring advanced data-driven machine learning models both to learning models capable of predicting a wide range of attacks carried out by Boko Haram and to developing data-driven policies to shape Boko Haram's behavior and reduce its attacks. The book also identifies conditions that predict sexual violence, suicide bombings and attempted bombings, abduction, arson, looting, and the targeting of government officials and security installations. After reducing Boko Haram's history to a spreadsheet containing monthly information about different types of attacks and the circumstances prevailing over a nine-year period, the book introduces Temporal Probabilistic (TP) rules that can be automatically learned from data and are easy to explain to policy makers and security experts. It additionally reports on over a year of forecasts made using the model in order to validate its predictive accuracy, and it introduces a policy computation method to rein in Boko Haram's attacks. Applied machine learning researchers and predictive modeling experts, as well as counter-terrorism, national and international security, public policy, and Africa experts, will find this book a valuable resource.
This book honours the outstanding contributions of Vladimir Vapnik, a rare example of a scientist for whom the following statements hold true simultaneously: his work led to the inception of a new field of research, the theory of statistical learning and empirical inference; he has lived to see the field blossom; and he is still as active as ever. He started analyzing learning algorithms in the 1960s and invented the first version of the generalized portrait algorithm. He later developed one of the most successful methods in machine learning, the support vector machine (SVM); more than just an algorithm, this was a new approach to learning problems, pioneering the use of functional analysis and convex optimization in machine learning. Part I of this book contains three chapters describing and witnessing some of Vladimir Vapnik's contributions to science. In the first chapter, Leon Bottou discusses the seminal paper published in 1968 by Vapnik and Chervonenkis that laid the foundations of statistical learning theory, and the second chapter is an English-language translation of that original paper. In the third chapter, Alexey Chervonenkis presents a first-hand account of the early history of SVMs and valuable insights into the first steps in the development of the SVM in the framework of the generalized portrait method. The remaining chapters, by leading scientists in domains such as statistics, theoretical computer science, and mathematics, address substantial topics in the theory and practice of statistical learning theory, including SVMs and other kernel-based methods, boosting, PAC-Bayesian theory, online and transductive learning, loss functions, learnable function classes, notions of complexity for function classes, multitask learning, and hypothesis selection. These contributions include historical and context notes, short surveys, and comments on future research directions. This book will be of interest to researchers, engineers, and graduate students engaged with all aspects of statistical learning.
This thesis discusses the privacy issues in speech-based applications such as biometric authentication, surveillance, and external speech processing services. Author Manas A. Pathak presents solutions for privacy-preserving speech processing applications such as speaker verification, speaker identification, and speech recognition. The author also introduces some of the tools from cryptography and machine learning, along with current techniques for improving the efficiency and scalability of the presented solutions. Experiments with prototype implementations of the solutions, measuring execution time and accuracy on standardized speech datasets, are also included in the text. Using the proposed framework, it may now be possible for a surveillance agency to listen for a known terrorist without being able to hear conversations of non-targeted, innocent civilians.
Human decision-making often transcends our formal models of "rationality." Designing intelligent agents that interact proficiently with people necessitates modeling human behavior and predicting human decisions. In this book, we explore the task of automatically predicting human decision-making and its use in designing intelligent human-aware automated computer systems of varying natures, from purely conflicting interaction settings (e.g., security and games) to fully cooperative interaction settings (e.g., autonomous driving and personal robotic assistants). We explore the techniques, algorithms, and empirical methodologies for meeting the challenges that arise from these tasks and illustrate the major benefits of these computational solutions in real-world application domains such as security, negotiations, argumentative interactions, voting systems, autonomous driving, and games. The book presents both traditional and classical methods as well as the most recent and cutting-edge advances, providing the reader with a panorama of the challenges and solutions in predicting human decision-making.
Foundations of Genetic Algorithms, Volume 6 is the latest in a series of books that records the prestigious Foundations of Genetic Algorithms Workshops, sponsored and organised by the International Society of Genetic Algorithms specifically to address theoretical publications on genetic algorithms and classifier systems.
One of the most intriguing questions about the new computer technology that has appeared over the past few decades is whether we humans will ever be able to make computers learn. As is painfully obvious to even the most casual computer user, most current computers do not. Yet if we could devise learning techniques that enable computers to routinely improve their performance through experience, the impact would be enormous. The result would be an explosion of new computer applications that would suddenly become economically feasible (e.g., personalized computer assistants that automatically tune themselves to the needs of individual users), and a dramatic improvement in the quality of current computer applications (e.g., imagine an airline scheduling program that improves its scheduling method based on analyzing past delays). And while the potential economic impact of successful learning methods is sufficient reason to invest in research into machine learning, there is a second significant reason: studying machine learning helps us understand our own human learning abilities and disabilities, leading to the possibility of improved methods in education. While many open questions remain about the methods by which machines and humans might learn, significant progress has been made.
Quantum systems with many degrees of freedom are inherently difficult to describe and simulate quantitatively. The space of possible states is, in general, exponentially large in the number of degrees of freedom, such as the number of particles the system contains. Standard digital high-performance computing is generally too weak to capture all the necessary details, so alternative quantum simulation devices have been proposed as a solution. Artificial neural networks, with their high non-local connectivity between the neuron degrees of freedom, may soon gain importance in simulating the static and dynamical behavior of quantum systems. Particularly promising candidates are neuromorphic realizations based on analog electronic circuits, which are being developed to capture, e.g., the functioning of biologically relevant networks. In turn, such neuromorphic systems may be used to measure and control real quantum many-body systems online. This thesis lays an important foundation for the realization of quantum simulations by means of neuromorphic hardware, for using quantum physics as an input to classical neural nets and, in turn, for feeding network results back to quantum systems. The necessary foundations on both sides, quantum physics and artificial neural networks, are described, providing a valuable reference for researchers from these different communities who need to understand the foundations of both.
This book highlights the financial community's realization that corporate communication has failed to meet the needs of forensic professionals, a failure that has led to structural weaknesses in areas such as flawed internal controls, poor corporate governance, and fraudulent financial statements. A vital need exists for the development of forensic accounting techniques, a reduction in external auditor deficiencies in fraud detection, and the use of cloud forensic audit to enhance corporate efficiency in fraud detection. This book discusses forensic accounting techniques and explores how forensic accountants add value while investigating claims and fraud. It also highlights the corporate benefits of forensic accounting audits and the acceptance of this evidence in courts of law. The chapters ultimately show the significance of forensic accounting audits and how research in the field has developed. Research into new ways, techniques, and methods for minimizing corporate damage can greatly benefit society.
Both the way we look at data, through a DBMS, and the nature of the data we ask a DBMS to manage have drastically evolved over the last decade, moving from text to images (and, to a lesser extent, sound). Visual representations are used extensively within new user interfaces. Powerful visual approaches are being experimented with for data manipulation, including the investigation of three-dimensional display techniques. Similarly, sophisticated data visualization techniques are dramatically improving the understanding of the information extracted from a database. On the other hand, more and more applications use images as basic data or to enhance the quality and richness of data manipulation services. Image management has opened a wide area of new research topics in image understanding and analysis. The IFIP 2.6 Working Group on Databases strongly believes that a significant mutual enrichment is possible by confronting ideas, concepts and techniques supporting the work of researchers and practitioners in the two areas of visual interfaces to DBMS and DBMS management of visual data. For this reason, IFIP 2.6 has launched a series of conferences on Visual Database Systems. The first was held in Tokyo in 1989, and VDB-2 was held in Budapest in 1991. This conference is the third in the series. Like the preceding editions, the conference addresses researchers and practitioners active or interested in user interfaces, human-computer communication, knowledge representation and management, image processing and understanding, multimedia database techniques and computer vision.
The work described in this book was first presented at the Second Workshop on Genetic Programming, Theory and Practice, organized by the Center for the Study of Complex Systems at the University of Michigan, Ann Arbor, 13-15 May 2004. The goal of this workshop series is to promote the exchange of research results and ideas between those who focus on Genetic Programming (GP) theory and those who focus on the application of GP to various real-world problems. In order to facilitate these interactions, the number of talks and participants was small and the time for discussion was large. Further, participants were asked to review each other's chapters before the workshop. Those reviewer comments, as well as discussion at the workshop, are reflected in the chapters presented in this book. Additional information about the workshop, addendums to chapters, and a site for continuing discussions by participants and by others can be found at http://cscs.umich.edu:8000/GPTP-20041. We thank all the workshop participants for making the workshop an exciting and productive three days. In particular we thank all the authors, without whose hard work and creative talents, neither the workshop nor the book would be possible. We also thank our keynote speakers Lawrence ("Dave") Davis of NuTech Solutions, Inc., Jordan Pollack of Brandeis University, and Richard Lenski of Michigan State University, who delivered three thought-provoking speeches that inspired a great deal of discussion among the participants.
Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's algorithmic notion of randomness. Based on these approximations, a new set of machine learning algorithms has been developed that can be used to make predictions and to estimate their confidence and credibility in high-dimensional spaces under the usual assumption that the data are independent and identically distributed (the assumption of randomness). Another aim of this unique monograph is to outline some limits of prediction: the approach based on the algorithmic theory of randomness allows for proofs of the impossibility of prediction in certain situations. The book describes how several important machine learning problems, such as density estimation in high-dimensional spaces, cannot be solved if the only assumption is randomness.
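To make the idea of predictions accompanied by confidence under the i.i.d. assumption concrete, here is a minimal split-conformal regression sketch in Python. It is a simplified illustration of the conformal-prediction machinery this line of work develops, not the book's own construction; the underlying model, the synthetic data, and the significance level alpha are all assumptions for exposition.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=400)

# Split the data: fit a model on one half, calibrate nonconformity scores on the other.
X_fit, y_fit, X_cal, y_cal = X[:200], y[:200], X[200:], y[200:]
model = Ridge().fit(X_fit, y_fit)
scores = np.abs(y_cal - model.predict(X_cal))   # nonconformity: absolute residuals

alpha = 0.1                                     # target error rate (90% confidence)
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

Under the randomness (i.i.d.) assumption alone, intervals built this way cover the true label with probability of at least 1 - alpha, which is the kind of validity guarantee the book formalizes.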
Most machine learning research has been concerned with the development of systems that implement one type of inference within a single representational paradigm. Such systems, which can be called monostrategy learning systems, include those for empirical induction of decision trees or rules, explanation-based generalization, neural net learning from examples, genetic algorithm-based learning, and others. Monostrategy learning systems can be very effective and useful if the learning problems to which they are applied are sufficiently narrowly defined. Many real-world applications, however, pose learning problems that go beyond the capability of monostrategy learning methods. In view of this, recent years have witnessed a growing interest in developing multistrategy systems, which integrate two or more inference types and/or paradigms within one learning system. Such multistrategy systems take advantage of the complementarity of different inference types or representational mechanisms. Therefore, they have the potential to be more versatile and more powerful than monostrategy systems. On the other hand, due to their greater complexity, their development is significantly more difficult and represents a great new challenge to the machine learning community. Multistrategy Learning contains contributions characteristic of the current research in this area.
PHILOSOPHY AND COGNITIVE SCIENCE: CATEGORIES, CONSCIOUSNESS, AND REASONING. "The individual man, since his separate existence is manifested only by ignorance and error, so far as he is anything apart from his fellows, and from what he and they are to be, is only a negation." Peirce, Some Consequences of Four Incapacities, 1868. For the second time the International Colloquium on Cognitive Science gathered at San Sebastian, from May 7-11, 1991, to discuss the following main topics: knowledge of categories; consciousness; reasoning and interpretation; and evolution, biology, and mind. It is not an easy task to introduce in a few words the content of this volume. We have collected eleven invited papers presented at the Colloquium, which represent the substantial part of it. Unfortunately, it has not been possible to include all the invited lectures of the meeting. Before sketching and showing the relevance of each paper, let us explain the reasons for having adopted the decision to organize an international colloquium on Cognitive Science at Donostia (San Sebastian) every two years. First of all, Cognitive Science is a very active research area in the world, linking multidisciplinary efforts coming mostly from psychology, artificial intelligence, theoretical linguistics and neurobiology, and using more and more formal tools. We think that this new discipline lacks solid foundations, and in this sense philosophy, particularly knowledge theory, and logic must be called for.
Making Robots Smarter is a book about learning robots. It treats this topic based on the idea that the integration of sensing and action is the central issue. In the first part of the book, aspects of learning in execution and control are discussed. Methods for the automatic synthesis of controllers, for active sensing, for learning to enhance assembly, and for learning sensor-based navigation are presented. Since robots are not isolated but should serve us, the second part of the book discusses learning for human-robot interaction. Methods of learning understandable concepts for assembly, monitoring, and navigation are described as well as optimizing the implementation of such understandable concepts for a robot's real-time performance. In terms of the study of embodied intelligence, Making Robots Smarter asks how skills are acquired and where capabilities of execution and control come from. Can they be learned from examples or experience? What is the role of communication in the learning procedure? Whether we name it one way or the other, the methodological challenge is that of integrating learning capabilities into robots.
This book presents a modular and expandable technique, ISAC (Instance-Specific Algorithm Configuration), in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis received an honorable mention (runner-up) for the ACP Dissertation Award in 2014, and this book includes some expanded sections and notes on recent developments. Additionally, the techniques described in this book have been successfully applied to a number of solvers competing in the SAT and MaxSAT International Competitions, winning a total of 18 gold medals between 2011 and 2014. The book will be of interest to researchers and practitioners in artificial intelligence, in particular in the areas of machine learning and constraint programming.
Designing complex programs such as operating systems, compilers, filing systems, and database systems is a long-standing research area. Genetic programming is a relatively new, promising, and growing research area. Among other uses, it provides efficient tools to deal with hard problems by evolving creative and competitive solutions. Systems programming is generally strewn with such hard problems. This book is devoted to reporting innovative and significant progress on the contribution of genetic programming to systems programming. The contributions in this book clearly demonstrate that genetic programming is very effective in solving hard and still-open problems in systems programming. Following an introductory chapter, the remaining contributed chapters show the reader systems where genetic programming can be applied successfully. These include, but are not limited to, information security systems, compilers, data mining systems, stock market prediction systems, robots, and automatic programming.
Ontology Learning for the Semantic Web explores techniques for applying knowledge discovery to different web data sources (such as HTML documents, dictionaries, etc.) in order to support the task of engineering and maintaining ontologies. The approach to ontology learning proposed in Ontology Learning for the Semantic Web includes a number of complementary disciplines that feed on different types of unstructured and semi-structured data. This data is necessary in order to support a semi-automatic ontology engineering process.
This book embodies principles and applications of advanced soft computing approaches in engineering, healthcare, and allied domains, and is directed toward researchers aspiring to learn and apply intelligent data analytics techniques. The first part covers AI, machine learning, and data analytics tools and techniques and their applications to a range of real-life hospital and health problems. The later part covers applications of AI, ML, and data analytics across a wide variety of areas in hospital, health, engineering, and applied sciences, such as clinical services, medical image analysis, management support, quality analysis, bioinformatics, device analysis, and operations. The book presents the knowledge of experts in the form of chapters, with the objective of introducing the theme of intelligent data analytics and discussing associated theoretical applications. Finally, it presents simulation code for the problems included in the book, to aid beginners' understanding.
Independent Component Analysis (ICA) is a signal-processing method to extract independent sources given only observed data that are mixtures of the unknown sources. Recently, blind source separation by ICA has received considerable attention because of its potential signal-processing applications, such as speech enhancement systems, telecommunications, medical signal processing, and several data mining issues. This book presents theories and applications of ICA and includes invaluable examples of several real-world applications. Based on theories in probabilistic models, information theory, and artificial neural networks, several unsupervised learning algorithms are presented that can perform ICA. The seemingly different theories, such as infomax, maximum likelihood estimation, negentropy maximization, nonlinear PCA, the Bussgang algorithm, and cumulant-based methods, are reviewed and put in an information-theoretic framework to unify several lines of ICA research. An algorithm is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. The learning algorithms can be extended to filter systems, which allows the separation of voices recorded in a real environment (the cocktail party problem). The ICA algorithm has been successfully applied to many biomedical signal-processing problems, such as the analysis of electroencephalographic data and functional magnetic resonance imaging data. ICA applied to images results in independent image components that can be used as features in pattern classification problems such as visual lip-reading and face recognition systems. The ICA algorithm can furthermore be embedded in an expectation maximization framework for unsupervised classification. Independent Component Analysis: Theory and Applications is the first book to successfully address this fairly new and generally applicable method of blind source separation. It is essential reading for researchers and practitioners with an interest in ICA.
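The blind source separation task described above (recovering independent sources from observed mixtures) can be illustrated with a small Python sketch. For brevity it uses scikit-learn's FastICA rather than the infomax and extended-infomax algorithms emphasized in the book; the synthetic sources and the mixing matrix are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                        # sinusoidal source
s2 = np.sign(np.sin(3 * t))               # square-wave source
S = np.c_[s1, s2]                         # true, unobserved sources

A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                # unknown mixing matrix
X = S @ A.T                               # observed mixtures ("microphone" signals)

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)              # recovered sources (up to sign, scale, and order)
print("Estimated mixing matrix:\n", ica.mixing_)
```

As in any ICA method, the sources are recovered only up to permutation and scaling, since both are unidentifiable from the mixtures alone.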