This book honours the outstanding contributions of Vladimir Vapnik, a rare example of a scientist for whom the following statements hold true simultaneously: his work led to the inception of a new field of research, the theory of statistical learning and empirical inference; he has lived to see the field blossom; and he is still as active as ever. He started analyzing learning algorithms in the 1960s and invented the first version of the generalized portrait algorithm. He later developed one of the most successful methods in machine learning, the support vector machine (SVM) - more than just an algorithm, this was a new approach to learning problems, pioneering the use of functional analysis and convex optimization in machine learning. Part I of this book contains three chapters describing and witnessing some of Vladimir Vapnik's contributions to science. In the first chapter, Leon Bottou discusses the seminal paper published in 1968 by Vapnik and Chervonenkis that laid the foundations of statistical learning theory, and the second chapter is an English-language translation of that original paper. In the third chapter, Alexey Chervonenkis presents a first-hand account of the early history of SVMs and valuable insights into the first steps in the development of the SVM within the framework of the generalized portrait method. The remaining chapters, by leading scientists in domains such as statistics, theoretical computer science, and mathematics, address substantial topics in the theory and practice of statistical learning theory, including SVMs and other kernel-based methods, boosting, PAC-Bayesian theory, online and transductive learning, loss functions, learnable function classes, notions of complexity for function classes, multitask learning, and hypothesis selection. These contributions include historical and context notes, short surveys, and comments on future research directions. This book will be of interest to researchers, engineers, and graduate students engaged with all aspects of statistical learning.
This thesis discusses the privacy issues in speech-based applications such as biometric authentication, surveillance, and external speech processing services. Author Manas A. Pathak presents solutions for privacy-preserving speech processing applications such as speaker verification, speaker identification, and speech recognition. The author also introduces relevant tools from cryptography and machine learning, along with current techniques for improving the efficiency and scalability of the presented solutions. Experiments with prototype implementations of the solutions, measuring execution time and accuracy on standardized speech datasets, are also included in the text. Using the proposed framework, it may now be possible for a surveillance agency to listen for a known terrorist without being able to hear the conversations of non-targeted, innocent civilians.
One of the most intriguing questions about the new computer technology that has appeared over the past few decades is whether we humans will ever be able to make computers learn. As is painfully obvious to even the most casual computer user, most current computers do not. Yet if we could devise learning techniques that enable computers to routinely improve their performance through experience, the impact would be enormous. The result would be an explosion of new computer applications that would suddenly become economically feasible (e.g., personalized computer assistants that automatically tune themselves to the needs of individual users), and a dramatic improvement in the quality of current computer applications (e.g., imagine an airline scheduling program that improves its scheduling method based on analyzing past delays). And while the potential economic impact of successful learning methods is sufficient reason to invest in research into machine learning, there is a second significant reason: studying machine learning helps us understand our own human learning abilities and disabilities, leading to the possibility of improved methods in education. While many open questions remain about the methods by which machines and humans might learn, significant progress has been made.
This is the first study of Boko Haram that brings advanced data-driven machine learning models both to learn models capable of predicting a wide range of attacks carried out by Boko Haram and to develop data-driven policies to shape Boko Haram's behavior and reduce its attacks. The book also identifies conditions that predict sexual violence, suicide bombings and attempted bombings, abduction, arson, looting, and the targeting of government officials and security installations. After reducing Boko Haram's history to a spreadsheet containing monthly information about different types of attacks and the different circumstances prevailing over a nine-year period, the book introduces Temporal Probabilistic (TP) rules that can be automatically learned from data and are easy to explain to policy makers and security experts. It additionally reports on over one year of forecasts made using the model in order to validate predictive accuracy, and introduces a policy computation method to rein in Boko Haram's attacks. Applied machine learning researchers and predictive modeling experts, as well as counter-terrorism, national and international security, public policy, and Africa experts, will find this book a valuable resource.
This book highlights the financial community's realization that corporate communication has failed to meet the needs of forensic professionals, leading to structural weaknesses such as flawed internal controls, poor corporate governance, and fraudulent financial statements. A vital need exists for the development of forensic accounting techniques, a reduction in external auditors' deficiencies in fraud detection, and the use of cloud forensic audit to enhance corporate efficiency in fraud detection. The book discusses forensic accounting techniques and explores how forensic accountants add value while investigating claims and fraud. It also highlights the corporate benefits of forensic accounting audits and the acceptance of such evidence in courts of law. The chapters ultimately show the significance of forensic accounting audits and how research in the field has developed. Research into new ways, techniques, and methods for minimizing corporate damage can greatly benefit society.
Both the way we look at data, through a DBMS, and the nature of the data we ask a DBMS to manage have drastically evolved over the last decade, moving from text to images (and, to a lesser extent, to sound). Visual representations are used extensively within new user interfaces, and powerful visual approaches are being explored for data manipulation, including the investigation of three-dimensional display techniques. Similarly, sophisticated data visualization techniques are dramatically improving the understanding of the information extracted from a database. On the other hand, more and more applications use images as basic data or to enhance the quality and richness of data manipulation services. Image management has opened a wide area of new research topics in image understanding and analysis. The IFIP 2.6 Working Group on Databases strongly believes that significant mutual enrichment is possible by confronting the ideas, concepts, and techniques supporting the work of researchers and practitioners in the two areas of visual interfaces to DBMSs and DBMS management of visual data. For this reason, IFIP 2.6 has launched a series of conferences on Visual Database Systems. The first was held in Tokyo in 1989, and VDB-2 was held in Budapest in 1991; this conference is the third in the series. Like the preceding editions, the conference addresses researchers and practitioners active or interested in user interfaces, human-computer communication, knowledge representation and management, image processing and understanding, multimedia database techniques, and computer vision.
The work described in this book was first presented at the Second Workshop on Genetic Programming, Theory and Practice, organized by the Center for the Study of Complex Systems at the University of Michigan, Ann Arbor, 13-15 May 2004. The goal of this workshop series is to promote the exchange of research results and ideas between those who focus on Genetic Programming (GP) theory and those who focus on the application of GP to various real-world problems. In order to facilitate these interactions, the number of talks and participants was kept small and the time for discussion was large. Further, participants were asked to review each other's chapters before the workshop. Those reviewer comments, as well as discussion at the workshop, are reflected in the chapters presented in this book. Additional information about the workshop, addendums to chapters, and a site for continuing discussions by participants and by others can be found at http://cscs.umich.edu:8000/GPTP-20041. We thank all the workshop participants for making the workshop an exciting and productive three days. In particular we thank all the authors, without whose hard work and creative talents neither the workshop nor the book would have been possible. We also thank our keynote speakers Lawrence ("Dave") Davis of NuTech Solutions, Inc., Jordan Pollack of Brandeis University, and Richard Lenski of Michigan State University, who delivered three thought-provoking speeches that inspired a great deal of discussion among the participants.
Quantum systems with many degrees of freedom are inherently difficult to describe and simulate quantitatively. The space of possible states is, in general, exponentially large in the number of degrees of freedom such as the number of particles it contains. Standard digital high-performance computing is generally too weak to capture all the necessary details, such that alternative quantum simulation devices have been proposed as a solution. Artificial neural networks, with their high non-local connectivity between the neuron degrees of freedom, may soon gain importance in simulating static and dynamical behavior of quantum systems. Particularly promising candidates are neuromorphic realizations based on analog electronic circuits which are being developed to capture, e.g., the functioning of biologically relevant networks. In turn, such neuromorphic systems may be used to measure and control real quantum many-body systems online. This thesis lays an important foundation for the realization of quantum simulations by means of neuromorphic hardware, for using quantum physics as an input to classical neural nets and, in turn, for using network results to be fed back to quantum systems. The necessary foundations on both sides, quantum physics and artificial neural networks, are described, providing a valuable reference for researchers from these different communities who need to understand the foundations of both.
Most machine learning research has been concerned with the development of systems that implement one type of inference within a single representational paradigm. Such systems, which can be called monostrategy learning systems, include those for empirical induction of decision trees or rules, explanation-based generalization, neural net learning from examples, genetic algorithm-based learning, and others. Monostrategy learning systems can be very effective and useful if the learning problems to which they are applied are sufficiently narrowly defined. Many real-world applications, however, pose learning problems that go beyond the capability of monostrategy learning methods. In view of this, recent years have witnessed a growing interest in developing multistrategy systems, which integrate two or more inference types and/or paradigms within one learning system. Such multistrategy systems take advantage of the complementarity of different inference types or representational mechanisms. Therefore, they have the potential to be more versatile and more powerful than monostrategy systems. On the other hand, due to their greater complexity, their development is significantly more difficult and represents a great new challenge to the machine learning community. Multistrategy Learning contains contributions characteristic of the current research in this area.
Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's algorithmic notion of randomness. Based on these approximations, a new set of machine learning algorithms has been developed that can be used to make predictions and to estimate their confidence and credibility in high-dimensional spaces under the usual assumption that the data are independent and identically distributed (the assumption of randomness). Another aim of this unique monograph is to outline some limits of prediction: the approach based on the algorithmic theory of randomness allows for proofs of the impossibility of prediction in certain situations. The book describes how several important machine learning problems, such as density estimation in high-dimensional spaces, cannot be solved if the only assumption is randomness.
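To give a concrete feel for this kind of confidence prediction, here is a minimal split-conformal sketch in Python. It is only one simple instance of the machinery the book formalizes; the dataset, the underlying classifier, and the nonconformity score below are illustrative assumptions, not constructions taken from the book.

```python
# Split-conformal prediction sketch: prediction sets at a chosen error level.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Nonconformity score: 1 minus the predicted probability of the true class,
# computed on a held-out calibration set.
cal_probs = model.predict_proba(X_cal)
cal_scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

def prediction_set(x, epsilon=0.1):
    """Return every label whose conformal p-value exceeds the significance level epsilon."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    labels = []
    for label, p in enumerate(probs):
        score = 1.0 - p
        # p-value: fraction of calibration scores at least as nonconforming as this one.
        p_value = (np.sum(cal_scores >= score) + 1) / (len(cal_scores) + 1)
        if p_value > epsilon:
            labels.append(label)
    return labels

print(prediction_set(X[0], epsilon=0.1))
```

Under the i.i.d. (randomness) assumption, sets built this way cover the true label with probability of roughly 1 - epsilon, which is the kind of validity guarantee the monograph studies.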
PHILOSOPHY AND COGNITIVE SCIENCE: CATEGORIES, CONSCIOUSNESS, AND REASONING. "The individual man, since his separate existence is manifested only by ignorance and error, so far as he is anything apart from his fellows, and from what he and they are to be, is only a negation." Peirce, Some Consequences of Four Incapacities, 1868. For the second time, the International Colloquium on Cognitive Science gathered at San Sebastian, from May 7-11, 1991, to discuss the following main topics: knowledge of categories; consciousness; reasoning and interpretation; and evolution, biology, and mind. It is not an easy task to introduce the content of this volume in a few words. We have collected eleven invited papers presented at the Colloquium, which constitute the substantial part of it. Unfortunately, it has not been possible to include all the invited lectures of the meeting. Before sketching and showing the relevance of each paper, let us explain the reasons for the decision to organize an international colloquium on Cognitive Science at Donostia (San Sebastian) every two years. First of all, Cognitive Science is a very active research area in the world, linking multidisciplinary efforts coming mostly from psychology, artificial intelligence, theoretical linguistics, and neurobiology, and using more and more formal tools. We think that this new discipline lacks solid foundations, and in this sense philosophy, particularly the theory of knowledge, and logic must be called upon.
Digital Image Enhancement and Reconstruction: Techniques and Applications explores different concepts and techniques used for the enhancement and reconstruction of low-quality images. Most real-life applications require good-quality images to achieve maximum performance; however, the quality of images captured in real-world scenarios is often very unsatisfactory. Most commonly, images are noisy, blurry, hazy, or tiny, and hence need to pass through image enhancement and/or reconstruction algorithms before they can be processed by image analysis applications. This book comprehensively explores application-specific enhancement and reconstruction techniques, including satellite image enhancement, face hallucination, low-resolution face recognition, medical image enhancement and reconstruction, reconstruction of underwater images, text image enhancement, and biometrics. Chapters present a detailed discussion of the challenges faced in handling each particular kind of image, an analysis of the best available solutions, and an exploration of applications and future directions. The book provides readers with a deep dive into denoising, dehazing, super-resolution, and the use of soft computing across a range of engineering applications.
Making Robots Smarter is a book about learning robots. It treats this topic based on the idea that the integration of sensing and action is the central issue. In the first part of the book, aspects of learning in execution and control are discussed. Methods for the automatic synthesis of controllers, for active sensing, for learning to enhance assembly, and for learning sensor-based navigation are presented. Since robots are not isolated but should serve us, the second part of the book discusses learning for human-robot interaction. Methods of learning understandable concepts for assembly, monitoring, and navigation are described as well as optimizing the implementation of such understandable concepts for a robot's real-time performance. In terms of the study of embodied intelligence, Making Robots Smarter asks how skills are acquired and where capabilities of execution and control come from. Can they be learned from examples or experience? What is the role of communication in the learning procedure? Whether we name it one way or the other, the methodological challenge is that of integrating learning capabilities into robots.
Designing complex programs such as operating systems, compilers, filing systems, and database systems is an old, everlasting research area. Genetic programming is a relatively new, promising, and growing research area. Among other uses, it provides efficient tools to deal with hard problems by evolving creative and competitive solutions. Systems programming is generally strewn with such hard problems. This book is devoted to reporting innovative and significant progress on the contribution of genetic programming to systems programming. The contributions of this book clearly demonstrate that genetic programming is very effective in solving hard and still-open problems in systems programming. After an introductory chapter, the remaining contributed chapters show the reader systems where genetic programming can be applied successfully. These include, but are not limited to, information security systems, compilers, data mining systems, stock market prediction systems, robots, and automatic programming.
This book presents ISAC (Instance-Specific Algorithm Configuration), a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014, and this book includes some expanded sections and notes on recent developments. Additionally, the techniques described in this book have been successfully applied to a number of solvers competing in the SAT and MaxSAT International Competitions, winning a total of 18 gold medals between 2011 and 2014. The book will be of interest to researchers and practitioners in artificial intelligence, in particular in the areas of machine learning and constraint programming.
Ontology Learning for the Semantic Web explores techniques for applying knowledge discovery techniques to different web data sources (such as HTML documents, dictionaries, etc.) in order to support the task of engineering and maintaining ontologies. The approach to ontology learning proposed in Ontology Learning for the Semantic Web includes a number of complementary disciplines that feed in different types of unstructured and semi-structured data. This data is necessary in order to support a semi-automatic ontology engineering process.
Independent Component Analysis (ICA) is a signal-processing method to extract independent sources given only observed data that are mixtures of the unknown sources. Recently, blind source separation by ICA has received considerable attention because of its potential signal-processing applications such as speech enhancement systems, telecommunications, medical signal-processing and several data mining issues. This book presents theories and applications of ICA and includes invaluable examples of several real-world applications. Based on theories in probabilistic models, information theory and artificial neural networks, several unsupervised learning algorithms are presented that can perform ICA. The seemingly different theories such as infomax, maximum likelihood estimation, negentropy maximization, nonlinear PCA, Bussgang algorithm and cumulant-based methods are reviewed and put in an information theoretic framework to unify several lines of ICA research. An algorithm is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. The learning algorithms can be extended to filter systems, which allows the separation of voices recorded in a real environment (cocktail party problem). The ICA algorithm has been successfully applied to many biomedical signal-processing problems such as the analysis of electroencephalographic data and functional magnetic resonance imaging data. ICA applied to images results in independent image components that can be used as features in pattern classification problems such as visual lip-reading and face recognition systems. The ICA algorithm can furthermore be embedded in an expectation maximization framework for unsupervised classification. Independent Component Analysis: Theory and Applications is the first book to successfully address this fairly new and generally applicable method of blind source separation. It is essential reading for researchers and practitioners with an interest in ICA.
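As a quick illustration of blind source separation, the sketch below uses scikit-learn's FastICA; this is an assumed, off-the-shelf stand-in rather than the infomax and information-theoretic algorithms developed in the book, and the signals and mixing matrix are synthetic.

```python
# Minimal ICA sketch: recover two independent sources from their linear mixtures.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two independent sources: a sinusoid and a square-wave-like signal.
s1 = np.sin(2 * t)
s2 = np.sign(np.sin(3 * t))
S = np.c_[s1, s2]

# Mix them with a mixing matrix that is unknown to the algorithm.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T

# Estimate the independent components from the observed mixtures alone.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)   # estimated sources (up to sign and scale)
A_est = ica.mixing_            # estimated mixing matrix

print(S_est.shape, A_est.shape)  # (2000, 2) (2, 2)
```

The recovered components match the true sources only up to permutation, sign, and scale, which is the usual ambiguity of ICA-based blind source separation.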
To date, computers are supposed to store and exploit knowledge. At least that is one of the aims of research fields such as Artificial Intelligence and Information Systems. However, the problem is to understand what knowledge means, to find ways of representing knowledge, and to specify automated machineries that can extract useful information from stored knowledge. Knowledge is something people have in their minds and can express through natural language. Knowledge is acquired not only from books, but also from observations made during experiments; in other words, from data. Changing data into knowledge is not a straightforward task. A set of data is generally disorganized and contains useless details, and yet it can also be incomplete. Knowledge is just the opposite: organized (e.g. laying bare dependencies, or classifications), but expressed by means of a poorer language, i.e. pervaded by imprecision or even vagueness, and assuming a level of granularity. One may say that knowledge is summarized and organized data - at least the kind of knowledge that computers can store.
Although good devices exist for presenting visual and auditory sensations, there has yet to be a device for presenting olfactory stimuli. Nevertheless, the area of smell presentation continues to evolve, and smell presentation in multimedia is not unlikely in the future. Human Olfactory Displays and Interfaces: Odor Sensing and Presentation provides the opportunity to learn about olfactory displays and odor reproduction. Covering the fundamentals and the latest research on sensors and sensing systems as well as presentation techniques, this book is vital for researchers, students, and practitioners gaining knowledge in the fields of consumer electronics, communications, virtual reality, electronic instruments, and more.
Every mathematical discipline goes through three periods of development: the naive, the formal, and the critical. (David Hilbert) The goal of this book is to explain the principles that made support vector machines (SVMs) a successful modeling and prediction tool for a variety of applications. We try to achieve this by presenting the basic ideas of SVMs together with the latest developments and current research questions in a unified style. In a nutshell, we identify at least three reasons for the success of SVMs: their ability to learn well with only a very small number of free parameters, their robustness against several types of model violations and outliers, and last but not least their computational efficiency compared with several other methods. Although there are several roots and precursors of SVMs, these methods gained particular momentum during the last 15 years since Vapnik (1995, 1998) published his well-known textbooks on statistical learning theory with a special emphasis on support vector machines. Since then, the field of machine learning has witnessed intense activity in the study of SVMs, which has spread more and more to other disciplines such as statistics and mathematics. Thus it seems fair to say that several communities are currently working on support vector machines and on related kernel-based methods. Although there are many interactions between these communities, we think that there is still room for additional fruitful interaction and would be glad if this textbook were found helpful in stimulating further research. Many of the results presented in this book have previously been scattered in the journal literature or are still under review. As a consequence, these results have been accessible only to a relatively small number of specialists, sometimes probably only to people from one community but not the other.
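As a small, hedged illustration of the kind of model the book analyses, the following sketch fits a soft-margin kernel SVM with scikit-learn; the synthetic dataset and the particular hyperparameter values are assumptions chosen only for demonstration, not settings recommended by the book.

```python
# Fit an RBF-kernel support vector machine on a synthetic two-class problem.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# C controls the soft-margin trade-off, gamma the width of the RBF kernel:
# together they are the "very small number of free parameters" mentioned above.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
print("number of support vectors:", clf.support_vectors_.shape[0])
```

Only the support vectors enter the final decision function, which is one source of the computational efficiency the authors highlight.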
This book embodies principles and applications of advanced soft computing approaches in engineering, healthcare, and allied domains, directed toward researchers aspiring to learn and apply intelligent data analytics techniques. The first part covers AI, machine learning, and data analytics tools and techniques and their application to several real-life hospital and health problems. The later part covers applications of AI, ML, and data analytics across a wide variety of hospital, health, engineering, and applied-science settings, such as clinical services, medical image analysis, management support, quality analysis, bioinformatics, device analysis, and operations. The book presents the knowledge of experts in the form of chapters, with the objective of introducing the theme of intelligent data analytics and discussing associated theoretical applications. Finally, it presents simulation code for the problems included in the book, for the better understanding of beginners.
Since the introduction of genetic algorithms in the 1970s, an enormous number of articles together with several significant monographs and books have been published on this methodology. As a result, genetic algorithms have made a major contribution to optimization, adaptation, and learning in a wide variety of unexpected fields. Over the years, many excellent books in genetic algorithm optimization have been published; however, they focus mainly on single-objective discrete or other hard optimization problems under certainty. There appears to be no book that is designed to present genetic algorithms for solving not only single-objective but also fuzzy and multiobjective optimization problems in a unified way. Genetic Algorithms And Fuzzy Multiobjective Optimization introduces the latest advances in the field of genetic algorithm optimization for 0-1 programming, integer programming, nonconvex programming, and job-shop scheduling problems under multiobjectiveness and fuzziness. In addition, the book treats a wide range of actual real world applications. The theoretical material and applications place special stress on interactive decision-making aspects of fuzzy multiobjective optimization for human-centered systems in most realistic situations when dealing with fuzziness. The intended readers of this book are senior undergraduate students, graduate students, researchers, and practitioners in the fields of operations research, computer science, industrial engineering, management science, systems engineering, and other engineering disciplines that deal with the subjects of multiobjective programming for discrete or other hard optimization problems under fuzziness. Real-world research applications are used throughout the book to illustrate the presentation. These applications are drawn from complex problems. Examples include flexible scheduling in a machine center, operation planning of district heating and cooling plants, and coal purchase planning in an actual electric power plant.
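To give a concrete feel for the basic machinery, here is a minimal single-objective genetic algorithm for a toy 0-1 knapsack instance in Python. It is only a sketch under assumed toy data, far simpler than the fuzzy multiobjective and interactive methods the book develops.

```python
# Tiny genetic algorithm for a 0-1 knapsack instance (toy data, single objective).
import random

values  = [10, 40, 30, 50, 35, 25]
weights = [ 5, 10,  6, 12,  8,  7]
CAPACITY, POP, GENS = 25, 30, 60

def fitness(chrom):
    v = sum(val for val, bit in zip(values, chrom) if bit)
    w = sum(wt  for wt,  bit in zip(weights, chrom) if bit)
    return v if w <= CAPACITY else 0          # infeasible solutions get zero fitness

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)       # one-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in chrom]

random.seed(0)
pop = [[random.randint(0, 1) for _ in values] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 2]                    # simple truncation selection
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print(best, fitness(best))
```

A multiobjective or fuzzy variant would replace the single fitness value with a vector of objectives and membership functions, which is exactly where the book's methods come in.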
Genetic programming (GP), one of the most advanced forms of evolutionary computation, has been highly successful as a technique for getting computers to automatically solve problems without having to tell them explicitly how. Since its inception more than ten years ago, GP has been used to solve practical problems in a variety of application fields. Alongside these ad hoc engineering approaches, interest has increased in how and why GP works. This book provides a coherent consolidation of recent work on the theoretical foundations of GP. A concise introduction to GP and genetic algorithms (GA) is followed by a discussion of fitness landscapes and other theoretical approaches to natural and artificial evolution. Having surveyed early approaches to GP theory, it presents a new exact schema analysis, showing that it applies to GP as well as to the simpler GAs. New results on the potentially infinite number of possible programs are followed by two chapters applying these new techniques.
The expansion of digital data has transformed various sectors of business such as healthcare, industrial manufacturing, and transportation. A new way of solving business problems has emerged through the use of machine learning techniques in conjunction with big data analytics. Deep Learning Innovations and Their Convergence With Big Data is a pivotal reference for the latest scholarly research on upcoming trends in data analytics and potential technologies that will facilitate insight in various domains of science, industry, business, and consumer applications. Featuring extensive coverage on a broad range of topics and perspectives such as deep neural networks, domain adaptation modeling, and threat detection, this book is ideally designed for researchers, professionals, and students seeking current research on the latest trends in the field of deep learning techniques in big data analytics. Contents include: deep auto-encoders, deep neural networks, domain adaptation modeling, multilayer perceptrons (MLP), natural language processing (NLP), restricted Boltzmann machines (RBM), and threat detection.
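As one small, hedged illustration of the listed building blocks, the sketch below trains a multilayer perceptron with scikit-learn; the synthetic data and hyperparameters are assumptions for demonstration only and are not drawn from the book.

```python
# Minimal multilayer perceptron (MLP) classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers; a very small stand-in for the deeper architectures surveyed in the book.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```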