
Books > Computing & IT > Applications of computing > Artificial intelligence > Machine learning

Evolutionary Computation in Data Mining (Hardcover, 2005 ed.)
Ashish Ghosh
R2,804 Discovery Miles 28 040 Ships in 18 - 22 working days

Data mining (DM) consists of extracting interesting knowledge from real-world, large, and complex data sets, and is the core step of a broader process called knowledge discovery from databases (KDD). In addition to the DM step, which actually extracts knowledge from data, the KDD process includes several preprocessing (or data preparation) and post-processing (or knowledge refinement) steps. The goal of data preprocessing methods is to transform the data to facilitate the application of one or more given DM algorithms, whereas the goal of knowledge refinement methods is to validate and refine discovered knowledge. Ideally, discovered knowledge should be not only accurate but also comprehensible and interesting to the user. The whole process is highly computation-intensive. Automatically discovering knowledge from databases is an attractive and challenging task, both for academia and for industry. Hence, there has been growing interest in data mining in several AI-related areas, including evolutionary algorithms (EAs). The main motivation for applying EAs to KDD tasks is that they are robust and adaptive search methods that perform a global search in the space of candidate solutions (for instance, rules or another form of knowledge representation).

Machine Learning for Time Series Forecasting with Python (Paperback)
F Lazzeri
R1,008 Discovery Miles 10 080 Ships in 10 - 15 working days

Learn how to apply the principles of machine learning to time series modeling with this indispensable resource. Machine Learning for Time Series Forecasting with Python is an incisive and straightforward examination of one of the most crucial elements of decision-making in finance, marketing, education, and healthcare: time series modeling. Despite the centrality of time series forecasting, few business analysts are familiar with the power or utility of applying machine learning to time series modeling. Author Francesca Lazzeri, a distinguished machine learning scientist and economist, corrects that deficiency by providing readers with a comprehensive and approachable explanation and treatment of the application of machine learning to time series forecasting. Written for readers who have little to no experience in time series forecasting or machine learning, the book comprehensively covers all the topics necessary to: * Understand time series forecasting concepts, such as stationarity, horizon, trend, and seasonality * Prepare time series data for modeling * Evaluate time series forecasting models' performance and accuracy * Understand when to use neural networks instead of traditional time series models in time series forecasting. Machine Learning for Time Series Forecasting with Python is full of real-world examples, resources, and concrete strategies to help readers explore and transform data and develop usable, practical time series forecasts. Perfect for entry-level data scientists, business analysts, developers, and researchers, this book is an invaluable and indispensable guide to the fundamental and advanced concepts of machine learning applied to time series modeling.

Computational Finance with R (Hardcover, 1st ed. 2023)
Rituparna Sen, Sourish Das
R3,990 Discovery Miles 39 900 Ships in 10 - 15 working days

This book prepares students to execute the quantitative and computational needs of the finance industry. The quantitative methods are explained in detail with examples from real financial problems like option pricing, risk management, portfolio selection, etc. Codes are provided in R programming language to execute the methods. Tables and figures, often with real data, illustrate the codes. References to related work are intended to aid the reader to pursue areas of specific interest in further detail. The comprehensive background with economic, statistical, mathematical, and computational theory strengthens the understanding. The coverage is broad, and linkages between different sections are explained. The primary audience is graduate students, while it should also be accessible to advanced undergraduates. Practitioners working in the finance industry will also benefit.

Strength or Accuracy: Credit Assignment in Learning Classifier Systems (Hardcover, 2004 ed.)
Tim Kovacs
R4,183 Discovery Miles 41 830 Ships in 18 - 22 working days

The Distinguished Dissertations series is published on behalf of the Conference of Professors and Heads of Computing and the British Computer Society, who annually select the best British PhD dissertations in computer science for publication. The dissertations are selected on behalf of the CPHC by a panel of eight academics. Each dissertation chosen makes a noteworthy contribution to the subject and reaches a high standard of exposition, placing all results clearly in the context of computer science as a whole. In this way computer scientists with significantly different interests are able to grasp the essentials - or even find a means of entry - to an unfamiliar research topic. Machine learning promises both to create machine intelligence and to shed light on natural intelligence. A fundamental issue for either endeavour is that of credit assignment, which we can pose as follows: how can we credit individual components of a complex adaptive system for their often subtle effects on the world? For example, in a game of chess, how did each move (and the reasoning behind it) contribute to the outcome? This text studies aspects of credit assignment in learning classifier systems, which combine evolutionary algorithms with reinforcement learning methods to address a range of tasks from pattern classification to stochastic control to simulation of learning in animals. Credit assignment in classifier systems is complicated by two features: 1) their components are frequently modified by evolutionary search, and 2) components tend to interact. Classifier systems are re-examined from first principles and the result is, primarily, a formalization of learning in these systems, and a body of theory relating types of classifier systems, learning tasks, and credit assignment pathologies. Most significantly, it is shown that each of the two main approaches has difficulties with certain tasks that the other does not.

Evolutionary Algorithms and Agricultural Systems (Hardcover, 2002 ed.)
David G. Mayer
R5,201 Discovery Miles 52 010 Ships in 18 - 22 working days

Evolutionary Algorithms and Agricultural Systems deals with the practical application of evolutionary algorithms to the study and management of agricultural systems. The rationale of systems research methodology is introduced, and examples listed of real-world applications. It is the integration of these agricultural systems models with optimization techniques, primarily genetic algorithms, which forms the focus of this book. The advantages are outlined, with examples of agricultural models ranging from national and industry-wide studies down to the within-farm scale. The potential problems of this approach are also discussed, along with practical methods of resolving these problems. Agricultural applications using alternate optimization techniques (gradient and direct-search methods, simulated annealing and quenching, and the tabu search strategy) are also listed and discussed. The particular problems and methodologies of these algorithms, including advantageous features that may benefit a hybrid approach or be usefully incorporated into evolutionary algorithms, are outlined. From consideration of this and the published examples, it is concluded that evolutionary algorithms are the superior method for the practical optimization of models of agricultural and natural systems. General recommendations on robust options and parameter settings for evolutionary algorithms are given for use in future studies. Evolutionary Algorithms and Agricultural Systems will prove useful to practitioners and researchers applying these methods to the optimization of agricultural or natural systems, and would also be suited as a text for systems management, applied modeling, or operations research.

Learning and Generalisation - With Applications to Neural Networks (Hardcover, 2nd ed. 2002)
Mathukumalli Vidyasagar
R4,963 Discovery Miles 49 630 Ships in 18 - 22 working days

Learning and Generalization provides a formal mathematical theory for addressing intuitive questions of the type: * How does a machine learn a new concept on the basis of examples? * How can a neural network, after sufficient training, correctly predict the outcome of a previously unseen input? * How much training is required to achieve a specified level of accuracy in the prediction? * How can one identify the dynamical behaviour of a nonlinear control system by observing its input-output behaviour over a finite interval of time? The first edition, A Theory of Learning and Generalization, was the first book to treat the problem of machine learning in conjunction with the theory of empirical processes, the latter being a well-established branch of probability theory. The treatment of both topics side-by-side leads to new insights, as well as new results in both topics. The second edition extends and improves upon this material, covering new areas including: * Support vector machines (SVMs) * Fat-shattering dimensions and applications to neural network learning * Learning with dependent samples generated by a beta-mixing process * Connections between system identification and learning theory * Probabilistic solution of 'intractable problems' in robust control and matrix theory using randomized algorithms. It also contains solutions to some of the open problems posed in the first edition, while adding new open problems. This book is essential reading for control and system theorists, neural network researchers, theoretical computer scientists and probabilists. The Communications and Control Engineering series reflects the major technological advances which have a great impact in the fields of communication and control. It reports on the research in industrial and academic institutions around the world to exploit the new possibilities which are becoming available.

The Nature of Statistical Learning Theory (Hardcover, 2nd ed. 2000)
Vladimir Vapnik
R5,979 Discovery Miles 59 790 Ships in 18 - 22 working days

The aim of this book is to discuss the fundamental ideas which lie behind the statistical theory of learning and generalization. It considers learning as a general problem of function estimation based on empirical data. Omitting proofs and technical details, the author concentrates on discussing the main results of learning theory and their connections to fundamental problems in statistics. These include: * the setting of learning problems based on the model of minimizing the risk functional from empirical data * a comprehensive analysis of the empirical risk minimization principle, including necessary and sufficient conditions for its consistency * non-asymptotic bounds for the risk achieved using the empirical risk minimization principle * principles for controlling the generalization ability of learning machines using small sample sizes based on these bounds * the Support Vector methods that control the generalization ability when estimating functions from small samples. The second edition of the book contains three new chapters devoted to further development of the learning theory and SVM techniques. These include: * the theory of direct methods of learning based on solving multidimensional integral equations for density, conditional probability, and conditional density estimation * a new inductive principle of learning. Written in a readable and concise style, the book is intended for statisticians, mathematicians, physicists, and computer scientists. Vladimir N. Vapnik is Technology Leader at AT&T Labs-Research and Professor at London University. He is one of the founders of statistical learning theory, and the author of seven books published in English, Russian, German, and Chinese.
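The empirical risk minimization principle named in the blurb above can be stated compactly (a standard formulation, not taken from the book's own notation): given a sample $(x_1, y_1), \dots, (x_n, y_n)$ drawn from an unknown distribution $P$ and a loss function $L$, ERM selects from a hypothesis class $\mathcal{F}$ the function minimizing the empirical risk,

```latex
R_{\mathrm{emp}}(f) = \frac{1}{n}\sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr),
\qquad
\hat{f} = \operatorname*{arg\,min}_{f \in \mathcal{F}} R_{\mathrm{emp}}(f),
```

as a computable surrogate for the true risk $R(f) = \int L(y, f(x)) \, dP(x, y)$, which cannot be minimized directly because $P$ is unknown. The consistency conditions and non-asymptotic bounds listed above concern how closely $R(\hat{f})$ tracks $\min_{f \in \mathcal{F}} R(f)$.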

Extreme Learning Machines 2013: Algorithms and Applications (Hardcover, 2014)
Fuchen Sun, Kar-Ann Toh, Manuel Grana Romay, Kezhi Mao
R3,589 R3,328 Discovery Miles 33 280 Save R261 (7%) Ships in 10 - 15 working days

In recent years, the extreme learning machine (ELM) has emerged as a revolutionary technique in computational intelligence and has attracted considerable attention. An ELM is a learning system resembling a single-layer feed-forward neural network, whose connections from the input layer to the hidden layer are randomly generated, while the connections from the hidden layer to the output layer are learned through linear methods. The outstanding merits of the ELM are its fast learning speed, minimal human intervention, and high scalability.
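The description above pins down the whole algorithm: random, fixed input-to-hidden weights, and a single linear solve for the output weights. A minimal NumPy sketch (a toy illustration on synthetic data, not code from the book; the sizes and the tanh activation are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) from 200 samples.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel()

# ELM: input-to-hidden weights and biases are random and never trained.
n_hidden = 50
W = rng.normal(size=(1, n_hidden))   # random input weights
b = rng.normal(size=n_hidden)        # random biases

# Hidden-layer activations, then a single least-squares solve
# for the hidden-to-output weights -- the only "learning" step.
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ beta
mse = float(np.mean((pred - y) ** 2))
```

Because the only fitted parameters come from one linear solve, training is a single matrix factorization rather than iterative gradient descent, which is the source of the "fast learning speed" claimed above.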

This book contains selected papers from the International Conference on Extreme Learning Machine 2013, which was held in Beijing, China, October 15-17, 2013. The conference aimed to bring together researchers and practitioners of extreme learning machines from a variety of fields, including artificial intelligence, biomedical engineering and bioinformatics, system modelling and control, and signal and image processing, to promote research and discussion of learning without iterative tuning.

This book covers algorithms and applications of ELM. It gives readers a glimpse of the newest developments in ELM.

Exploration of Visual Data (Hardcover, 2003 ed.)
Sean Xiang Zhou, Yong Rui, Thomas S. Huang
R2,761 Discovery Miles 27 610 Ships in 18 - 22 working days

Exploration of Visual Data presents the latest research efforts in the area of content-based exploration of image and video data. The main objective is to bridge the semantic gap between high-level concepts in the human mind and low-level features extractable by machines.

The two key issues emphasized are "content-awareness" and "user-in-the-loop." The authors provide a comprehensive review on algorithms for visual feature extraction based on color, texture, shape, and structure, and techniques for incorporating such information to aid browsing, exploration, search, and streaming of image and video data. They also discuss issues related to the mixed use of textual and low-level visual features to facilitate more effective access of multimedia data.

To bridge the semantic gap, significant recent research effort has also been devoted to learning during user interaction, also known as "relevance feedback." The difficulty and challenge come from the personalized information needs of each user and the small amount of feedback the machine can obtain through real-time user interaction. The authors present and discuss several recently proposed classification and learning techniques that are specifically designed for this problem, with kernel- and boosting-based approaches for nonlinear extensions.

Exploration of Visual Data provides state-of-the-art materials on the topics of content-based description of visual data, content-based low-bitrate video streaming, and latest asymmetric and nonlinear relevance feedback algorithms, which to date are unpublished.

Exploration of Visual Data will be of interest to researchers, practitioners, and graduate-level students in the areas of multimedia information systems, multimedia databases, computer vision, and machine learning.

The Informational Complexity of Learning - Perspectives on Neural Networks and Generative Grammar (Hardcover, 1998 ed.)
Partha Niyogi
R2,784 Discovery Miles 27 840 Ships in 18 - 22 working days

Among other topics, The Informational Complexity of Learning: Perspectives on Neural Networks and Generative Grammar brings together two important but very different learning problems within the same analytical framework. The first concerns learning functional mappings using neural networks; the second, learning natural language grammars in the principles-and-parameters tradition of Chomsky. These two learning problems are seemingly very different. Neural networks are real-valued, infinite-dimensional, continuous mappings. On the other hand, grammars are boolean-valued, finite-dimensional, discrete (symbolic) mappings. Furthermore, the research communities that work in the two areas almost never overlap. The book's objective is to bridge this gap. It uses the formal techniques developed in statistical learning theory and theoretical computer science over the last decade to analyze both kinds of learning problems. By asking the same question - how much information does it take to learn? - of both problems, it highlights their similarities and differences. Specific results include model selection in neural networks, active learning, language learning, and evolutionary models of language change. The Informational Complexity of Learning: Perspectives on Neural Networks and Generative Grammar is a very interdisciplinary work. Anyone interested in the interaction of computer science and cognitive science should enjoy the book. Researchers in artificial intelligence, neural networks, linguistics, theoretical computer science, and statistics will find it particularly relevant.

Identification, Adaptation, Learning - The Science of Learning Models from Data (Hardcover, 1996 ed.)
Sergio Bittanti, Giorgio Picci
R5,472 Discovery Miles 54 720 Ships in 18 - 22 working days

This book collects the lectures given at the NATO Advanced Study Institute From Identification to Learning, held in Villa Olmo, Como, Italy, from August 22 to September 2, 1994. The school was devoted to the themes of Identification, Adaptation and Learning, as they are currently understood in the Information and Control engineering community, their development in the last few decades, their interconnections, and their applications. These titles describe challenging, exciting and rapidly growing research areas which are of interest both to control and communication engineers and to statisticians and computer scientists. In accordance with the general goals of the Institute, and notwithstanding the rather advanced level of the topics discussed, the presentations have been generally kept at a fairly tutorial level. For this reason this book should be valuable to a variety of researchers and to graduate students interested in the general area of Control, Signals and Information Processing. As the goal of the school was to explore a common methodological line of reading the issues, the flavor is quite interdisciplinary. We regard this as an original and valuable feature of this book.

Learning Search Control Knowledge - An Explanation-Based Approach (Hardcover, 1988 ed.)
Steven Minton
R2,770 Discovery Miles 27 700 Ships in 18 - 22 working days

The ability to learn from experience is a fundamental requirement for intelligence. One of the most basic characteristics of human intelligence is that people can learn from problem solving, so that they become more adept at solving problems in a given domain as they gain experience. This book investigates how computers may be programmed so that they too can learn from experience. Specifically, the aim is to take a very general, but inefficient, problem solving system and train it on a set of problems from a given domain, so that it can transform itself into a specialized, efficient problem solver for that domain. Recently there has been considerable progress made on a knowledge-intensive learning approach, explanation-based learning (EBL), that brings us closer to this possibility. As demonstrated in this book, EBL can be used to analyze a problem solving episode in order to acquire control knowledge. Control knowledge guides the problem solver's search by indicating the best alternatives to pursue at each choice point. An EBL system can produce domain-specific control knowledge by explaining why the choices made during a problem solving episode were, or were not, appropriate.

Extending the Scalability of Linkage Learning Genetic Algorithms - Theory & Practice (Hardcover, 2006 ed.)
Ying-Ping Chen
R2,725 Discovery Miles 27 250 Ships in 18 - 22 working days

Genetic algorithms (GAs) are powerful search techniques based on principles of evolution and widely applied to solve problems in many disciplines. However, most GAs employed in practice today are unable to learn genetic linkage and suffer from the linkage problem. The linkage learning genetic algorithm (LLGA) was proposed to tackle the linkage problem with several specially designed mechanisms. While the LLGA performs much better on badly scaled problems than simple GAs, it does not work as well on uniformly scaled problems as other competent GAs do. Therefore, we need to understand why this is so, how to design a better LLGA, and whether there are limits to such a linkage learning process. This book aims to gain a better understanding of the LLGA in theory and to improve the LLGA's performance in practice. It starts with a survey of the existing genetic linkage learning techniques and describes the steps and approaches taken to tackle the research topics, including using promoters, developing the convergence time model, and adopting subchromosomes.

Ensemble Machine Learning - Methods and Applications (Hardcover, 2012)
Cha Zhang, Yunqian Ma
R5,863 Discovery Miles 58 630 Ships in 18 - 22 working days

It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed "ensemble learning" by researchers in computational intelligence and machine learning, it is known to improve a decision system's robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as "boosting" and "random forest" facilitate solutions to key computational issues such as face recognition and are now being applied in areas as diverse as object tracking and bioinformatics.

Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including the random forest skeleton tracking algorithm in the Xbox Kinect sensor, which bypasses the need for game controllers. At once a solid theoretical study and a practical guide, the volume is a windfall for researchers and practitioners alike.

"

Estimation of Distribution Algorithms - A New Tool for Evolutionary Computation (Hardcover, 2002 ed.)
Pedro Larranaga, Jose A. Lozano
R5,365 Discovery Miles 53 650 Ships in 18 - 22 working days

Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation is devoted to a new paradigm for evolutionary computation, named estimation of distribution algorithms (EDAs). This new class of algorithms generalizes genetic algorithms by replacing the crossover and mutation operators with learning and sampling from the probability distribution of the best individuals of the population at each iteration of the algorithm. Working in such a way, the relationships between the variables involved in the problem domain are explicitly and effectively captured and exploited. This text constitutes the first compilation and review of the techniques and applications of this new tool for performing evolutionary computation. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation is clearly divided into three parts. Part I is dedicated to the foundations of EDAs. In this part, after introducing some probabilistic graphical models - Bayesian and Gaussian networks - a review of existing EDA approaches is presented, as well as some new methods based on more flexible probabilistic graphical models. A mathematical modeling of discrete EDAs is also presented. Part II covers several applications of EDAs in some classical optimization problems: the travelling salesman problem, the job scheduling problem, and the knapsack problem. EDAs are also applied to the optimization of some well-known combinatorial and continuous functions. Part III presents the application of EDAs to solve some problems that arise in the machine learning field: feature subset selection, feature weighting in K-NN classifiers, rule induction, partial abductive inference in Bayesian networks, partitional clustering, and the search for optimal weights in artificial neural networks. 
Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation is a useful and interesting tool for researchers working in the field of evolutionary computation and for engineers who face real-world optimization problems. This book may also be used by graduate students and researchers in computer science. '... I urge those who are interested in EDAs to study this well-crafted book today.' - David E. Goldberg, University of Illinois at Urbana-Champaign.
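The core loop described above - replace crossover and mutation with estimating a probability distribution over the best individuals and sampling the next population from it - can be sketched with the simplest EDA, a univariate marginal model on a bitstring objective (a generic illustration, not an algorithm from the book; the OneMax objective and all parameter values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# UMDA-style EDA: no crossover or mutation operators.  Each generation,
# estimate per-bit probabilities from the best individuals, then sample
# the next population from that model.  Objective: OneMax (count of ones).
n_bits, pop_size, n_best, n_gens = 30, 100, 25, 40
p = np.full(n_bits, 0.5)                 # initial bit-wise model

for _ in range(n_gens):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample
    fitness = pop.sum(axis=1)                               # evaluate
    best = pop[np.argsort(fitness)[-n_best:]]               # select
    p = best.mean(axis=0)                # re-estimate the distribution
    p = p.clip(0.05, 0.95)               # retain sampling diversity

best_fitness = int(fitness.max())
```

Because the model here factorizes over bits, it captures no linkage between variables; the multivariate EDAs surveyed in Part I (Bayesian and Gaussian network models) exist precisely to capture the inter-variable relationships this univariate sketch ignores.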

An Elementary Introduction to Statistical Learning Theory (Hardcover)
S. R Kulkarni
R2,819 Discovery Miles 28 190 Ships in 18 - 22 working days

A thought-provoking look at statistical learning theory and its role in understanding human learning and inductive reasoning

A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, "An Elementary Introduction to Statistical Learning Theory" is a comprehensive and accessible primer on the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Explaining these areas at a level and in a way that is not often found in other books on the topic, the authors present the basic theory behind contemporary machine learning and uniquely utilize its foundations as a framework for philosophical thinking about inductive inference.

Promoting the fundamental goal of statistical learning, knowing what is achievable and what is not, this book demonstrates the value of a systematic methodology when used along with the needed techniques for evaluating the performance of a learning system. First, an introduction to machine learning is presented that includes brief discussions of applications such as image recognition, speech recognition, medical diagnostics, and statistical arbitrage. To enhance accessibility, two chapters on relevant aspects of probability theory are provided. Subsequent chapters feature coverage of topics such as the pattern recognition problem, optimal Bayes decision rule, the nearest neighbor rule, kernel rules, neural networks, support vector machines, and boosting.

Appendices throughout the book explore the relationship between the discussed material and related topics from mathematics, philosophy, psychology, and statistics, drawing insightful connections between problems in these areas and statistical learning theory. All chapters conclude with a summary section, a set of practice questions, and a reference section that supplies historical notes and additional resources for further study.

"An Elementary Introduction to Statistical Learning Theory" is an excellent book for courses on statistical learning theory, pattern recognition, and machine learning at the upper-undergraduate and graduate levels. It also serves as an introductory reference for researchers and practitioners in the fields of engineering, computer science, philosophy, and cognitive science that would like to further their knowledge of the topic.

Dynamics of Advanced Materials and Smart Structures (Hardcover, 2003 ed.)
Kazumi Watanabe, Franz Ziegler
R5,414 Discovery Miles 54 140 Ships in 18 - 22 working days

Two key words for mechanical engineering in the future are Micro and Intelligence. It is well known that leadership in intelligence technology is a matter of vital importance for the future status of industrial society, and thus national research projects for intelligent materials, structures and machines have started not only in advanced countries, but also in developing countries. Materials and structures which have self-sensing, diagnosis and actuating systems are called intelligent or smart, and are of growing research interest in the world. In this situation, the IUTAM symposium on Dynamics of Advanced Materials and Smart Structures was a timely one. Smart materials and structures are those equipped with sensors and actuators to achieve their designed performance in a changing environment. They have complex structural properties and mechanical responses. Many engineering problems, such as interface and edge phenomena, mechanical and electro-magnetic interaction/coupling, and sensing, actuating and control techniques, arise in the development of intelligent structures. Due to the multi-disciplinary nature of these problems, all of the classical sciences and technologies, such as applied mathematics, material science, solid and fluid mechanics, control techniques and others, must be assembled and used to solve them. IUTAM well understands the importance of this emerging technology. An IUTAM symposium on Smart Structures and Structronic Systems (Chaired by U.

Empirical Inference - Festschrift in Honor of Vladimir N. Vapnik (Hardcover, 2013 ed.)
Bernhard Schoelkopf, Zhiyuan Luo, Vladimir Vovk
R3,458 R1,959 Discovery Miles 19 590 Save R1,499 (43%) Ships in 10 - 15 working days

This book honours the outstanding contributions of Vladimir Vapnik, a rare example of a scientist for whom the following statements hold true simultaneously: his work led to the inception of a new field of research, the theory of statistical learning and empirical inference; he has lived to see the field blossom; and he is still as active as ever. He started analyzing learning algorithms in the 1960s and he invented the first version of the generalized portrait algorithm. He later developed one of the most successful methods in machine learning, the support vector machine (SVM) - more than just an algorithm, this was a new approach to learning problems, pioneering the use of functional analysis and convex optimization in machine learning. Part I of this book contains three chapters describing and witnessing some of Vladimir Vapnik's contributions to science. In the first chapter, Leon Bottou discusses the seminal paper published in 1968 by Vapnik and Chervonenkis that laid the foundations of statistical learning theory, and the second chapter is an English-language translation of that original paper. In the third chapter, Alexey Chervonenkis presents a first-hand account of the early history of SVMs and valuable insights into the first steps in the development of the SVM in the framework of the generalised portrait method. The remaining chapters, by leading scientists in domains such as statistics, theoretical computer science, and mathematics, address substantial topics in the theory and practice of statistical learning theory, including SVMs and other kernel-based methods, boosting, PAC-Bayesian theory, online and transductive learning, loss functions, learnable function classes, notions of complexity for function classes, multitask learning, and hypothesis selection. These contributions include historical and context notes, short surveys, and comments on future research directions.
This book will be of interest to researchers, engineers, and graduate students engaged with all aspects of statistical learning.

Privacy-Preserving Machine Learning for Speech Processing (Hardcover, 2013 ed.): Manas A. Pathak Privacy-Preserving Machine Learning for Speech Processing (Hardcover, 2013 ed.)
Manas A. Pathak
R3,236 Discovery Miles 32 360 Ships in 18 - 22 working days

This thesis discusses the privacy issues in speech-based applications such as biometric authentication, surveillance, and external speech processing services. Author Manas A. Pathak presents solutions for privacy-preserving speech processing applications such as speaker verification, speaker identification, and speech recognition. The author also introduces some of the relevant tools from cryptography and machine learning, along with current techniques for improving the efficiency and scalability of the presented solutions. Experiments with prototype implementations of the solutions, measuring execution time and accuracy on standardized speech datasets, are also included in the text. Using the proposed framework, it may now be possible for a surveillance agency to listen for a known terrorist without being able to hear conversations from non-targeted, innocent civilians.

Foundations of Knowledge Acquisition - Machine Learning (Hardcover, 1993 ed.): Alan L. Meyrowitz, Susan Chipman Foundations of Knowledge Acquisition - Machine Learning (Hardcover, 1993 ed.)
Alan L. Meyrowitz, Susan Chipman
R4,197 Discovery Miles 41 970 Ships in 18 - 22 working days

One of the most intriguing questions about the new computer technology that has appeared over the past few decades is whether we humans will ever be able to make computers learn. As is painfully obvious to even the most casual computer user, most current computers do not. Yet if we could devise learning techniques that enable computers to routinely improve their performance through experience, the impact would be enormous. The result would be an explosion of new computer applications that would suddenly become economically feasible (e.g., personalized computer assistants that automatically tune themselves to the needs of individual users), and a dramatic improvement in the quality of current computer applications (e.g., imagine an airline scheduling program that improves its scheduling method based on analyzing past delays). And while the potential economic impact of successful learning methods is sufficient reason to invest in research into machine learning, there is a second significant reason: studying machine learning helps us understand our own human learning abilities and disabilities, leading to the possibility of improved methods in education. While many open questions remain about the methods by which machines and humans might learn, significant progress has been made.

Foundations of Knowledge Acquisition - Cognitive Models of Complex Learning (Hardcover, 1993 ed.): Susan Chipman, Alan L.... Foundations of Knowledge Acquisition - Cognitive Models of Complex Learning (Hardcover, 1993 ed.)
Susan Chipman, Alan L. Meyrowitz
R4,199 Discovery Miles 41 990 Ships in 18 - 22 working days

One of the most intriguing questions about the new computer technology that has appeared over the past few decades is whether we humans will ever be able to make computers learn. As is painfully obvious to even the most casual computer user, most current computers do not. Yet if we could devise learning techniques that enable computers to routinely improve their performance through experience, the impact would be enormous. The result would be an explosion of new computer applications that would suddenly become economically feasible (e.g., personalized computer assistants that automatically tune themselves to the needs of individual users), and a dramatic improvement in the quality of current computer applications (e.g., imagine an airline scheduling program that improves its scheduling method based on analyzing past delays). And while the potential economic impact of successful learning methods is sufficient reason to invest in research into machine learning, there is a second significant reason: studying machine learning helps us understand our own human learning abilities and disabilities, leading to the possibility of improved methods in education. While many open questions remain about the methods by which machines and humans might learn, significant progress has been made.

Let's Ask AI - A Non-Technical Modern Approach to AI and Philosophy (Hardcover): Ingrid Seabra, Pedro Seabra, Angela Chan Let's Ask AI - A Non-Technical Modern Approach to AI and Philosophy (Hardcover)
Ingrid Seabra, Pedro Seabra, Angela Chan
R709 Discovery Miles 7 090 Ships in 10 - 15 working days
Multistrategy Learning - A Special Issue of MACHINE LEARNING (Hardcover, Reprinted from MACHINE LEARNING, 11:2-3, 1993):... Multistrategy Learning - A Special Issue of MACHINE LEARNING (Hardcover, Reprinted from MACHINE LEARNING, 11:2-3, 1993)
Ryszard S. Michalski
R5,137 Discovery Miles 51 370 Ships in 18 - 22 working days

Most machine learning research has been concerned with the development of systems that implement one type of inference within a single representational paradigm. Such systems, which can be called monostrategy learning systems, include those for empirical induction of decision trees or rules, explanation-based generalization, neural net learning from examples, genetic algorithm-based learning, and others. Monostrategy learning systems can be very effective and useful if the learning problems to which they are applied are sufficiently narrowly defined. Many real-world applications, however, pose learning problems that go beyond the capability of monostrategy learning methods. In view of this, recent years have witnessed a growing interest in developing multistrategy systems, which integrate two or more inference types and/or paradigms within one learning system. Such multistrategy systems take advantage of the complementarity of different inference types or representational mechanisms. Therefore, they have the potential to be more versatile and more powerful than monostrategy systems. On the other hand, due to their greater complexity, their development is significantly more difficult and represents a great new challenge to the machine learning community. Multistrategy Learning contains contributions characteristic of the current research in this area.

Algorithmic Learning in a Random World (Hardcover, 2005 ed.): Vladimir Vovk, Alex Gammerman, Glenn Shafer Algorithmic Learning in a Random World (Hardcover, 2005 ed.)
Vladimir Vovk, Alex Gammerman, Glenn Shafer
R4,702 Discovery Miles 47 020 Ships in 10 - 15 working days

Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's algorithmic notion of randomness. Based on these approximations, a new set of machine learning algorithms has been developed that can be used to make predictions and to estimate their confidence and credibility in high-dimensional spaces, under the usual assumption that the data are independent and identically distributed (the assumption of randomness). Another aim of this unique monograph is to outline some limits of prediction: the approach based on the algorithmic theory of randomness allows for proofs of the impossibility of prediction in certain situations. The book describes how several important machine learning problems, such as density estimation in high-dimensional spaces, cannot be solved if the only assumption is randomness.

Data Science, Analytics and Machine Learning with R (Paperback): Luiz Favero, Patricia Belfiore, Rafael De Freitas Souza Data Science, Analytics and Machine Learning with R (Paperback)
Luiz Favero, Patricia Belfiore, Rafael De Freitas Souza
R3,003 Discovery Miles 30 030 Ships in 10 - 15 working days

Data Science, Analytics and Machine Learning with R explains the principles of data mining and machine learning techniques and accentuates the importance of applied and multivariate modeling. The book emphasizes the fundamentals of each technique, with step-by-step codes and real-world examples with data from areas such as medicine and health, biology, engineering, technology and related sciences. Examples use the most recent R language syntax, with recognized robust, widespread and current packages. Code scripts are exhaustively commented, making it clear to readers what happens in each command. For data collection, readers are instructed how to build their own robots from the very beginning. In addition, an entire chapter focuses on the concept of spatial analysis, allowing readers to build their own maps through geo-referenced data (such as in epidemiologic research) and some basic statistical techniques. Other chapters cover ensemble and uplift modeling and GLMM (Generalized Linear Mixed Models) estimations, both linear and nonlinear.
