Showing 1 - 9 of 9 matches in All Departments
Density estimation has evolved enormously since the days of bar plots and histograms, but researchers and users are still struggling with the problem of selecting bin widths. This text explores a new paradigm for the data-based or automatic selection of the free parameters of density estimates in general so that the expected error is within a given constant multiple of the best possible error. The paradigm can be used in nearly all density estimates and for most model selection problems, both parametric and nonparametric. It is the first book on this topic. The text is intended for first-year graduate students in statistics and learning theory, and offers a host of opportunities for further research and thesis topics. Each chapter corresponds roughly to one lecture, and is supplemented with many classroom exercises. A one-year course in probability theory at the level of Feller's Volume 1 should be more than adequate preparation. Gábor Lugosi is Professor at Universitat Pompeu Fabra in Barcelona, and Luc Devroye is Professor at McGill University in Montreal. In 1996, the authors, together with László Györfi, published the successful text A Probabilistic Theory of Pattern Recognition with Springer-Verlag. Both authors have made many contributions in the area of nonparametric estimation.
A self-contained and coherent account of probabilistic techniques, covering: distance measures, kernel rules, nearest neighbour rules, Vapnik-Chervonenkis theory, parametric classification, and feature extraction. Each chapter concludes with problems and exercises to further the reader's understanding. Both research workers and graduate students will benefit from this wide-ranging and up-to-date account of a fast-moving field.
Concentration inequalities for functions of independent random variables is an area of probability theory that has witnessed a great revolution in the last few decades, and has applications in a wide variety of areas such as machine learning, statistics, discrete mathematics, and high-dimensional geometry. Roughly speaking, if a function of many independent random variables does not depend too much on any of the variables then it is concentrated in the sense that with high probability, it is close to its expected value. This book offers a host of inequalities to illustrate this rich theory in an accessible way by covering the key developments and applications in the field.
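As an illustration of the kind of statement meant here, consider the bounded differences (McDiarmid) inequality: if changing the $i$-th argument of $f$ changes its value by at most $c_i$, then for independent random variables $X_1, \dots, X_n$ and every $t > 0$,

$$\mathbb{P}\big( |f(X_1, \dots, X_n) - \mathbb{E} f(X_1, \dots, X_n)| \ge t \big) \le 2 \exp\!\left( \frac{-2t^2}{\sum_{i=1}^{n} c_i^2} \right).$$

When no single variable can move $f$ by much (all $c_i$ small), the right-hand side is tiny, which is exactly the concentration around the expected value described above.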
Pattern recognition presents one of the most significant challenges for scientists and engineers, and many different approaches have been proposed. The aim of this book is to provide a self-contained account of the probabilistic analysis of these approaches. The book includes a discussion of distance measures, nonparametric methods based on kernels or nearest neighbors, Vapnik-Chervonenkis theory, epsilon entropy, parametric classification, error estimation, tree classifiers, and neural networks. Wherever possible, distribution-free properties and inequalities are derived. A substantial portion of the results or the analysis is new. Over 430 problems and exercises complement the material.
Density estimation has evolved enormously since the days of bar plots and histograms, but researchers and users are still struggling with the problem of selecting bin widths. This book is the first to explore a new paradigm for the data-based or automatic selection of the free parameters of density estimates in general so that the expected error is within a given constant multiple of the best possible error. The paradigm can be used in nearly all density estimates and for most model selection problems, both parametric and nonparametric.
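Schematically, and with illustrative notation rather than the book's exact statement, the paradigm selects a data-driven parameter $\hat\theta$ (a bandwidth, say) satisfying an oracle inequality of the form

$$\mathbb{E} \, \| f_{n,\hat\theta} - f \|_1 \;\le\; C \, \inf_{\theta} \mathbb{E} \, \| f_{n,\theta} - f \|_1 \;+\; r_n,$$

where $f_{n,\theta}$ is the estimate using parameter $\theta$, $C$ is a universal constant, and $r_n$ is a remainder term that vanishes as the sample size $n$ grows: the expected error is within a constant multiple of the best achievable over the whole family.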
This book constitutes the refereed proceedings of the 20th International Conference on Algorithmic Learning Theory, ALT 2009, held in Porto, Portugal, in October 2009, co-located with the 12th International Conference on Discovery Science, DS 2009. The 26 revised full papers presented together with the abstracts of 5 invited talks were carefully reviewed and selected from 60 submissions. The papers are divided into topical sections on online learning, learning graphs, active learning and query learning, statistical learning, inductive inference, and semi-supervised and unsupervised learning. The volume also contains abstracts of the invited talks: Sanjoy Dasgupta, The Two Faces of Active Learning; Hector Geffner, Inference and Learning in Planning; Jiawei Han, Mining Heterogeneous Information Networks by Exploring the Power of Links; Yishay Mansour, Learning and Domain Adaptation; Fernando C.N. Pereira, Learning on the Web.
This book constitutes the refereed proceedings of the 19th Annual Conference on Learning Theory, COLT 2006, held in Pittsburgh, Pennsylvania, USA, in June 2006. The book presents 43 revised full papers together with 2 articles on open problems and 3 invited lectures. The papers cover a wide range of topics including clustering, un- and semi-supervised learning, statistical learning theory, regularized learning and kernel methods, query learning and teaching, inductive inference, and more.
This important new text and reference for researchers and students in machine learning, game theory, statistics and information theory offers the first comprehensive treatment of the problem of predicting individual sequences. Unlike standard statistical approaches to forecasting, prediction of individual sequences does not impose any probabilistic assumption on the data-generating mechanism. Yet, prediction algorithms can be constructed that work well for all possible sequences, in the sense that their performance is always nearly as good as the best forecasting strategy in a given reference class. The central theme is the model of prediction using expert advice, a general framework within which many related problems can be cast and discussed. Repeated game playing, adaptive data compression, sequential investment in the stock market, sequential pattern analysis, and several other problems are viewed as instances of the experts' framework and analyzed from a common nonstochastic standpoint that often reveals new and intriguing connections. Old and new forecasting methods are described in a mathematically precise way in order to characterize their theoretical limitations and possibilities.
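As a concrete illustration of the experts framework, the following is a minimal Python sketch (with illustrative names, not code from the book) of the exponentially weighted average forecaster for losses bounded in [0, 1]; the learning rate below is the standard horizon-dependent tuning.

```python
import numpy as np

def ewa_forecast(expert_preds, outcomes):
    """Minimal exponentially weighted average forecaster (illustrative).

    expert_preds: (n_rounds, n_experts) array, each entry in [0, 1].
    outcomes:     (n_rounds,) array of observed outcomes in [0, 1].
    Uses the absolute loss; any convex loss bounded in [0, 1] works the same way.
    """
    n_rounds, n_experts = expert_preds.shape
    eta = np.sqrt(8.0 * np.log(n_experts) / n_rounds)  # standard tuning for a known horizon
    cum_loss = np.zeros(n_experts)   # cumulative loss of each expert so far
    forecasts = np.empty(n_rounds)
    for t in range(n_rounds):
        # Weight each expert by exp(-eta * cumulative loss);
        # subtracting the minimum avoids numerical underflow.
        w = np.exp(-eta * (cum_loss - cum_loss.min()))
        forecasts[t] = w @ expert_preds[t] / w.sum()
        cum_loss += np.abs(expert_preds[t] - outcomes[t])
    return forecasts
```

With this tuning, the forecaster's cumulative loss exceeds that of the best expert by at most $\sqrt{(n/2)\ln N}$ over $n$ rounds with $N$ experts, for any sequence of outcomes whatsoever, which is the distribution-free flavor of guarantee developed in the book.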
Concentration inequalities for functions of independent random variables is an area of probability theory that has witnessed a great revolution in the last few decades, and has applications in a wide variety of areas such as machine learning, statistics, discrete mathematics, and high-dimensional geometry. Roughly speaking, if a function of many independent random variables does not depend too much on any of the variables then it is concentrated in the sense that with high probability, it is close to its expected value. This book offers a host of inequalities to illustrate this rich theory in an accessible way by covering the key developments and applications in the field. The authors describe the interplay between the probabilistic structure (independence) and a variety of tools ranging from functional inequalities to transportation arguments to information theory. Applications to the study of empirical processes, random projections, random matrix theory, and threshold phenomena are also presented. A self-contained introduction to concentration inequalities, it includes a survey of concentration of sums of independent random variables, variance bounds, the entropy method, and the transportation method. Deep connections with isoperimetric problems are revealed whilst special attention is paid to applications to the supremum of empirical processes. Written by leading experts in the field and containing extensive exercise sections this book will be an invaluable resource for researchers and graduate students in mathematics, theoretical computer science, and engineering.