This book presents Statistical Learning Theory in a detailed and easy-to-understand way, using practical examples, algorithms and source code. It can be used as a textbook in graduate or undergraduate courses, by self-learners, or as a reference for the main theoretical concepts of Machine Learning. Fundamental concepts of Linear Algebra and Optimization applied to Machine Learning are provided, as well as source code in R, making the book as self-contained as possible. It starts with an introduction to Machine Learning concepts and algorithms such as the Perceptron, the Multilayer Perceptron and Distance-Weighted Nearest Neighbors, illustrated with examples, in order to provide the foundation the reader needs to understand the Bias-Variance Dilemma, the central point of Statistical Learning Theory. Afterwards, we introduce all assumptions and formalize the Statistical Learning Theory, allowing the practical study of different classification algorithms. Then, we proceed with concentration inequalities until arriving at the Generalization and Large-Margin bounds, which provide the main motivations for Support Vector Machines. From there, we introduce all the optimization concepts necessary for the implementation of Support Vector Machines. To provide a next stage of development, the book finishes with a discussion of SVM kernels as a way to study data spaces and improve classification results.
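As a flavour of the introductory algorithms the book starts from, here is a minimal Perceptron training loop. This is an illustrative sketch only: the book's examples are written in R, and the toy data below are hypothetical rather than taken from the text.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Train a perceptron on inputs X (n x d) with labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update weights only when the current point is misclassified.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Hypothetical, linearly separable toy data (not from the book).
X = np.array([[0.0, 0.0], [0.2, 0.3], [1.0, 1.0], [0.9, 0.8]])
y = np.array([-1, -1, 1, 1])
w, b = train_perceptron(X, y)
print(w, b)  # parameters of a separating hyperplane w.x + b = 0
```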
This book examines applications of, and methods for, incorporating stochastic parameter variations into the optimization process in order to decrease the expense of corrective measures. Basic types of deterministic substitute problems occurring most often in practice involve i) minimization of the expected primary costs subject to expected recourse cost constraints (reliability constraints) and the remaining deterministic constraints, e.g. box constraints, as well as ii) minimization of the expected total costs (costs of construction, design, recourse costs, etc.) subject to the remaining deterministic constraints. After an introduction to the theory of dynamic control systems with random parameters, the major control laws are described - open-loop control, closed-loop (feedback) control, and open-loop feedback control, the latter used for the iterative construction of feedback controls. For the approximate solution of optimization and control problems with random parameters and expected cost/loss-type objective and constraint functions, Taylor expansion procedures and homotopy methods are considered, and examples and applications to the stochastic optimization of regulators are given. Moreover, for reliability-based analysis and optimal design problems, corresponding optimization-based limit state functions are constructed. Because of the complexity of concrete optimization/control problems and their lack of the mathematical regularity required by Mathematical Programming (MP) techniques, other optimization techniques, like random search methods (RSM), have become increasingly important. Basic results on the convergence and convergence rates of random search methods are presented. Moreover, to improve the - sometimes very low - convergence rate of RSM, search methods based on optimal stochastic decision processes are presented: the random search procedure is embedded into a stochastic decision process for an optimal control of the probability distributions of the search variates (mutation random variables).
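A deterministic substitute problem of type i) can be sketched in schematic notation (not necessarily the book's own symbols) as:

```latex
\[
  \min_{a \in D} \; \mathbb{E}\bigl[G_0(a,\omega)\bigr]
  \quad \text{subject to} \quad
  \mathbb{E}\bigl[G_i(a,\omega)\bigr] \le 0, \qquad i = 1,\dots,m,
\]
```

where $a$ denotes the design or control variables, $\omega$ the random parameters, $G_0$ the primary cost function, $G_i$ the recourse-cost (reliability) constraint functions, and $D$ the remaining deterministic constraints such as box constraints.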
There has been extensive research in the past twenty years devoted to a better understanding of the stable and other closely related infinitely divisible models. The late Professor Stamatis Cambanis, a distinguished educator and researcher, played a special leadership role in the development of these fields from the early seventies until his untimely death in April 1995. This commemorative volume honoring Stamatis Cambanis consists of a collection of research articles devoted to reviewing the state of the art in rapidly developing research areas in Stochastic Processes and to exploring new directions of research. The volume is a tribute to the life and work of Stamatis by his students, friends and colleagues, whose personal and professional lives he deeply touched through his generous insights and dedication to his profession.
This book focuses on recent advances, approaches, theories and applications related to mixture models. In particular, it presents recent unsupervised and semi-supervised frameworks that use mixture models as their main tool. The chapters consider mixture models together with several interesting and challenging problems they raise, such as parameter estimation, model selection and feature selection. The goal of this book is to summarize the recent advances and modern approaches related to these problems. Each contributor presents novel research, a practical study, novel applications based on mixture models, or a survey of the literature. The book reports advances on classic problems in mixture modeling such as parameter estimation, model selection and feature selection; presents theoretical and practical developments in mixture-based modeling and their importance in different applications; and discusses perspectives and challenging future work related to mixture modeling.
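For readers new to the area, the finite mixture models these chapters build on have the standard form (generic notation, not quoted from the book):

```latex
\[
  p(x \mid \Theta) \;=\; \sum_{k=1}^{K} \pi_k \, f_k(x \mid \theta_k),
  \qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1,
\]
```

where parameter estimation concerns the weights $\pi_k$ and component parameters $\theta_k$, and model selection typically includes the choice of the number of components $K$.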
This book provides a coherent framework for understanding shrinkage estimation in statistics. The term refers to modifying a classical estimator by moving it closer to a target which could be known a priori or arise from a model. The goal is to construct estimators with improved statistical properties. The book focuses primarily on point and loss estimation of the mean vector of multivariate normal and spherically symmetric distributions. Chapter 1 reviews the statistical and decision theoretic terminology and results that will be used throughout the book. Chapter 2 is concerned with estimating the mean vector of a multivariate normal distribution under quadratic loss from a frequentist perspective. In Chapter 3 the authors take a Bayesian view of shrinkage estimation in the normal setting. Chapter 4 introduces the general classes of spherically and elliptically symmetric distributions. Point and loss estimation for these broad classes are studied in subsequent chapters. In particular, Chapter 5 extends many of the results from Chapters 2 and 3 to spherically and elliptically symmetric distributions. Chapter 6 considers the general linear model with spherically symmetric error distributions when a residual vector is available. Chapter 7 then considers the problem of estimating a location vector which is constrained to lie in a convex set. Much of the chapter is devoted to one of two types of constraint sets, balls and polyhedral cones. In Chapter 8 the authors focus on loss estimation and data-dependent evidence reports. Appendices cover a number of technical topics including weakly differentiable functions; examples where Stein's identity doesn't hold; Stein's lemma and Stokes' theorem for smooth boundaries; harmonic, superharmonic and subharmonic functions; and modified Bessel functions.
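The canonical textbook illustration of such shrinkage, given here as a standard example rather than a quotation from the book, is the James-Stein estimator of a multivariate normal mean under quadratic loss:

```latex
\[
  X \sim N_p(\theta, I_p), \; p \ge 3:
  \qquad
  \hat{\theta}_{\mathrm{JS}}(X) \;=\; \Bigl(1 - \frac{p-2}{\lVert X \rVert^{2}}\Bigr) X,
\]
```

which moves the classical estimator $X$ toward the origin and has uniformly smaller quadratic risk when $p \ge 3$.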
With the diversification of Internet services and the increase in mobile users, efficient management of network resources has become an extremely important issue in the field of wireless communication networks (WCNs). Adaptive resource management is an effective tool for improving the economic efficiency of WCN systems as well as network design and construction, especially in view of the surge in mobile device demands. This book presents modelling methods based on queueing theory and Markov processes for a wide variety of WCN systems, as well as precise and approximate analytical solution methods for the numerical evaluation of system performance. It is the first book to provide an overview of the numerical analyses that can be obtained by applying queueing theory, traffic theory and other analytical methods to various WCN systems. It also discusses recent advances in the resource management of WCNs, such as broadband wireless access networks, cognitive radio networks, and green cloud computing. It assumes a basic understanding of computer networks and queueing theory, and familiarity with stochastic processes is also recommended. The analysis methods presented in this book are useful for first-year graduate or senior computer science and communication engineering students. Providing information on network design and management, performance evaluation, queueing theory, game theory, intelligent optimization, and operations research for researchers and engineers, the book is also a valuable reference resource for students, analysts, managers and anyone in the industry interested in WCN system modelling, performance analysis and numerical evaluation.
This book discusses risk management, product pricing, capital management and Return on Equity comprehensively and seamlessly. Strategic planning, including the required quantitative methods, is an essential part of bank management and control. A thorough introduction to the advanced methods of risk management for Credit Risk, Counterparty Credit Risk, Market Risk, Operational Risk and Risk Aggregation is provided. In addition, directly applicable concepts and data such as macroeconomic scenarios for strategic planning and stress testing as well as detailed scenarios for Operational Risk and advanced concepts for Credit Risk are presented in straightforward language. The book highlights the implications and chances of the Basel III and Basel IV implementations (2022 onwards), especially in terms of capital management and Return on Equity. A wealth of essential background information from practice, international observations and comparisons, along with numerous illustrative examples, make this book a useful resource for established and future professionals in bank management, risk management, capital management, controlling and accounting.
Developed for the new International A Level specification, these new resources are specifically designed for international students, with a strong focus on progression, recognition and transferable skills, allowing learning in a local context to a global standard. Recognised by universities worldwide and fully comparable to UK reformed GCE A levels, they support a modular approach, in line with the specification. Appropriate international content puts learning in a real-world context, to a global standard, making it engaging and relevant for all learners. The materials have been reviewed by a language specialist to ensure they are written in a clear and accessible style. The embedded transferable skills, needed for progression to higher education and employment, are signposted so students understand what skills they are developing and can therefore go on to use these skills more effectively in the future. Exam practice provides opportunities to assess understanding and progress, so students can achieve the best results they can.
This book presents some of the most recent and advanced statistical methods used to analyse environmental and climate data, and addresses the spatial and spatio-temporal dimensions of the phenomena studied, the multivariate complexity of the data, and the necessity of considering uncertainty sources and propagation. The topics covered include: detecting disease clusters, analysing harvest data, change point detection in ground-level ozone concentration, modelling atmospheric aerosol profiles, predicting wind speed, precipitation prediction and analysing spatial cylindrical data. The volume presents revised versions of selected contributions submitted at the joint TIES-GRASPA 2017 Conference on Climate and Environment, which was held at the University of Bergamo, Italy. As it is chiefly intended for researchers working at the forefront of statistical research in environmental applications, readers should be familiar with the basic methods for analysing spatial and spatio-temporal data.
This book covers methods of Mathematical Morphology to model and simulate random sets and functions (scalar and multivariate). The introduced models concern many physical situations in heterogeneous media, where a probabilistic approach is required, like fracture statistics of materials, scaling up of permeability in porous media, electron microscopy images (including multispectral images), rough surfaces, multi-component composites, biological tissues, textures for image coding and synthesis. The common feature of these random structures is their domain of definition in n dimensions, requiring more general models than standard Stochastic Processes. The main topics of the book cover an introduction to the theory of random sets, random space tessellations, Boolean random sets and functions, space-time random sets and functions (Dead Leaves, Sequential Alternate models, Reaction-Diffusion), prediction of effective properties of random media, and probabilistic fracture theories.
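To give a concrete feel for one of the models covered, the sketch below simulates a simple two-dimensional Boolean random set: discs of fixed radius centred at the points of a Poisson process. It is an assumption-laden illustration (fixed disc grains, unit window, arbitrarily chosen parameters), not code from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

def boolean_model(intensity, radius, window=1.0, grid=256):
    """Boolean image of a disc Boolean model on [0, window]^2."""
    # The number of germs in the window is Poisson distributed.
    n = rng.poisson(intensity * window * window)
    centres = rng.uniform(0.0, window, size=(n, 2))
    xs = np.linspace(0.0, window, grid)
    X, Y = np.meshgrid(xs, xs)
    covered = np.zeros((grid, grid), dtype=bool)
    for cx, cy in centres:
        covered |= (X - cx) ** 2 + (Y - cy) ** 2 <= radius ** 2
    return covered

img = boolean_model(intensity=50, radius=0.05)
# Edge effects aside, the area fraction is close to 1 - exp(-lambda * pi * r^2).
print(img.mean(), 1 - np.exp(-50 * np.pi * 0.05 ** 2))
```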
This book explores the concept of complementation in the adjectival domain of English grammar. Alternation between non-finite complements, especially to infinitives and gerundial complements, has been investigated intensively on the basis of large corpora in the last few years. With very few exceptions, however, such work has hitherto been based on univariate analysis methods. Using multivariate analysis, the authors present methodologically innovative case studies examining a large array of explanatory factors potentially impacting complement choice in cases of alternation. This approach yields more precise information on the impact of each factor on complement choice as well as on interactions between different explanatory factors. The book thus presents a methodologically new perspective on the study of the system of non-finite complementation in recent English and variation within that system, and will be relevant to academics and students with an interest in English grammar, predicate complementation, and statistical approaches to language.
This richly illustrated book provides an easy-to-read introduction to the challenges of organizing and integrating modern data worlds, explaining the contribution of public statistics and the ISO standard SDMX (Statistical Data and Metadata Exchange). As such, it is a must for data experts as well as those aspiring to become one. Today, exponentially growing data worlds are increasingly determining our professional and private lives. The rapid increase in the amount of globally available data, fueled by search engines and social networks but also by new technical possibilities such as Big Data, offers great opportunities. But whatever the undertaking - driving the blockchain revolution or making smartphones even smarter - success will be determined by how well it is possible to integrate, i.e. to collect, link and evaluate, the required data. One crucial factor in this is the introduction of a cross-domain order system in combination with a standardization of the data structure. Using everyday examples, the authors show how the concepts of statistics provide the basis for the universal and standardized presentation of any kind of information. They also introduce the international statistics standard SDMX, describing the profound changes it has made possible and the related order system for the international statistics community.
This book provides practical applications of doubly classified models by using R syntax to generate the models. It also presents these models in symbolic tables so as to cater to those who are not mathematically inclined, while numerous examples throughout the book illustrate the concepts and their applications. For those who are not aware of this modeling approach, it serves as a good starting point to acquire a basic understanding of doubly classified models. It is also a valuable resource for academics, postgraduate students, undergraduates, data analysts and researchers who are interested in examining square contingency tables.
Contains fully worked-out solutions to all of the odd-numbered exercises in the text, giving you a way to check your answers and ensure that you took the correct steps to arrive at an answer.
Statistical learning and analysis techniques have become extremely important today, given the tremendous growth in the size of heterogeneous data collections and the ability to process them even from physically distant locations. Recent advances made in the field of machine learning provide a strong framework for robust learning from diverse corpora and continue to impact a variety of research problems across multiple scientific disciplines. The aim of this handbook is to familiarize beginners as well as experts with some of the recent techniques in this field. The Handbook is divided into two sections, Theory and Applications, covering machine learning, data analytics, biometrics, document recognition and security, with an emphasis on applications-oriented techniques.
The book focuses on system dependability modeling and calculation, considering the impact of s-dependency and uncertainty. The approaches best suited to practical system dependability modeling and calculation - (1) the minimal cut approach, (2) the Markov process approach, and (3) the Markov minimal cut approach, a combination of (1) and (2) - are described in detail and applied to several examples. Boolean logic, used stringently throughout the development of the approaches, is the key to combining them on a common basis. For large and complex systems, efficient approximation approaches, e.g. the probable Markov path approach, have been developed, which can take into account s-dependencies between components of complex system structures. A comprehensive analysis of aleatory uncertainty (due to randomness) and epistemic uncertainty (due to lack of knowledge), and their combination, developed on the basis of basic reliability indices and evaluated with the Monte Carlo simulation method, has been carried out. The uncertainty impact on system dependability is investigated and discussed using several examples with different levels of difficulty. The applications cover a wide variety of large and complex (real-world) systems. Current state-of-the-art definitions of terms from the IEC 60050-192:2015 standard, as well as the dependability indices, are used uniformly in all six chapters of the book.
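As a point of reference for the minimal cut approach, the sketch below evaluates the classical minimal-cut upper bound on system unavailability for s-independent components. It is a simplified illustration under an independence assumption that the book's treatment of s-dependencies goes beyond, and the example system and numbers are hypothetical.

```python
from math import prod

def cut_set_unavailability(cut_sets, q):
    """Minimal-cut upper bound: Q_sys <= 1 - prod_cuts(1 - prod_i q_i),
    assuming s-independent components with unavailabilities q."""
    return 1.0 - prod(1.0 - prod(q[c] for c in cut) for cut in cut_sets)

# Hypothetical 2-out-of-3 system: the minimal cut sets are the component pairs.
q = {"A": 0.01, "B": 0.02, "C": 0.03}
print(cut_set_unavailability([["A", "B"], ["A", "C"], ["B", "C"]], q))
```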
This book presents a unique collection of contributions on modern topics in statistics and econometrics, written by leading experts in the respective disciplines and their intersections. It addresses nonparametric statistics and econometrics, quantiles and expectiles, and advanced methods for complex data, including spatial and compositional data, as well as tools for empirical studies in economics and the social sciences. The book was written in honor of Christine Thomas-Agnan on the occasion of her 65th birthday. Given its scope, it will appeal to researchers and PhD students in statistics and econometrics alike who are interested in the latest developments in their field.
This book develops the theory of productivity measurement using the empirical index number approach. The theory uses multiplicative indices and additive indicators as measurement tools, instead of relying on the usual neo-classical assumptions, such as the existence of a production function characterized by constant returns to scale, optimizing behavior of the economic agents, and perfect foresight. The theory can be applied to all the common levels of aggregation (micro, meso, and macro), and half of the book is devoted to accounting for the links existing between the various levels. Basic insights from National Accounts are thereby used. The final chapter is devoted to the decomposition of productivity change into the contributions of efficiency change, technological change, scale effects, and input or output mix effects. Applications on real-life data demonstrate the empirical feasibility of the theory. The book is directed to a variety of overlapping audiences: statisticians involved in measuring productivity change; economists interested in growth accounting; researchers relating macro-economic productivity change to its industrial sources; enterprise micro-data researchers; and business analysts interested in performance measurement.
Statistics in Practice: a new series of practical books outlining the use of statistical techniques in a wide range of application areas.
This book describes computational problems related to kernel density estimation (KDE), one of the most important and widely used data smoothing techniques. A very detailed description of novel FFT-based algorithms for both KDE computations and bandwidth selection is presented. The theory of KDE appears to have matured and is now well developed and understood; however, not much progress has been observed in terms of performance improvements, and this book is an attempt to remedy that. The book primarily addresses researchers and advanced graduate or postgraduate students who are interested in KDE and its computational aspects. It contains both background and much more sophisticated material, so more experienced researchers in the KDE area may also find it interesting. The presented material is richly illustrated with many numerical examples using both artificial and real datasets. A number of practical applications related to KDE are also presented.
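For context, the estimator whose computation the book accelerates is the ordinary kernel density estimate. The sketch below evaluates the direct O(nm) Gaussian-kernel sum; it is only the textbook definition with an arbitrarily chosen bandwidth, not the FFT-based algorithms or bandwidth selectors the book develops.

```python
import numpy as np

def gaussian_kde(data, grid, bandwidth):
    """Evaluate a 1-D Gaussian kernel density estimate on `grid`."""
    u = (grid[:, None] - data[None, :]) / bandwidth
    kernel = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    # Average the kernels over the data points and rescale by the bandwidth.
    return kernel.mean(axis=1) / bandwidth

data = np.random.default_rng(1).normal(size=500)   # artificial sample
grid = np.linspace(-4, 4, 401)
density = gaussian_kde(data, grid, bandwidth=0.3)  # bandwidth chosen by hand
print(np.trapz(density, grid))  # should integrate to roughly 1
```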
In this book, an integrated introduction to statistical inference is provided from a frequentist likelihood-based viewpoint. Classical results are presented together with recent developments, largely built upon ideas due to R.A. Fisher. The term "neo-Fisherian" highlights this. After a unified review of background material (statistical models, likelihood, data and model reduction, first-order asymptotics) and inference in the presence of nuisance parameters (including pseudo-likelihoods), a self-contained introduction is given to exponential families, exponential dispersion models, generalized linear models, and group families. Finally, basic results of higher-order asymptotics are introduced (index notation, asymptotic expansions for statistics and distributions, and major applications to likelihood inference). The emphasis is more on general concepts and methods than on regularity conditions. Many examples are given for specific statistical models. Each chapter is supplemented with problems and bibliographic notes. This volume can serve as a textbook in intermediate-level undergraduate and postgraduate courses in statistical inference.
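As a point of reference for the exponential dispersion models and generalized linear models mentioned above, their standard density form (in generic notation, not necessarily the book's) is:

```latex
\[
  f(y;\theta,\phi) \;=\; \exp\!\left\{ \frac{y\theta - b(\theta)}{\phi} + c(y,\phi) \right\},
  \qquad \mathbb{E}(Y) = b'(\theta), \quad \operatorname{Var}(Y) = \phi\, b''(\theta),
\]
```

with the Normal, Poisson and binomial families as familiar special cases.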
This book presents a broad range of statistical techniques to address emerging needs in the field of repeated measures. It also provides a comprehensive overview of extensions of generalized linear models for the bivariate exponential family of distributions, which represent a new development in analysing repeated measures data. The demand for statistical models for correlated outcomes has grown rapidly in recent years, mainly due to the presence of two types of underlying associations: associations between outcomes, and associations between explanatory variables and outcomes. The book systematically addresses key problems arising in the modelling of repeated measures data, bearing in mind those factors that play a major role in estimating the underlying relationships between covariates and outcome variables for correlated outcome data. In addition, it presents new approaches to addressing current challenges in the field of repeated measures and models based on conditional and joint probabilities. Markov models of first and higher orders are used for conditional models, in addition to conditional probabilities as a function of covariates. Similarly, joint models are developed using both marginal-conditional probabilities and joint probabilities as functions of covariates. In addition to generalized linear models for bivariate outcomes, the book highlights extended semi-parametric models for continuous failure time data and their applications, in order to include models for a broader range of outcome variables that researchers encounter in various fields. It further discusses the problem of analysing repeated measures data for failure time in the competing risk framework, which is now taking on an increasingly important role in the fields of survival analysis, reliability and actuarial science. Details on how to perform the analyses are included in each chapter and supplemented with newly developed R packages and functions, along with SAS code and macro/IML. The book is a valuable resource for researchers, graduate students and other users of statistical techniques for analysing repeated measures data.