'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to come back, I would never have gone.') - Jules Verne. 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' - Eric T. Bell. 'The series is divergent; therefore we may be able to do something with it.' - O. Heaviside. Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quotes above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
This book is an English translation of the last French edition of Bourbaki's Fonctions d'une Variable Réelle. The first chapter is devoted to derivatives, Taylor expansions, the finite increments theorem, and convex functions. In the second chapter, primitives and integrals (on arbitrary intervals) are studied, as well as their dependence on parameters. Classical functions (exponential, logarithmic, circular and inverse circular) are investigated in the third chapter. The fourth chapter gives a thorough treatment of differential equations (existence and uniqueness properties of solutions, approximate solutions, dependence on parameters) and of systems of linear differential equations. The local study of functions (comparison relations, asymptotic expansions) is treated in chapter V, with an appendix on Hardy fields. The theory of generalized Taylor expansions and the Euler-Maclaurin formula are presented in the sixth chapter, and applied in the last one to the study of the Gamma function on the real line as well as on the complex plane. Although the topics of the book are mainly of an advanced undergraduate level, they are presented in the generality needed for more advanced purposes: functions are allowed to take values in topological vector spaces, asymptotic expansions are treated on a filtered set equipped with a comparison scale, theorems on the dependence on parameters of differential equations are directly applicable to the study of flows of vector fields on differential manifolds, etc.
"Statistical Analysis of Management Data" provides a comprehensive approach to multivariate statistical analyses that are important for researchers in all fields of management, including finance, production, accounting, marketing, strategy, technology, and human resources. This book is especially designed to provide doctoral students with a theoretical knowledge of the concepts underlying the most important multivariate techniques and an overview of actual applications. It offers a clear, succinct exposition of each technique with emphasis on when each technique is appropriate and how to use it. This second edition, fully revised, updated, and expanded, reflects the most current evolution in the methods for data analysis in management and the social sciences. In particular, it places a greater emphasis on measurement models, and includes new chapters and sections on: confirmatory factor analysis, canonical correlation analysis, cluster analysis, analysis of covariance structure, and multi-group confirmatory factor analysis and analysis of covariance structures. Featuring numerous examples, the book may serve as an advanced text or as a resource for applied researchers in industry who want to understand the foundations of the methods and to learn how they can be applied using widely available statistical software.
Data clustering is a highly interdisciplinary field, the goal of which is to divide a set of objects into homogeneous groups such that objects in the same group are similar and objects in different groups are quite distinct. Thousands of theoretical papers and a number of books on data clustering have been published over the past 50 years. However, few books exist to teach people how to implement data clustering algorithms. This book was written for anyone who wants to implement or improve their data clustering algorithms. Using object-oriented design and programming techniques, Data Clustering in C++ exploits the commonalities of all data clustering algorithms to create a flexible set of reusable classes that simplifies the implementation of any data clustering algorithm. Readers can follow the development of the base data clustering classes and several popular data clustering algorithms. Additional topics such as data pre-processing, data visualization, cluster visualization, and cluster interpretation are briefly covered. This book is divided into three parts:
* Data Clustering and C++ Preliminaries: a review of basic concepts of data clustering, the Unified Modeling Language, object-oriented programming in C++, and design patterns
* A C++ Data Clustering Framework: the development of the data clustering base classes
* Data Clustering Algorithms: the implementation of several popular data clustering algorithms
A key to learning a clustering algorithm is to implement and experiment with it. Complete listings of classes, examples, unit test cases, and GNU configuration files are included in the appendices of this book as well as on the book's CD-ROM. The only requirements to compile the code are a modern C++ compiler and the Boost C++ libraries.
The aim of this book is to provide strong theoretical support for understanding and analyzing the behavior of evolutionary algorithms, and to build a bridge between probability, set-oriented numerics, and evolutionary computation. The volume collects contributions presented at the EVOLVE 2011 international workshop, held in Luxembourg, May 25-27, 2011, from invited speakers as well as selected regular submissions. The aim of EVOLVE is to unify the perspectives offered by probability, set-oriented numerics, and evolutionary computation. EVOLVE focuses on challenging aspects that arise at the passage from theory to new paradigms and practice, elaborating on the foundations of evolutionary algorithms and theory-inspired methods merged with cutting-edge techniques that ensure performance guarantees. EVOLVE is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. The chapters present challenging theoretical findings, concrete optimization problems, and new perspectives. By gathering contributions from researchers with different backgrounds, the book is expected to lay the basis for a unified view and vocabulary in which theoretical advances may echo across different domains.
Principal component analysis is central to the study of multivariate data. Although one of the earliest multivariate techniques, it continues to be the subject of much research, ranging from new model-based approaches to algorithmic ideas from neural networks. It is extremely versatile, with applications in many disciplines. The first edition of this book was the first comprehensive text written solely on principal component analysis. The second edition updates and substantially expands the original version, and is once again the definitive text on the subject. It includes core material, current research and a wide range of applications. Its length is nearly double that of the first edition. Researchers in statistics, or in other fields that use principal component analysis, will find that the book gives an authoritative yet accessible account of the subject. It is also a valuable resource for graduate courses in multivariate analysis. The book requires some knowledge of matrix algebra. Ian Jolliffe is Professor of Statistics at the University of Aberdeen. He is author or co-author of over 60 research papers and three other books. His research interests are broad, but aspects of principal component analysis have fascinated him and kept him busy for over 30 years.
From the Foreword: 'The chief aim of this book is to present the reader with an integrated system of methods dealing with geographical statistics ("geostatistics") and their applications. It sums up developments based on the vast experience accumulated by Professor Bachi over several decades of research. Interest in the quantitative locational aspects of geography and in the common ground of geography and statistics has grown rapidly, involving an ever-increasing spectrum of scientific disciplines ... the present volume will fill a genuine need - as a textbook, as a reference work, and as a practical aid for geographers, applied statisticians, demographers, ecologists, regional planners, economists, professional staff of official statistical agencies, and others.' - E. Peritz, G. Nathan, N. Kadmon
Data mining is the process of extracting hidden patterns from data, and it's commonly used in business, bioinformatics, counter-terrorism, and, increasingly, in professional sports. First popularized in Michael Lewis' best-selling Moneyball: The Art of Winning an Unfair Game, it has become an intrinsic part of all professional sports the world over, from baseball to cricket to soccer. While an industry has developed based on statistical analysis services for any given sport, or even for betting behavior analysis on these sports, no research-level book has considered the subject in any detail until now. Sports Data Mining brings together in one place the state of the art as it concerns an international array of sports: baseball, football, basketball, soccer, and greyhound racing are all covered, and the authors (including Hsinchun Chen, one of the most esteemed and well-known experts in data mining in the world) present the latest research, developments, software available, and applications for each sport. They even examine the hidden patterns in gaming and wagering, along with the most common systems for wager analysis.
Today, a major component of any project management effort is the combined use of qualitative and quantitative tools. While publications on qualitative approaches to project management are widely available, few project management books have focused on the quantitative approaches. This book represents the first major project management book with a practical focus on the quantitative approaches to project management. The book organizes quantitative techniques into an integrated framework for project planning, scheduling, and control. Numerous illustrative examples are presented. Topics covered in the book include PERT/CPM/PDM and extensions, mathematical project scheduling, heuristic project scheduling, project economics, statistical data analysis for project planning, computer simulation, assignment and transportation problems, and learning curve analysis. Chapter one gives a brief overview of project management, presenting a general-purpose project management model. Chapter two covers CPM, PERT, and PDM network techniques. Chapter three covers project scheduling subject to resource constraints. Chapter four covers project optimization. Chapter five discusses economic analysis for project planning and control. Chapter six discusses learning curve analysis. Chapter seven covers statistical data analysis for project planning and control. Chapter eight presents techniques for project analysis and selection. Tables and figures are used throughout the book to enhance the effectiveness of the discussions. This book is excellent as a textbook for upper-level undergraduate and graduate courses in Industrial Engineering, Engineering Management, and Business, and as a detailed, comprehensive guide for corporate management.
The evolution of systems in random media is a broad and fruitful field for the applications of different mathematical methods and theories. This evolution can be characterized by a semigroup property. In the abstract form, this property is given by a semigroup of operators in a normed vector (Banach) space. In the practically boundless variety of mathematical models of the evolutionary systems, we have chosen the semi-Markov random evolutions as an object of our consideration. The definition of the evolutions of this type is based on rather simple initial assumptions. The random medium is described by the Markov renewal processes or by the semi-Markov processes. The local characteristics of the system depend on the state of the random medium. At the same time, the evolution of the system does not affect the medium. Hence, the semi-Markov random evolutions are described by two processes, namely, by the switching Markov renewal process, which describes the changes of the state of the external random medium, and by the switched process, i.e., by the semigroup of operators describing the evolution of the system in the semi-Markov random medium.
This reissue of D. A. Gillies' highly influential work, first published in 1973, is a philosophical theory of probability which seeks to develop von Mises' views on the subject. In agreement with von Mises, the author regards probability theory as a mathematical science like mechanics or electrodynamics, and probability as an objective, measurable concept like force, mass or charge. On the other hand, Dr Gillies rejects von Mises' definition of probability in terms of limiting frequency and claims that probability should be taken as a primitive or undefined term in accordance with modern axiomatic approaches. This of course raises the problem of how the abstract calculus of probability should be connected with the actual 'world of experiments'. It is suggested that this link should be established, not by a definition of probability, but by an application of Popper's concept of falsifiability. In addition to formulating his own interesting theory, Dr Gillies gives a detailed criticism of the generally accepted Neyman-Pearson theory of testing, as well as of alternative philosophical approaches to probability theory. The reissue will be of interest both to philosophers with no previous knowledge of probability theory and to mathematicians interested in the foundations of probability theory and statistics.
A recent development in SDC-related problems is the establishment of intelligent SDC models and the intensive use of LMI-based convex optimization methods. Within this theoretical framework, control parameter determination can be designed and stability and robustness of closed-loop systems can be analyzed. This book describes the new framework of SDC system design and provides a comprehensive description of the modelling of controller design tools and their real-time implementation. It starts with a review of current research on SDC and moves on to some basic techniques for modelling and controller design of SDC systems. This is followed by a description of controller design for fixed-control-structure SDC systems, PDF control for general input- and output-represented systems, filtering designs, and fault detection and diagnosis (FDD) for SDC systems. Many new LMI techniques being developed for SDC systems are shown to have independent theoretical significance for robust control and FDD problems.
This book covers several bases at once. It is useful as a textbook for a second course in experimental optimization techniques for industrial production processes. In addition, it is a superb reference volume for use by professors and graduate students in Industrial Engineering and Statistics departments. It will also be of huge interest to applied statisticians, process engineers, and quality engineers working in the electronics and biotech manufacturing industries. In all, it provides an in-depth presentation of the statistical issues that arise in optimization problems, including confidence regions on the optimal settings of a process, stopping rules in experimental optimization, and more.
Testing for a Unit Root is now an essential part of time series analysis but the literature on the topic is so large that knowing where to start is difficult even for the specialist. This book provides a way into the techniques of unit root testing, explaining the pitfalls and nonstandard cases, using practical examples and simulation analysis.
For surveys involving sensitive questions, randomized response techniques (RRTs) and other indirect questions are helpful in obtaining survey responses while maintaining the privacy of the respondents. Written by one of the leading experts in the world on RR, Randomized Response and Indirect Questioning Techniques in Surveys describes the current state of RR as well as emerging developments in the field. The author also explains how to extend RR to situations employing unequal probability sampling. While the theory of RR has grown phenomenally, the area has not kept pace in practice. Covering both theory and practice, the book first discusses replacing a direct response (DR) with an RR in a simple random sample with replacement (SRSWR). It then emphasizes how the application of RRTs in the estimation of attribute or quantitative features is valid for selecting respondents in a general manner. The author examines different ways to treat maximum likelihood estimation; covers optional RR devices, which provide alternatives to compulsory randomized response theory; and presents RR techniques that encompass quantitative variables, including those related to stigmatizing characteristics. He also gives his viewpoint on alternative RR techniques, including the item count technique, nominative technique, and three-card method.
This monograph deals with spatially dependent nonstationary time series in a way accessible to both time series econometricians wanting to understand spatial econometrics, and spatial econometricians lacking a grounding in time series analysis. After charting key concepts in both time series and spatial econometrics, the book discusses how the spatial connectivity matrix can be estimated using spatial panel data instead of assuming it to be exogenously fixed. This is followed by a discussion of spatial nonstationarity in spatial cross-section data, and a full exposition of non-stationarity in both single and multi-equation contexts, including the estimation and simulation of spatial vector autoregression (VAR) models and spatial error correction (ECM) models. The book reviews the literature on panel unit root tests and panel cointegration tests for spatially independent data, and for data that are strongly spatially dependent. It provides for the first time critical values for panel unit root tests and panel cointegration tests when the spatial panel data are weakly or strongly spatially dependent. The volume concludes with a discussion of incorporating strong and weak spatial dependence in non-stationary panel data models. All discussions are accompanied by empirical testing based on spatial panel data of house prices in Israel.
This book presents a new branch of mathematical statistics aimed at constructing unimprovable methods of multivariate analysis, multi-parametric estimation, and discriminant and regression analysis. In contrast to the traditional consistent Fisher method of statistics, the essentially multivariate technique is based on the decision function approach of A. Wald. Developing this method for dimensions comparable in magnitude to the sample size yields stable, approximately unimprovable procedures within some wide classes depending on an arbitrary function. A remarkable fact is established: for high-dimensional problems, under some weak restrictions on the variable dependence, the standard quality functions of regularized multivariate procedures prove to be independent of distributions. For the first time in the history of statistics, this opens the possibility to construct unimprovable procedures free from distributions. Audience: This work will be of interest to researchers and graduate students whose work involves statistics and probability, reliability and risk analysis, econometrics, machine learning, medical statistics, and various applications of multivariate analysis.
This book is devoted to the study of univariate distributions appropriate for the analyses of data known to be nonnegative. The book includes much material from reliability theory in engineering and survival analysis in medicine.
"Data Analysis with SPSS" is designed to teach students how to explore data in a systematic manner using the most popular professional social statistics program on the market today. Written in ten manageable chapters, this book first introduces students to the approach researchers use to frame research questions and the logic of establishing causal relations. Students are then oriented to the SPSS program and how to examine data sets. Subsequent chapters guide them through univariate analysis, bivariate analysis, graphic analysis, and multivariate analysis. Students conclude their course by learning how to write a research report and by engaging in their own research project. Each book is packaged with a disk containing the GSS (General Social Survey) file and the States data files. The GSS file contains 100 variables generated from interviews with 2,900 people, concerning their behaviors and attitudes on a wide variety of issues such as abortion, religion, prejudice, sexuality, and politics. The States data allow comparison of all 50 states with 400 variables indicating issues such as unemployment, environment, criminality, population, and education. Students will ultimately use these data to conduct their own independent research project with SPSS. Note: MySearchLab does not come automatically packaged with this text. To purchase MySearchLab, please visit: www.mysearchlab.com or you can purchase a ValuePack of the text + MySearchLab with Pearson eText (at no additional cost). ValuePack ISBN-10: 0205863728 / ValuePack ISBN-13: 9780205863723
First published in 2000. Routledge is an imprint of Taylor & Francis, an informa company.
This book provides a systematic in-depth analysis of nonparametric regression with random design. It covers almost all known estimates, such as classical local averaging estimates including kernel, partitioning and nearest neighbor estimates, least squares estimates using splines, neural networks and radial basis function networks, penalized least squares estimates, local polynomial kernel estimates, and orthogonal series estimates. The emphasis is on distribution-free properties of the estimates. Most consistency results are valid for all distributions of the data. Whenever it is not possible to derive distribution-free results, as in the case of the rates of convergence, the emphasis is on results which require as few constraints on distributions as possible, on distribution-free inequalities, and on adaptation. The relevant mathematical theory is systematically developed and requires only a basic knowledge of probability theory. The book will be a valuable reference for anyone interested in nonparametric regression and is a rich source of many useful mathematical techniques widely scattered in the literature. In particular, the book introduces the reader to empirical process theory, martingales and approximation properties of neural networks.
"Statistical Modeling, Analysis and Management of Fuzzy Data," or SMFD for short, is an important contribution to a better understanding of a basic issue, an issue which has been controversial, and still is, though to a lesser degree. In substance, the issue is: are fuzziness and randomness distinct or coextensive facets of uncertainty? Are the theories of fuzziness and randomness competitive or complementary? In SMFD, these and related issues are addressed with rigor, authority and insight by prominent contributors drawn, in the main, from the probability theory, fuzzy set theory and data analysis communities. First, a historical perspective. The almost simultaneous births, close to half a century ago, of statistically-based information theory and cybernetics were two major events which marked the beginning of the steep ascent of probability theory and statistics in visibility, influence and importance. I was a student when information theory and cybernetics were born, and what is etched in my memory are the fascinating lectures by Shannon and Wiener in which they sketched their visions of the coming era of machine intelligence and automation of reasoning and decision processes. What I heard in those lectures inspired one of my first papers (1950), "An Extension of Wiener's Theory of Prediction," and led to my life-long interest in probability theory and its applications to information processing, decision analysis and control.
While continuing to focus on methods of testing for two-sided equivalence, Testing Statistical Hypotheses of Equivalence and Noninferiority, Second Edition gives much more attention to noninferiority testing. It covers a spectrum of equivalence testing problems of both types, ranging from a one-sample problem with normally distributed observations of fixed known variance to problems involving several dependent or independent samples and multivariate data. Along with expanding the material on noninferiority problems, this edition includes new chapters on equivalence tests for multivariate data and tests for relevant differences between treatments. A majority of the computer programs offered online are now available not only in SAS or Fortran but also as R scripts or as shared objects that can be called within the R system. This book provides readers with a rich repertoire of efficient solutions to specific equivalence and noninferiority testing problems frequently encountered in the analysis of real data sets. It first presents general approaches to problems of testing for noninferiority and two-sided equivalence. Each subsequent chapter then focuses on a specific procedure and its practical implementation. The last chapter describes basic theoretical results about tests for relevant differences as well as solutions for some specific settings often arising in practice. Drawing from real-life medical research, the author uses numerous examples throughout to illustrate the methods.
You may like...
* Studies in Inductive Logic and… by Rudolf Carnap, Richard C Jeffrey (Hardcover, R2,378)
* Statistics for Management and Economics by Gerald Keller, Nicoleta Gaciu (Paperback)
* Fundamentals of Social Research Methods by Claire Bless, Craig Higson-Smith, … (Paperback)
* Numbers, Hypotheses & Conclusions - A… by Colin Tredoux, Kevin Durrheim (Paperback)