The field of statistics not only affects all areas of scientific activity, but also many other matters such as public policy. It is branching rapidly into so many different subjects that a series of handbooks is the only way of comprehensively presenting the various aspects of statistical methodology, applications, and recent developments. The "Handbook of Statistics" is a series of self-contained reference books. Each volume is devoted to a particular topic in statistics, with Volume 30 dealing with time series. The series is addressed to the entire community of statisticians and scientists in various disciplines who use statistical methodology in their work. At the same time, special emphasis is placed on applications-oriented techniques, with the applied statistician in mind as the primary audience. Comprehensively presents the various aspects of statistical methodology. Discusses a wide variety of diverse applications and recent developments. Contributors are internationally renowned experts in their respective areas.
Students in the sciences, economics, social sciences, and medicine take an introductory statistics course. And yet statistics can be notoriously difficult for instructors to teach and for students to learn. To help overcome these challenges, Gelman and Nolan have put together this fascinating and thought-provoking book. Based on years of teaching experience, the book provides a wealth of demonstrations, activities, examples, and projects that involve active student participation. Part I of the book presents a large selection of activities for introductory statistics courses and has chapters such as 'First week of class', with exercises to break the ice and get students talking; then descriptive statistics, graphics, linear regression, data collection (sampling and experimentation), probability, inference, and statistical communication. Part II gives tips on what works and what doesn't, how to set up effective demonstrations, and how to encourage students to participate in class and to work effectively in group projects. Course plans for introductory statistics, statistics for social scientists, and communication and graphics are provided. Part III presents material for more advanced courses on topics such as decision theory, Bayesian statistics, sampling, and data science.
Written for students with basic experience in college algebra and applied calculus, Fundamentals of Statistical Thinking: Tools and Applications familiarizes readers with fundamental concepts in statistical thinking in order to prepare them for specialized management courses such as econometrics and quantitative analysis. The book is organized into four sections, each of which focuses on a common tool used in application. Chapters 1 through 4 discuss data analysis and summaries, with an emphasis on descriptive statistics and visualization. In Chapters 5 through 8 students learn about probability models and sampling distributions. Chapters 9 and 10 deal with statistical inferences, while Chapters 11 and 12 provide further applications for categorical data and simple linear regression models. Graphical illustrations support the written text and each chapter concludes with a visual summary. Rooted in over ten years of classroom experience at both the undergraduate and graduate levels, Fundamentals of Statistical Thinking helps readers understand the importance of the main technical tools of statistical decision making, and explains when they can most appropriately be used for applied studies.
This accessible reference includes selected contributions from Bayesian Thinking - Modeling and Computation, Volume 25 in the Handbook of Statistics series, with a focus on key methodologies and applications for Bayesian models and computation. It describes parametric and nonparametric Bayesian methods for modeling, and how to use modern computational methods to summarize inferences using simulation. The book covers a wide range of topics, including objective and subjective Bayesian inference, with a variety of applications in modeling categorical, survival, spatial, spatiotemporal, epidemiological, small area and microarray data. Aids critical thinking on causal effects.
This book describes methods for statistical brain imaging data analysis, both from the perspective of methodology and from the standpoint of software implementation in neuroscience research. It covers both commonly used, well-established methods and state-of-the-art approaches. The former are easier to apply thanks to the availability of appropriate software. Understanding the methods requires some mathematical knowledge, which the book explains with the help of figures and descriptions of the theory behind the software. In addition, the book includes numerical examples to guide readers through the workings of existing popular software. For the established methods, the use of mathematics is reduced and simplified for non-experts, which also helps in avoiding mistakes in application and interpretation. Finally, the book enables the reader to understand and conceptualize the overall flow of brain imaging data analysis, particularly statisticians and data scientists unfamiliar with this area. The state-of-the-art approach described in the book is a multivariate one developed by the authors' team. Since brain imaging data generally have a highly correlated and complex structure with large amounts of data, qualifying as big data, the multivariate approach can serve as dimension reduction prior to the application of statistical methods. An R package is provided for most of the methods described. Understanding the background theory is helpful in implementing the software for original and creative applications and for an unbiased interpretation of the output. The book also explains new methods in a conceptual manner. These methodologies and packages are commonly applied in life science data analysis. Advanced methods to obtain novel insights are introduced, thereby encouraging the development of new methods and applications for research in medicine and neuroscience.
This book discusses diverse concepts and notions - and their applications - concerning probability and random variables at the intermediate to advanced level. It explains basic concepts and results in a clearer and more complete manner than the extant literature. In addition to a range of concepts and notions concerning probability and random variables, the coverage includes a number of key advanced concepts in mathematics. Readers will also find unique results on e.g. the explicit general formula of joint moments and the expected values of nonlinear functions for normal random vectors. In addition, interesting applications of the step and impulse functions in discussions on random vectors are presented. Thanks to a wealth of examples and a total of 330 practice problems of varying difficulty, readers will have the opportunity to significantly expand their knowledge and skills. The book is rounded out by an extensive index, allowing readers to quickly and easily find what they are looking for. Given its scope, the book will appeal to all readers with a basic grasp of probability and random variables who are looking to go one step further. It also offers a valuable reference guide for experienced scholars and professionals, helping them review and refine their expertise.
Praise for the First Edition: ". . . an excellent addition to an upper-level undergraduate course on environmental statistics, and . . . a 'must-have' desk reference for environmental practitioners dealing with censored datasets." Statistical Methods for Censored Environmental Data Using Minitab(R) and R, Second Edition introduces and explains methods for analyzing and interpreting censored data in the environmental sciences. Adapting survival analysis techniques from other fields, the book translates well-established methods from other disciplines into new solutions for environmental studies. This new edition applies methods of survival analysis, including methods for interval-censored data, to the interpretation of low-level contaminants in the environmental sciences and occupational health. Now incorporating the freely available R software as well as Minitab(R) into the discussed analyses, the book features newly developed and updated material, including: a new chapter on multivariate methods for censored data; use of interval-censored methods for treating true nondetects as lower than and separate from values between the detection and quantitation limits ("remarked data"); a section on summing data with nondetects; a newly written introduction that discusses invasive data, showing why substitution methods fail; and expanded coverage of graphical methods for censored data. The author writes in a style that focuses on applications rather than derivations, with chapters organized by key objectives such as computing intervals, comparing groups, and correlation. Examples accompany each procedure, utilizing real-world data that can be analyzed using the Minitab(R) and R software macros available on the book's related website, and extensive references direct readers to authoritative literature from the environmental sciences. Statistics for Censored Environmental Data Using Minitab(R) and R, Second Edition is an excellent book for courses on environmental statistics at the upper-undergraduate and graduate levels. The book also serves as a valuable reference for environmental professionals, biologists, and ecologists who focus on the water sciences, air quality, and soil science.
Spaces of homogeneous type were introduced as a generalization of Euclidean space and serve as a sufficient setting in which one can generalize classical isotropic harmonic analysis and function space theory. This setting is sometimes too general, and the theory is limited. Here, we present a set of flexible ellipsoid covers of ℝⁿ that replace the Euclidean balls and support a generalization of the theory with fewer limitations.
This book shows that research contributions from different fields - finance, economics, computer science, and physics - can provide useful insights into key issues in financial and cryptocurrency markets. Presenting the latest empirical and theoretical advances, it helps readers gain a better understanding of financial markets and cryptocurrencies. Bitcoin was the first cryptocurrency to use a peer-to-peer network to prevent double-spending and to control its issue without the need for a central authority, and it has attracted wide public attention since its introduction. In recent years, the academic community has also started gaining interest in cryptocurrencies, and research in the field has grown rapidly. This book is a collection of the latest work on cryptocurrency markets and the properties of those markets. It will appeal to graduate students and researchers from disciplines such as finance, economics, financial engineering, computer science, physics and applied mathematics working in the field of financial markets, including cryptocurrency markets.
This new handbook contains the most comprehensive account of sample survey theory and practice to date. It is a second volume on sample surveys, with the goal of updating and extending the sampling volume published as Volume 6 of the Handbook of Statistics in 1988. The present handbook is divided into two volumes (29A and 29B), with a total of 41 chapters, covering current developments in almost every aspect of sample surveys, with references to important contributions and available software. It can serve as a self-contained guide for researchers and practitioners, with an appropriate balance between theory and real-life applications. Each of the two volumes is divided into three parts, with each part preceded by an introduction summarizing the main developments in the areas covered in that part. Volume 29A deals with methods of sample selection and data processing, with the latter including editing and imputation, handling of outliers and measurement errors, and methods of disclosure control. The volume also contains a large variety of applications in specialized areas such as household and business surveys, marketing research, opinion polls and censuses. Volume 29B is concerned with inference, distinguishing between design-based and model-based methods and focusing on specific problems such as small area estimation, analysis of longitudinal data, categorical data analysis and inference on distribution functions. The volume also contains chapters dealing with case-control studies, asymptotic properties of estimators and decision-theoretic aspects.
This new handbook contains the most comprehensive account of sample survey theory and practice to date. It is a second volume on sample surveys, with the goal of updating and extending the sampling volume published as Volume 6 of the Handbook of Statistics in 1988. The present handbook is divided into two volumes (29A and 29B), with a total of 41 chapters, covering current developments in almost every aspect of sample surveys, with references to important contributions and available software. It can serve as a self-contained guide for researchers and practitioners, with an appropriate balance between theory and real-life applications. Each of the two volumes is divided into three parts, with each part preceded by an introduction summarizing the main developments in the areas covered in that part. Volume 29A deals with methods of sample selection and data processing, with the latter including editing and imputation, handling of outliers and measurement errors, and methods of disclosure control. The volume also contains a large variety of applications in specialized areas such as household and business surveys, marketing research, opinion polls and censuses. Volume 29B is concerned with inference, distinguishing between design-based and model-based methods and focusing on specific problems such as small area estimation, analysis of longitudinal data, categorical data analysis and inference on distribution functions. The volume also contains chapters dealing with case-control studies, asymptotic properties of estimators and decision-theoretic aspects. Comprehensive account of recent developments in sample survey theory and practice. Covers a wide variety of diverse applications. Comprehensive bibliography.
Bayesian analysis has developed rapidly in applications in the last two decades, and research in Bayesian methods remains dynamic and fast-growing. Dramatic advances in modelling concepts and computational technologies now enable routine application of Bayesian analysis using increasingly realistic stochastic models, and this drives the adoption of Bayesian approaches in many areas of science, technology, commerce, and industry.
This book includes discussions related to solutions to such tasks as: probabilistic description of the investment function; recovering the income function from GDP estimates; development of models for economic cycles; selecting the time interval of pseudo-stationarity of cycles; estimating the characteristics/parameters of cycle models; and analysis of the accuracy of model factors. All of the above constitute the general principles of a theory explaining the phenomenon of economic cycles and provide mathematical tools for their quantitative description. The introduced theory is applicable to macroeconomic analyses as well as econometric estimations of economic cycles.
Congruences are ubiquitous in computer science, engineering, mathematics, and related areas. Developing techniques for finding (the number of) solutions of congruences is an important problem. But there are many scenarios in which we are interested in only a subset of the solutions; in other words, there are some restrictions. What do we know about these restricted congruences, their solutions, and applications? This book introduces the tools that are needed when working on restricted congruences and then systematically studies a variety of restricted congruences. Restricted Congruences in Computing defines several types of restricted congruence, obtains explicit formulae for the number of their solutions using a wide range of tools and techniques, and discusses their applications in cryptography, information security, information theory, coding theory, string theory, quantum field theory, parallel computing, artificial intelligence, computational biology, discrete mathematics, number theory, and more. This is the first book devoted to restricted congruences and their applications. It will be of interest to graduate students and researchers across computer science, electrical engineering, and mathematics.
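To illustrate the kind of counting problem the blurb describes - solutions of a congruence restricted to a subset of residues - here is a brute-force sketch of my own (the function name and the coprimality restriction are illustrative choices, not taken from the book):

```python
from itertools import product
from math import gcd

def count_restricted_solutions(n, b, k):
    """Brute-force count of tuples (x1, ..., xk) in Z_n^k satisfying
    x1 + ... + xk = b (mod n), restricted to gcd(xi, n) = 1 for all i."""
    return sum(
        1
        for xs in product(range(n), repeat=k)
        if all(gcd(x, n) == 1 for x in xs) and sum(xs) % n == b
    )
```

For a prime modulus p with k = 2 and b not divisible by p, each unit x determines y = b - x, which fails to be a unit only when x = b, so the count is p - 2; the brute force agrees on small cases, e.g. count_restricted_solutions(5, 3, 2) returns 3. Closed-form formulae such as those the book derives replace this exponential enumeration.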
Peter Goos, Department of Statistics, University of Leuven, Faculty of Bio-Science Engineering and University of Antwerp, Faculty of Applied Economics, Belgium David Meintrup, Department of Mathematics and Statistics, University of Applied Sciences Ingolstadt, Faculty of Mechanical Engineering, Germany Thorough presentation of introductory statistics and probability theory, with numerous examples and applications using JMP JMP: Graphs, Descriptive Statistics and Probability provides an accessible and thorough overview of the most important descriptive statistics for nominal, ordinal and quantitative data with particular attention to graphical representations. The authors distinguish their approach from many modern textbooks on descriptive statistics and probability theory by offering a combination of theoretical and mathematical depth, and clear and detailed explanations of concepts. Throughout the book, the user-friendly, interactive statistical software package JMP is used for calculations, the computation of probabilities and the creation of figures. The examples are explained in detail, and accompanied by step-by-step instructions and screenshots. The reader will therefore develop an understanding of both the statistical theory and its applications. Traditional graphs such as needle charts, histograms and pie charts are included, as well as the more modern mosaic plots, bubble plots and heat maps. The authors discuss probability theory, particularly discrete probability distributions and continuous probability densities, including the binomial and Poisson distributions, and the exponential, normal and lognormal densities. They use numerous examples throughout to illustrate these distributions and densities. Key features: * Introduces each concept with practical examples and demonstrations in JMP. * Provides the statistical theory including detailed mathematical derivations. 
* Presents illustrative examples in each chapter accompanied by step-by-step instructions and screenshots to help develop the reader's understanding of both the statistical theory and its applications. * Provides a supporting website with data sets and other teaching materials. This book is equally aimed at students in engineering, economics and natural sciences who take classes in statistics as well as at masters/advanced students in applied statistics and probability theory. For teachers of applied statistics, this book provides a rich resource of course material, examples and applications.
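The book's calculations are done in JMP; as a rough cross-check of the same ideas, the binomial and Poisson probabilities it discusses can be computed directly from their definitions (a small sketch of mine, not code from the book):

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    # P(X = k) for X ~ Binomial(n, p): C(n, k) * p^k * (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    # P(X = k) for X ~ Poisson(lam): lam^k * e^(-lam) / k!
    return lam**k * exp(-lam) / factorial(k)
```

For example, binomial_pmf(5, 10, 0.5) gives 252/1024 = 0.24609375, matching what the corresponding JMP distribution calculator would report.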
This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size or even much larger. In this sense, the proposed theory can be called "essentially multiparametric." It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters.
This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statistical paradigm. Key features: * Provides an accessible introduction to pragmatic maximum likelihood modelling. * Covers more advanced topics, including general forms of latent variable models (including non-linear and non-normal mixed-effects and state-space models) and the use of maximum likelihood variants, such as estimating equations, conditional likelihood, restricted likelihood and integrated likelihood. * Adopts a practical approach, with a focus on providing the relevant tools required by researchers and practitioners who collect and analyze real data. * Presents numerous examples and case studies across a wide range of applications including medicine, biology and ecology. * Features applications from a range of disciplines, with implementation in R, SAS and/or ADMB. * Provides all program code and software extensions on a supporting website. * Confines supporting theory to the final chapters to maintain a readable and pragmatic focus of the preceding chapters. This book is not just an accessible and practical text about maximum likelihood, it is a comprehensive guide to modern maximum likelihood estimation and inference. It will be of interest to readers of all levels, from novice to expert. It will be of great benefit to researchers, and to students of statistics from senior undergraduate to graduate level. 
For use as a course text, exercises are provided at the end of each chapter.
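To make the idea of maximum likelihood concrete (a toy sketch of my own, not an example from the book): for Poisson data the log-likelihood is maximized at the sample mean, which even a naive grid search recovers:

```python
from math import log

def poisson_loglik(lam, data):
    # Poisson log-likelihood, dropping the constant -sum(log(k!)) term
    return sum(k * log(lam) - lam for k in data)

data = [3, 1, 4, 1, 5]
# Grid search over candidate rates; the analytic MLE is the sample mean.
grid = [i / 1000 for i in range(1, 10001)]
lam_hat = max(grid, key=lambda lam: poisson_loglik(lam, data))
```

Here lam_hat comes out to 2.8, the sample mean of the data. Real models, such as the latent variable models the book covers, lack closed forms and need the numerical machinery (e.g. ADMB) that the book describes.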
An introduction to the mathematical theory and financial models developed and used on Wall Street Providing both a theoretical and practical approach to the underlying mathematical theory behind financial models, Measure, Probability, and Mathematical Finance: A Problem-Oriented Approach presents important concepts and results in measure theory, probability theory, stochastic processes, and stochastic calculus. Measure theory is indispensable to the rigorous development of probability theory and is also necessary to properly address martingale measures, the change of numeraire theory, and LIBOR market models. In addition, probability theory is presented to facilitate the development of stochastic processes, including martingales and Brownian motions, while stochastic processes and stochastic calculus are discussed to model asset prices and develop derivative pricing models. The authors promote a problem-solving approach when applying mathematics in real-world situations, and readers are encouraged to address theorems and problems with mathematical rigor. In addition, Measure, Probability, and Mathematical Finance features: * A comprehensive list of concepts and theorems from measure theory, probability theory, stochastic processes, and stochastic calculus * Over 500 problems with hints and select solutions to reinforce basic concepts and important theorems * Classic derivative pricing models in mathematical finance that have been developed and published since the seminal work of Black and Scholes Measure, Probability, and Mathematical Finance: A Problem-Oriented Approach is an ideal textbook for introductory quantitative courses in business, economics, and mathematical finance at the upper-undergraduate and graduate levels. The book is also a useful reference for readers who need to build their mathematical skills in order to better understand the mathematical theory of derivative pricing models.
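Since the blurb mentions derivative pricing models in the line of Black and Scholes, the classic Black-Scholes formula for a European call is a natural concrete endpoint of the theory described; here is a self-contained sketch of mine (not code from the book):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.
    S = spot, K = strike, T = maturity in years,
    r = risk-free rate, sigma = volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For the standard textbook inputs S = K = 100, T = 1, r = 5%, sigma = 20%, this returns approximately 10.45. The measure-theoretic machinery in the book (martingale measures, change of numeraire) is what justifies this risk-neutral expectation.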