Books > Science & Mathematics > Mathematics > Probability & statistics
This book provides an integrated introduction to statistical inference from a frequentist likelihood-based viewpoint. Classical results are presented together with recent developments, largely built upon ideas due to R.A. Fisher; the term "neo-Fisherian" highlights this lineage. After a unified review of background material (statistical models, likelihood, data and model reduction, first-order asymptotics) and inference in the presence of nuisance parameters (including pseudo-likelihoods), a self-contained introduction is given to exponential families, exponential dispersion models, generalized linear models, and group families. Finally, basic results of higher-order asymptotics are introduced (index notation, asymptotic expansions for statistics and distributions, and major applications to likelihood inference). The emphasis is on general concepts and methods rather than on regularity conditions. Many examples are given for specific statistical models. Each chapter is supplemented with problems and bibliographic notes. This volume can serve as a textbook in intermediate-level undergraduate and postgraduate courses in statistical inference.
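As an illustrative aside (our notation, not necessarily the book's), much of likelihood-based inference is anchored in the exponential family form

\[
p(y;\theta) = h(y)\exp\{\theta\, t(y) - K(\theta)\},
\qquad
\ell(\theta) = \theta\, t(y) - K(\theta) + \text{const},
\]

where $t$ is the sufficient statistic, $K$ the cumulant function, and $h$ the base measure; the score equation $\ell'(\theta) = t(y) - K'(\theta) = 0$ then yields the maximum likelihood estimate through $K'(\hat\theta) = t(y)$.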
Medical Risk Prediction Models: With Ties to Machine Learning is a hands-on book for clinicians, epidemiologists, and professional statisticians who need to make or evaluate a statistical prediction model based on data. The subject of the book is the patient's individualized probability of a medical event within a given time horizon. Gerds and Kattan describe the mathematical details of making and evaluating a statistical prediction model in a highly pedagogical manner while avoiding mathematical notation. Read this book when you are in doubt about whether a Cox regression model predicts better than a random survival forest. Features:
- All you need to know to correctly make an online risk calculator from scratch
- Discrimination, calibration, and predictive performance with censored data and competing risks
- R code and illustrative examples
- Interpretation of prediction performance via benchmarks
- Comparison and combination of rival modeling strategies via cross-validation
Thomas A. Gerds is a professor at the Biostatistics Unit at the University of Copenhagen and is affiliated with the Danish Heart Foundation. He is the author of several R packages on CRAN and has taught statistics courses to non-statisticians for many years. Michael W. Kattan is a highly cited author and Chair of the Department of Quantitative Health Sciences at Cleveland Clinic. He is a Fellow of the American Statistical Association and has received two awards from the Society for Medical Decision Making: the Eugene L. Saenger Award for Distinguished Service and the John M. Eisenberg Award for Practical Application of Medical Decision-Making Research.
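For orientation (a sketch in our own notation rather than the authors'), a Cox regression model with baseline cumulative hazard $\Lambda_0(t)$ and covariate vector $x$ yields, in the absence of competing risks, the individualized risk within a time horizon $t$ as

\[
P(T \le t \mid x) = 1 - \exp\{-\Lambda_0(t)\, e^{\beta^\top x}\},
\]

and it is this kind of predicted probability whose discrimination and calibration are compared against alternatives such as random survival forests.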
Modern apparatuses allow us to collect samples of functional data, mainly curves but also images. Nonparametric statistics, on the other hand, produces useful tools for standard data exploration. This book links these two fields of modern statistics by explaining how functional data can be studied through parameter-free statistical ideas, and it offers an original presentation of new nonparametric statistical methods for functional data analysis.
Generalizability theory offers an extensive conceptual framework and a powerful set of statistical procedures for characterizing and quantifying the fallibility of measurements. It liberalizes classical test theory, in part through the application of analysis of variance procedures that focus on variance components. As such, generalizability theory is perhaps the most broadly defined measurement model currently in existence. It is applicable to virtually any scientific field that attends to measurements and their errors, and it enables a multifaceted perspective on measurement error and its components. This book provides the most comprehensive and up-to-date treatment of generalizability theory. In addition, it provides a synthesis of those parts of the statistical literature that are directly applicable to generalizability theory. The principal intended audience is measurement practitioners and graduate students in the behavioral and social sciences, although a few examples and references are provided from other fields. Readers will benefit from some familiarity with classical test theory and analysis of variance, but the treatment of most topics does not presume specific background. Robert L. Brennan is E.F. Lindquist Professor of Educational Measurement at the University of Iowa. He is an acknowledged expert in generalizability theory, has authored numerous publications on the theory, and has taught many courses and workshops on generalizability. The author has been Vice-President of the American Educational Research Association and President of the National Council on Measurement in Education (NCME). He has received NCME Awards for Outstanding Technical Contributions to Educational Measurement and Career Contributions to Educational Measurement.
In this thesis, the author investigates athermal fluctuations from the viewpoint of statistical mechanics. Stochastic methods are theoretically very powerful in describing fluctuations of thermodynamic quantities in small systems on the level of a single trajectory, and they have recently been developed on the basis of stochastic thermodynamics. Whereas thermal fluctuations are mainly addressed from the viewpoint of Gaussian stochastic processes in most conventional studies, this thesis proposes, for the first time, a systematic framework to describe athermal fluctuations, developing stochastic thermodynamics for non-Gaussian processes. First, the book provides an elementary introduction to stochastic processes and stochastic thermodynamics. The author derives a Langevin-like equation with non-Gaussian noise as a minimal stochastic model for athermal systems, and its analytical solution, obtained by developing systematic expansions, is shown as the main result. Furthermore, the author presents a thermodynamic framework for such non-Gaussian fluctuations and studies some thermodynamic phenomena, i.e., heat conduction and energy pumping, which show characteristics distinct from conventional thermodynamics. The theory introduced in the book provides a systematic foundation for describing the dynamics of athermal fluctuations quantitatively and for analyzing their thermodynamic properties on the basis of stochastic methods.
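As a generic illustration only (not the author's specific model; the symbols are ours), a Langevin-like equation driven by both thermal and athermal noise can be written as

\[
m\,\dot v(t) = -\gamma\, v(t) + \xi_{\mathrm{G}}(t) + \eta_{\mathrm{NG}}(t),
\]

where $\xi_{\mathrm{G}}$ is Gaussian white noise tied to the thermal bath through the fluctuation-dissipation relation and $\eta_{\mathrm{NG}}$ is a non-Gaussian white noise (for instance, Poissonian shot noise) representing the athermal source.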
This book describes a system of mathematical models and methods that can be used to analyze real economic and managerial decisions and to improve their effectiveness. Application areas include: management of development and operation budgets, assessment and management of economic systems using an energy entropy approach, equation of exchange rates and forecasting foreign exchange operations, evaluation of innovative projects, monitoring of governmental programs, risk management of investment processes, decisions on the allocation of resources, and identification of competitive industrial clusters. The proposed methods and models were tested on the example of Kazakhstan's economy, but the generated solutions will be useful for applications at other levels and in other countries.

"Regarding your book 'Mathematical Methods and Models in Economics', I am impressed, because now is the time when 'econometrics' is becoming more appreciated by economists and by the schools that host or employ modern economists. ... Your presented results really impressed me." - John F. Nash, Jr., Princeton University, Nobel Memorial Prize in Economic Sciences

"The book is within my scope of interest because of its novelty and practicality. First, there is a need for realistic modeling of complex systems, both natural and artificial, including computer and economic systems. There has been an ongoing effort in developing models dealing with complexity and incomplete knowledge. Consequently, it is easy to recognize the contribution of Mutanov in encapsulating economic modeling with emphasis on budgeting and innovation. Secondly, the method proposed by Mutanov has been verified by applying it to the case of the Republic of Kazakhstan, with its vibrant emerging economy. Thirdly, Chapter 5 of the book is of particular interest for the computer technology community because it deals with innovation. In summary, the book of Mutanov should become one of the outstanding recognized pragmatic guides for dealing with innovative systems." - Andrzej Rucinski, University of New Hampshire

"This book is unique in its theoretical findings and practical applicability. The book is an illuminating study based on an applied mathematical model which uses methods such as linear programming and input-output analysis. Moreover, this work demonstrates the author's great insight and academic brilliance in the fields of finance, technological innovations and marketing vis-a-vis the market economy. From both theoretical and practical standpoints, this work is indeed a great achievement." - Yeon Cheon Oh, President of Seoul National University
This is a new, completely revised, updated, and enlarged edition of the author's Ergebnisse vol. 46, "Spin Glasses: A Challenge for Mathematicians," in two volumes; this is the second volume. In the eighties, a group of theoretical physicists introduced several models for certain disordered systems, called "spin glasses." These models are simple and rather canonical random structures of considerable interest for several branches of science (statistical physics, neural networks, and computer science). The physicists studied them by non-rigorous methods and predicted spectacular behaviors. This book introduces this exciting area to the mathematically minded reader in a rigorous manner. It requires no knowledge whatsoever of any physics. The present Volume II contains a considerable amount of new material, in particular all the fundamental low-temperature results obtained after the publication of the first edition.
For courses in introductory statistics. Classic, yet contemporary; theoretical, yet applied: McClave & Sincich's Statistics gives you the best of both worlds. This text offers a trusted, comprehensive introduction to statistics that emphasises inference and integrates real data throughout. The authors stress the development of statistical thinking, the assessment of credibility, and the value of the inferences made from data. This edition is extensively revised with an eye on clearer, more concise language throughout the text and in the exercises. Ideal for one- or two-semester courses in introductory statistics, this text assumes a mathematical background of basic algebra. Flexibility is built in for instructors who teach a more advanced course, with optional footnotes about calculus and the underlying theory.
- The book discusses recent techniques in NGS data analysis, which biologists (students and researchers) most need in the wake of numerous genomic projects and the trend toward genomic research.
- The book includes both theory and practice for NGS data analysis, so readers will understand the concepts and learn how to carry out the analysis using the most recent programs.
- The steps of the application workflows are written so that they can be followed for related projects.
- Each chapter includes worked examples with real data available from the NCBI databases. Programming code and outputs are accompanied by explanation.
- The book content is suitable as teaching material for biology and bioinformatics students.
Meets the requirements of a complete semester course on sequencing data analysis. Covers the latest applications for next-generation sequencing. Covers data preprocessing, genome assembly, variant discovery, gene profiling, epigenetics, and metagenomics.
Statistical science as organized in formal academic departments is relatively new. With a few exceptions, most Statistics and Biostatistics departments have been created within the past 60 years. This book consists of a set of memoirs, one for each department in the U.S. created by the mid-1960s. The memoirs describe key aspects of the department's history -- its founding, its growth, key people in its development, success stories (such as major research accomplishments) and the occasional failure story, PhD graduates who have had a significant impact, its impact on statistical education, and a summary of where the department stands today and its vision for the future. Read here all about how departments such as those at Berkeley, Chicago, Harvard, and Stanford started and how they got to where they are today. The book should also be of interest to scholars in the field of disciplinary history.
This volume presents 27 selected papers on topics that range from statistical applications in business and finance to applications in clinical trials and biomarker analysis. All papers feature original, peer-reviewed content. The editors intentionally selected papers that cover many topics so that the volume will serve the whole statistical community and a variety of research interests. The papers represent select contributions to the 21st ICSA Applied Statistics Symposium. The International Chinese Statistical Association (ICSA) Symposium took place between the 23rd and 26th of June, 2012, in Boston, Massachusetts. It was co-sponsored by the International Society for Biopharmaceutical Statistics (ISBS) and the American Statistical Association (ASA). This is the inaugural proceedings volume to share research from the ICSA Applied Statistics Symposium.
This book presents current research on Ulam stability for functional equations and inequalities. Contributions from renowned scientists emphasize fundamental and new results, methods, and techniques. Detailed examples are given for the theories to further understanding at the graduate level for students in mathematics, physics, and engineering. Key topics covered in this book include:
- Quasi means
- Approximate isometries
- Functional equations in hypergroups
- Stability of functional equations
- The Fischer-Muszely equation
- Haar meager sets and Haar null sets
- Dynamical systems
- Functional equations in probability theory
- Stochastic convex ordering
- The Dhombres functional equation
- Nonstandard analysis and Ulam stability
This book is dedicated to the memory of Stanislaw Marcin Ulam, who posed the fundamental problem concerning approximate homomorphisms of groups in 1940, which has provided the stimulus for studies in the stability of functional equations and inequalities.
The domain of non-extensive thermostatistics has been subject to intensive research over the past twenty years and has matured significantly. Generalised Thermostatistics cuts through the traditionalism of many statistical physics texts by offering a fresh perspective and seeking to remove elements of doubt and confusion surrounding the area. The book is divided into two parts: the first covers topics from conventional statistical physics, while adopting the perspective that statistical physics is statistics applied to physics; the second develops the formalism of non-extensive thermostatistics, in which the central role is played by the notion of a deformed exponential family of probability distributions. Presented in a clear, consistent, and deductive manner, the book focuses on theory, part of which was developed by the author himself, but also provides a number of references towards application-based texts. Written by a leading contributor in the field, this book will provide a useful tool for learning about recent developments in generalized versions of statistical mechanics and thermodynamics, especially with respect to self-study. It is written for researchers in theoretical physics, mathematics, and statistical mechanics, as well as graduates of physics, mathematics, or engineering. A prerequisite knowledge of elementary notions of statistical physics and a substantial mathematical background are required.
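One standard example of such a deformation (our illustration; the book's notation may differ) is the Tsallis q-exponential

\[
\exp_q(x) = \left[1 + (1-q)\,x\right]_+^{1/(1-q)},
\]

which replaces the ordinary exponential in the definition of the probability distributions and reduces to $e^x$ in the limit $q \to 1$.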
This volume presents selections of Peter J. Bickel's major papers, along with comments on their novelty and impact on the subsequent development of statistics as a discipline. Each of the eight parts concerns a particular area of research and provides new commentary by experts in the area. The parts range from Rank-Based Nonparametrics to Function Estimation and Bootstrap Resampling. Peter's amazing career encompasses the majority of statistical developments in the last half-century, or about half of the entire history of the systematic development of statistics. This volume shares insights on these exciting statistical developments with future generations of statisticians. The compilation of supporting material about Peter's life and work helps readers understand the environment in which his research was conducted. The material will also inspire readers in their own research-based pursuits. This volume includes new photos of Peter Bickel, his biography, publication list, and a list of his students. These give the reader a more complete picture of Peter Bickel as a teacher, a friend, a colleague, and a family man.
The focus of this book is on the birth and historical development of permutation statistical methods from the early 1920s to the near present. Beginning with the seminal contributions of R.A. Fisher, E.J.G. Pitman, and others in the 1920s and 1930s, permutation statistical methods were initially introduced to validate the assumptions of classical statistical methods. Permutation methods have advantages over classical methods in that they are optimal for small data sets and non-random samples, are data-dependent, and are free of distributional assumptions. Permutation probability values may be exact, or estimated via moment- or resampling-approximation procedures. Because permutation methods are inherently computationally-intensive, the evolution of computers and computing technology that made modern permutation methods possible accompanies the historical narrative. Permutation analogs of many well-known statistical tests are presented in a historical context, including multiple correlation and regression, analysis of variance, contingency table analysis, and measures of association and agreement. A non-mathematical approach makes the text accessible to readers of all levels.
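As a minimal sketch of the resampling-approximation idea mentioned above (our own illustrative code, not taken from any of the works discussed; the data arrays are hypothetical), a two-sample permutation test of a difference in means can be approximated as follows:

import numpy as np

def permutation_pvalue(x, y, n_resamples=10_000, seed=None):
    # Approximate two-sided permutation p-value for a difference in means.
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = abs(np.mean(x) - np.mean(y))
    count = 0
    for _ in range(n_resamples):
        perm = rng.permutation(pooled)                  # random relabelling of the pooled data
        stat = abs(perm[:len(x)].mean() - perm[len(x):].mean())
        count += stat >= observed
    # +1 in numerator and denominator keeps the estimated p-value away from exactly zero
    return (count + 1) / (n_resamples + 1)

# Usage on made-up data
x = np.array([12.1, 9.8, 11.4, 10.7, 13.0])
y = np.array([9.5, 10.1, 8.8, 9.9, 10.4])
print(permutation_pvalue(x, y, seed=1))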
This book presents practical approaches for the analysis of data from gene expression microarrays. Each chapter describes the conceptual and methodological underpinning for a statistical tool and its implementation in software. Methods cover all aspects of statistical analysis of microarrays, from annotation and filtering to clustering and classification. Chapters are written by the developers of the software. All software packages described are free to academic users. The book includes coverage of various packages that are part of the Bioconductor project and several related R tools. The materials presented cover a range of software tools designed for varied audiences. Some chapters describe simple menu-driven software in a user-friendly fashion and are designed to be accessible to microarray data analysts without formal quantitative training. Most chapters are directed at microarray data analysts with master's-level training in computer science, biostatistics, or bioinformatics. A minority of more advanced chapters are intended for doctoral students and researchers. The team of editors is from the Johns Hopkins Schools of Medicine and Public Health and has been involved with developing methods and software for microarray data analysis since the inception of this technology. Giovanni Parmigiani is Associate Professor of Oncology, Pathology and Biostatistics. He is the author of the book "Modeling in Medical Decision Making," a fellow of the ASA, and a recipient of the Savage Award for Bayesian statistics. Elizabeth S. Garrett is Assistant Professor of Oncology and Biostatistics, and recipient of the Abbey Award for statistical education. Rafael A. Irizarry is Assistant Professor of Biostatistics, and recipient of the Noether Award for nonparametric statistics. Scott L. Zeger is Professor and Chair of Biostatistics. He is co-author of the book "Longitudinal Data Analysis," a fellow of the ASA, and recipient of the Spiegelman Award for public health statistics.
This thoroughly updated second edition combines the latest software applications with the benefits of modern resampling techniques. Resampling helps students understand the meaning of sampling distributions, sampling variability, P-values, hypothesis tests, and confidence intervals. The second edition of Mathematical Statistics with Resampling and R combines modern resampling techniques and mathematical statistics. This book has been classroom-tested to ensure an accessible presentation, uses the powerful and flexible computer language R for data analysis, and explores the benefits of modern resampling techniques. This book offers an introduction to permutation tests and bootstrap methods that can serve to motivate classical inference methods. The book strikes a balance between theory, computing, and applications, and the new edition explores additional topics including consulting, the paired t-test, ANOVA, and Google interview questions. Throughout the book, new and updated case studies are included, representing a diverse range of subjects such as flight delays, birth weights of babies, and telephone company repair times. These illustrate the relevance of real-world applications of the material. This new edition:
- Puts the focus on statistical consulting that emphasizes giving a client an understanding of data and goes beyond typical expectations
- Presents new material on topics such as the paired t-test, Fisher's exact test, and the EM algorithm
- Offers a new section on "Google Interview Questions" that illustrates statistical thinking
- Provides a new chapter on ANOVA
- Contains more exercises and updated case studies, data sets, and R code
Written for undergraduate students in a mathematical statistics course as well as practitioners and researchers, the second edition of Mathematical Statistics with Resampling and R presents a revised and updated guide for applying the most current resampling techniques to mathematical statistics.
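A minimal sketch of the bootstrap idea mentioned above (our illustrative Python code; the book itself works in R, and the repair-time numbers below are invented):

import numpy as np

def bootstrap_ci(data, stat=np.mean, n_resamples=10_000, alpha=0.05, seed=None):
    # Percentile bootstrap confidence interval for a statistic of one sample.
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    boots = np.array([
        stat(rng.choice(data, size=len(data), replace=True))   # resample with replacement
        for _ in range(n_resamples)
    ])
    lower, upper = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper

# Usage: 95% interval for the mean repair time (hours)
times = np.array([1.2, 3.4, 0.8, 2.5, 4.1, 1.9, 2.2, 3.0])
print(bootstrap_ci(times, seed=1))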
This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is "white," i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides simple and suitable parameter estimation methods in these models, making it a valuable resource for all researchers in this field. The book is addressed to specialists and researchers in the theory and statistics of stochastic processes, practitioners who apply statistical methods of parameter estimation, graduate and post-graduate students who study mathematical modeling and statistics.
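For reference (a standard fact, stated in our own notation rather than the book's), fractional Brownian motion $B_H$ with Hurst index $H \in (0,1)$ has covariance

\[
\mathbb{E}\,[B_H(t)B_H(s)] = \tfrac{1}{2}\left(t^{2H} + s^{2H} - |t-s|^{2H}\right), \qquad t, s \ge 0,
\]

so its increments are positively correlated and exhibit long-range dependence for $H > 1/2$, negatively correlated for $H < 1/2$, and independent only in the Brownian case $H = 1/2$.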
The ability to effectively learn, process, and retain new information is critical to the success of any student. Since mathematics is becoming increasingly important in our educational systems, it is imperative that we devise an efficient system to measure these types of information recall. Assessing and Measuring Statistics Cognition in Higher Education Online Environments: Emerging Research and Opportunities is a critical reference source that overviews the current state of higher education learning assessment systems. Featuring extensive coverage of relevant topics such as statistical cognition, online learning implications, cognitive development, and curricular mismatches, this publication is ideally designed for academics, students, educators, professionals, and researchers seeking innovative perspectives on current assessment and measurement systems within our educational facilities.
This book offers a unified introduction to a variety of computational algorithms for likelihood and Bayesian inference. This third edition expands the discussion of many of the techniques presented, and includes additional examples as well as exercise sets at the end of each chapter.
The approach of layer-damping coordinate transformations to treat singularly perturbed equations is a relatively new and fast-growing area in the field of applied mathematics. This monograph aims to present a clear, concise, and easily understandable description of the qualitative properties of solutions to singularly perturbed problems, as well as of the essential elements, methods, and codes of the technology adjusted to numerical solutions of equations with singularities by applying layer-damping coordinate transformations and corresponding layer-resolving grids. The first part of the book deals with an analytical study of estimates of the solutions and their derivatives in layers of singularities, as well as suitable techniques for obtaining these results. In the second part, a technique for building coordinate transformations that eliminate boundary and interior layers is presented. Numerical algorithms based on this technique for generating layer-damping coordinate transformations and their corresponding layer-resolving meshes are presented in the final part of this volume. This book will be of value and interest to researchers in computational and applied mathematics.
This book offers a comprehensive guide to the modelling of operational risk using possibility theory. It provides a set of methods for measuring operational risks under a certain degree of vagueness and impreciseness, as encountered in real-life data. It shows how possibility theory and indeterminate uncertainty-encompassing degrees of belief can be applied in analysing the risk function, and describes the parametric g-and-h distribution associated with extreme value theory as an interesting candidate in this regard. The book offers a complete assessment of fuzzy methods for determining both value at risk (VaR) and subjective value at risk (SVaR), together with a stability estimation of VaR and SVaR. Based on the simulation studies and case studies reported here, the possibilistic quantification of risk performs consistently better than the probabilistic model. Risk is evaluated by integrating two fuzzy techniques: the fuzzy analytic hierarchy process and the fuzzy extension of the technique for order preference by similarity to the ideal solution. Because of its specialized content, the book is primarily intended for postgraduates and researchers with a basic knowledge of algebra and calculus, and can be used as a reference guide for research-level courses on fuzzy sets, possibility theory, and mathematical finance. The book also offers a useful source of information for banking and finance professionals investigating different risk-related aspects.
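For context (a standard definition given in our notation; the book's parametrization may differ), Tukey's g-and-h distribution is obtained by transforming a standard normal variable $Z$ as

\[
Y = A + B\,\frac{e^{gZ} - 1}{g}\, e^{hZ^2/2},
\]

where $g$ controls skewness and $h$ controls tail heaviness, which is why the family is a natural candidate for modelling heavy-tailed operational-loss data.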
This volume presents a selection of papers by Henry P. McKean, which illustrate the various areas in mathematics in which he has made seminal contributions. Topics covered include probability theory, integrable systems, geometry and financial mathematics. Each paper represents a contribution by Prof. McKean, either alone or together with other researchers, that has had a profound influence in the respective area.
The changes of populations are determined by fertility, mortality and migration. On the national level, international migration is a factor of increasing demographic, economic, social and political importance. This book addresses the debate on the impact of international migration and economic activity on population and labour force resources in future. It presents a study conducted for 27 European countries, looking 50 years ahead (2002-2052). An extended discussion of theories and factors underlying the assumed evolution of the components of change and economic activity is included as well as a detailed analysis of the historical trends. These theoretical and empirical considerations lead to defining scenarios of future mortality, fertility, economic activity and international migration, which have been fed into a projection model, producing various future population dynamics and labour force trajectories. In addition, simulations have been made to estimate the size of replacement migration needed to maintain selected demographic and labour market parameters in the countries of Europe. The results presented in this book allow researchers, governments and policy makers to evaluate to what extent various migration and labour market policies may be instrumental in achieving the desired population and labour size and structures. The secondary purpose of this volume is to reveal the methodology and argumentation lying behind a complex population forecasting and simulation exercise, which is not done frequently, but is critical for the assessment of the forecasts and also valuable from a purely didactic point of view.
This book deals with the theory and applications of the Reformulation-Linearization/Convexification Technique (RLT) for solving nonconvex optimization problems. A unified treatment of discrete and continuous nonconvex programming problems is presented using this approach. In essence, the bridge between these two types of nonconvexities is made via a polynomial representation of discrete constraints. For example, the binariness of a 0-1 variable x_j can be equivalently expressed as the polynomial constraint x_j(1 - x_j) = 0. The motivation for this book is the role of tight linear/convex programming representations or relaxations in solving such discrete and continuous nonconvex programming problems. The principal thrust is to commence with a model that affords a useful representation and structure, and then to further strengthen this representation through automatic reformulation and constraint generation techniques. As mentioned above, the focal point of this book is the development and application of RLT for use as an automatic reformulation procedure, and also to generate strong valid inequalities. The RLT operates in two phases. In the Reformulation Phase, certain types of additional implied polynomial constraints, which include the aforementioned constraints in the case of binary variables, are appended to the problem. The resulting problem is subsequently linearized in the Linearization/Convexification Phase, except that certain convex constraints are sometimes retained in particular special cases. This is done via the definition of suitable new variables to replace each distinct variable-product term. The higher-dimensional representation yields a linear (or convex) programming relaxation.
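As an illustrative sketch of the two phases on the smallest possible example (our own notation, assuming binary variables $x_1$ and $x_2$): in the reformulation phase, the bound-factor products $(x_1)(x_2) \ge 0$, $(x_1)(1 - x_2) \ge 0$, $(1 - x_1)(x_2) \ge 0$, and $(1 - x_1)(1 - x_2) \ge 0$ are appended; in the linearization phase, the product term $x_1 x_2$ is replaced by a new variable $w_{12}$, yielding the linear constraints

\[
w_{12} \ge 0, \qquad w_{12} \le x_1, \qquad w_{12} \le x_2, \qquad w_{12} \ge x_1 + x_2 - 1,
\]

which describe the convex hull of $\{(x_1, x_2, x_1 x_2) : x_1, x_2 \in \{0,1\}\}$ and give a tight linear programming relaxation of the product.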