The book covers the basic theory of linear regression models and presents a comprehensive survey of different estimation techniques as alternatives and complements to least squares estimation. The relationship between different estimators is clearly described and categories of estimators are worked out in detail. Proofs are given for the most relevant results, and the presented methods are illustrated with the help of numerical examples and graphics. Special emphasis is placed on practicability, and possible applications are discussed. The book is rounded off by an introduction to the basics of decision theory and an appendix on matrix algebra.
Presenting the latest findings in topics from across the mathematical spectrum, this volume includes results in pure mathematics along with a range of new advances and novel applications to other fields such as probability, statistics, biology, and computer science. All contributions feature authors who attended the Association for Women in Mathematics Research Symposium in 2015: this conference, the third in a series of biennial conferences organized by the Association, attracted over 330 participants and showcased the research of women mathematicians from academia, industry, and government.
Although statistical design is one of the oldest branches of statistics, its importance is ever increasing, especially in the face of the data flood that often faces statisticians. It is important to recognize the appropriate design, and to understand how to effectively implement it, being aware that the default settings from a computer package can easily provide an incorrect analysis. The goal of this book is to describe the principles that drive good design, paying attention to both the theoretical background and the problems arising from real experimental situations. Designs are motivated through actual experiments, ranging from the timeless agricultural randomized complete block, to microarray experiments, which naturally lead to split plot designs and balanced incomplete blocks.
In this book, an integrated introduction to statistical inference is provided from a frequentist likelihood-based viewpoint. Classical results are presented together with recent developments, largely built upon ideas due to R.A. Fisher. The term "neo-Fisherian" highlights this. After a unified review of background material (statistical models, likelihood, data and model reduction, first-order asymptotics) and inference in the presence of nuisance parameters (including pseudo-likelihoods), a self-contained introduction is given to exponential families, exponential dispersion models, generalized linear models, and group families. Finally, basic results of higher-order asymptotics are introduced (index notation, asymptotic expansions for statistics and distributions, and major applications to likelihood inference). The emphasis is more on general concepts and methods than on regularity conditions. Many examples are given for specific statistical models. Each chapter is supplemented with problems and bibliographic notes. This volume can serve as a textbook in intermediate-level undergraduate and postgraduate courses in statistical inference.
Written by one of the foremost statisticians, with experience in diverse fields of application, the book deals with the philosophical and methodological aspects of information technology and the collection and analysis of data to provide insight into a problem, whether it is scientific research, policy making by government or decision making in our daily lives. The author dispels the doubt that chance is an expression of our ignorance which makes accurate prediction impossible, and illustrates how our thinking has changed with the quantification of uncertainty, showing that chance is no longer an obstructor but a way of expressing our knowledge. Indeed, chance can create and help in the investigation of truth. It is eloquently demonstrated, with numerous examples of applications, that statistics is the science, technology and art of extracting information from data, based on a study of the laws of chance. The book highlights how statistical ideas played a vital role in scientific and other investigations even before statistics was recognized as a separate discipline, and how statistics is now evolving as a versatile, powerful and indispensable tool in diverse fields of human endeavor such as literature, legal matters, industry, archaeology and medicine. The use of statistics by the layman in improving the quality of life through wise decision making is emphasized.
This book presents practical approaches for the analysis of data from gene expression microarrays. Each chapter describes the conceptual and methodological underpinning for a statistical tool and its implementation in software. Methods cover all aspects of statistical analysis of microarrays, from annotation and filtering to clustering and classification. Chapters are written by the developers of the software. All software packages described are free to academic users. The book includes coverage of various packages that are part of the Bioconductor project and several related R tools. The materials presented cover a range of software tools designed for varied audiences. Some chapters describe simple menu-driven software in a user-friendly fashion, and are designed to be accessible to microarray data analysts without formal quantitative training. Most chapters are directed at microarray data analysts with master's-level training in computer science, biostatistics or bioinformatics. A minority of more advanced chapters are intended for doctoral students and researchers. The team of editors is from the Johns Hopkins Schools of Medicine and Public Health and has been involved with developing methods and software for microarray data analysis since the inception of this technology. Giovanni Parmigiani is Associate Professor of Oncology, Pathology and Biostatistics. He is the author of the book "Modeling in Medical Decision Making," a fellow of the ASA, and a recipient of the Savage Award for Bayesian statistics. Elizabeth S. Garrett is Assistant Professor of Oncology and Biostatistics, and recipient of the Abbey Award for statistical education. Rafael A. Irizarry is Assistant Professor of Biostatistics, and recipient of the Noether Award for non-parametric statistics. Scott L. Zeger is Professor and chair of Biostatistics. He is co-author of the book "Longitudinal Data Analysis," a fellow of the ASA and recipient of the Spiegelman Award for public health statistics.
Any financial asset that is openly traded has a market price. Except for extreme market conditions, market price may be more or less than a fair value. Fair value is likely to be some complicated function of the current intrinsic value of tangible or intangible assets underlying the claim and our assessment of the characteristics of the underlying assets with respect to the expected rate of growth, future dividends, volatility, and other relevant market factors. Some of these factors that affect the price can be measured at the time of a transaction with reasonably high accuracy. Most factors, however, relate to expectations about the future and to subjective issues, such as current management, corporate policies and market environment, that could affect the future financial performance of the underlying assets. Models are thus needed to describe the stochastic factors and environment, and their implementations inevitably require computational finance tools.
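The blurb's closing point, that valuing such claims inevitably requires computational finance tools, can be illustrated with a standard Monte Carlo sketch. This is an illustrative example, not taken from the book: it prices a European call option under an assumed geometric Brownian motion model with made-up parameters.

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, t, n_paths=100_000, seed=0):
    """Monte Carlo estimate of a European call price under geometric
    Brownian motion (risk-neutral dynamics). All parameters are assumed
    inputs: spot s0, strike k, rate r, volatility sigma, maturity t."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Simulate the terminal asset price in one step (exact for GBM).
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    # Average the discounted payoff over all simulated paths.
    return np.exp(-r * t) * np.maximum(st - k, 0.0).mean()

# Example: at-the-money call, 5% rate, 20% volatility, one year to maturity.
price = mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0)
```

With these parameters the estimate lands near the Black-Scholes value of roughly 10.45; increasing `n_paths` shrinks the Monte Carlo error at the usual 1/sqrt(n) rate.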
A unified introduction to a variety of computational algorithms for likelihood and Bayesian inference. This third edition expands the discussion of many of the techniques presented, and includes additional examples as well as exercise sets at the end of each chapter.
Modern apparatuses allow us to collect samples of functional data, mainly curves but also images. Nonparametric statistics, meanwhile, provides useful tools for standard data exploration. This book links these two fields of modern statistics by explaining how functional data can be studied through parameter-free statistical ideas, and offers an original presentation of new nonparametric statistical methods for functional data analysis.
This collection contains invited papers by distinguished statisticians to honour and acknowledge the contributions of Professor Dr. Dr. Helge Toutenburg to Statistics on the occasion of his sixty-fifth birthday. These papers present the most recent developments in the area of the linear model and its related topics. Helge Toutenburg is an established statistician and currently a Professor in the Department of Statistics at the University of Munich (Germany) and Guest Professor at the University of Basel (Switzerland). He studied Mathematics in his early years at Berlin and specialized in Statistics. Later he completed his dissertation (Dr. rer. nat.) in 1969 on optimal prediction procedures at the University of Berlin and completed the post-doctoral thesis in 1989 at the University of Dortmund on the topic of mean squared error superiority. He taught at the Universities of Berlin, Dortmund and Regensburg before joining the University of Munich in 1991. He has various areas of interest in which he has authored and co-authored over 130 research articles and 17 books. He has made pioneering contributions in several areas of statistics, including linear inference, linear models, regression analysis, quality engineering, Taguchi methods, analysis of variance, design of experiments, and statistics in medicine and dentistry.
This book deals with the theory and applications of the Reformulation-Linearization/Convexification Technique (RLT) for solving nonconvex optimization problems. A unified treatment of discrete and continuous nonconvex programming problems is presented using this approach. In essence, the bridge between these two types of nonconvexities is made via a polynomial representation of discrete constraints. For example, the binariness of a 0-1 variable x_j can be equivalently expressed as the polynomial constraint x_j(1 - x_j) = 0. The motivation for this book is the role of tight linear/convex programming representations or relaxations in solving such discrete and continuous nonconvex programming problems. The principal thrust is to commence with a model that affords a useful representation and structure, and then to further strengthen this representation through automatic reformulation and constraint generation techniques. As mentioned above, the focal point of this book is the development and application of RLT for use as an automatic reformulation procedure, and also to generate strong valid inequalities. The RLT operates in two phases. In the Reformulation Phase, certain types of additional implied polynomial constraints, which include the aforementioned constraints in the case of binary variables, are appended to the problem. The resulting problem is subsequently linearized in the Linearization/Convexification Phase, except that certain convex constraints are sometimes retained in particular special cases. This is done via the definition of suitable new variables to replace each distinct variable-product term. The higher-dimensional representation yields a linear (or convex) programming relaxation.
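As a small illustration of the linearization step described above (a sketch, not the book's own code): the product x*y of two 0-1 variables can be replaced by a new variable w subject to the linear constraints w <= x, w <= y, w >= x + y - 1, and w >= 0, obtained by multiplying the variable bounds together and substituting w for the product term. On binary points these constraints pin w to exactly x*y; on fractional points they only bound it, which is what makes the result a relaxation.

```python
from itertools import product

def rlt_bounds(x, y):
    """Feasible interval for the variable w that replaces the product x*y
    under the linear constraints w <= x, w <= y, w >= x + y - 1, w >= 0."""
    lo = max(0.0, x + y - 1.0)
    hi = min(x, y)
    return lo, hi

# At every binary point the constraints force w to equal the product exactly.
for x, y in product([0, 1], repeat=2):
    lo, hi = rlt_bounds(x, y)
    assert lo == hi == x * y
```

At a fractional point such as x = y = 0.5 the interval is [0, 0.5] rather than the single value 0.25, showing how the linearization relaxes, but never cuts off, the original bilinear relation.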
The book aims to investigate methods and techniques for spatial statistical analysis suitable for modelling spatial information in support of decision systems. Over the last few years there has been considerable interest in these tools and in the role they can play in spatial planning and environmental modelling. One of the earliest and most famous definitions of spatial planning was "a geographical expression to the economic, social, cultural and ecological policies of society": borrowing from this point of view, this text shows how an interdisciplinary approach is an effective way to a harmonious integration of national policies with regional and local analysis. A wide range of spatial models and techniques is also covered: spatial data mining, point process analysis, nearest neighbour statistics and cluster detection, fuzzy regression models and local indicators of spatial association; all of these tools provide the policy-maker with valuable support for policy development.
This Festschrift contains five research surveys and thirty-four shorter contributions by participants of the conference ''Stochastic Partial Differential Equations and Related Fields'' hosted by the Faculty of Mathematics at Bielefeld University, October 10-14, 2016. The conference, attended by more than 140 participants, including PostDocs and PhD students, was held both to honor Michael Roeckner's contributions to the field on the occasion of his 60th birthday and to bring together leading scientists and young researchers to present the current state of the art and promising future developments. Each article introduces a well-described field related to Stochastic Partial Differential Equations and Stochastic Analysis in general. In particular, the longer surveys focus on Dirichlet forms and Potential theory, the analysis of Kolmogorov operators, Fokker-Planck equations in Hilbert spaces, the theory of variational solutions to stochastic partial differential equations, singular stochastic partial differential equations and their applications in mathematical physics, as well as on the theory of regularity structures and paracontrolled distributions. The numerous research surveys make the volume especially useful for graduate students and researchers who wish to start work in the above-mentioned areas, or who want to be informed about the current state of the art.
This volume presents 27 selected papers in topics that range from statistical applications in business and finance to applications in clinical trials and biomarker analysis. All papers feature original, peer-reviewed content. The editors intentionally selected papers that cover many topics so that the volume will serve the whole statistical community and a variety of research interests. The papers represent select contributions to the 21st ICSA Applied Statistics Symposium. The International Chinese Statistical Association (ICSA) Symposium took place between the 23rd and 26th of June, 2012 in Boston, Massachusetts. It was co-sponsored by the International Society for Biopharmaceutical Statistics (ISBS) and American Statistical Association (ASA). This is the inaugural proceedings volume to share research from the ICSA Applied Statistics Symposium.
A function is convex if its epigraph is convex. This geometrical structure has very strong implications in terms of continuity and differentiability. Separation theorems lead to optimality conditions and duality for convex problems. A function is quasiconvex if its lower level sets are convex. Here again, the geometrical structure of the level sets implies some continuity and differentiability properties for quasiconvex functions. Optimality conditions and duality can be derived for optimization problems involving such functions as well. Over a period of about fifty years, quasiconvex and other generalized convex functions have been considered in a variety of fields including economics, management science, engineering, probability and applied sciences, in accordance with the needs of particular applications. During the last twenty-five years, an increase of research activity in this field has been witnessed. More recently, generalized monotonicity of maps has been studied. It relates to generalized convexity of functions as monotonicity relates to convexity. Generalized monotonicity plays a role in variational inequality problems, complementarity problems and, more generally, in equilibrium problems.
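The distinction the blurb draws can be checked numerically on a concrete example (an assumed illustration, not from the book): f(x) = sqrt(|x|) has interval level sets, so it is quasiconvex, yet it violates the convexity inequality f(lx + (1-l)y) <= l*f(x) + (1-l)*f(y) while satisfying the weaker quasiconvexity inequality f(lx + (1-l)y) <= max(f(x), f(y)).

```python
import math

def f(x):
    # Example function: quasiconvex (its lower level sets {x : f(x) <= c}
    # are the intervals [-c^2, c^2]) but not convex.
    return math.sqrt(abs(x))

def convex_ineq(x, y, lam):
    return f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-12

def quasiconvex_ineq(x, y, lam):
    return f(lam * x + (1 - lam) * y) <= max(f(x), f(y)) + 1e-12

# Convexity fails at the midpoint of [0, 4]: f(2) ~ 1.414 > 0.5*f(0) + 0.5*f(4) = 1.
assert not convex_ineq(0.0, 4.0, 0.5)

# The quasiconvexity inequality holds on a grid of test points.
pts = [i / 2 - 5 for i in range(21)]
assert all(quasiconvex_ineq(x, y, l / 10) for x in pts for y in pts for l in range(11))
```

The grid check is of course only evidence, not a proof; the proof is the observation about the level sets in the comment.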
These three volumes comprise the proceedings of the US/Japan Conference, held in honour of Professor H. Akaike, on the 'Frontiers of Statistical Modeling: An Informational Approach'. The major theme of the conference was the implementation of statistical modeling through an informational approach to complex, real-world problems. Volume 1 contains papers which deal with the Theory and Methodology of Time Series Analysis; it also contains the text of the banquet talk by E. Parzen and the keynote lecture of H. Akaike. Volume 2 is devoted to the general topic of Multivariate Statistical Modeling, and Volume 3 contains the papers relating to Engineering and Scientific Applications. For all scientists whose work involves statistics.
This book outlines Bayesian statistical analysis in great detail, from the development of a model through the process of making statistical inference. The key feature of this book is that it covers models that are most commonly used in social science research - including the linear regression model, generalized linear models, hierarchical models, and multivariate regression models - and it thoroughly develops each real-data example in painstaking detail.
This book contains selected contributions from geoENV96 - the First European Conference on Geostatistics for Environmental Applications, held in Lisbon in November 1996. It is the first of a planned biennial geoENV book series. The series is intended to show the state of the art of geostatistics in environmental applications, with new cases, results and relevant discussions from leading researchers and practitioners around the world. New and important theoretical and practical developments of geostatistics in the environmental field were compiled from three main areas: hydrology, groundwater and groundwater contamination; soil contamination and site remediation; and air pollution, ecology and other applications. The book presents a set of geostatistical tools and approaches used to successfully resolve a variety of specific problems in environmental modelling, especially those resulting from the typical scarcity of spatial sampling, the time component of very dynamic systems, the modelling of various systems of contaminants, the uncertainty assessment of health cost functions, etc. Prominent topics concerning methodological tools and methods, stochastic simulation techniques, models integrating soft information (seismic and remote sensing images), inverse modelling of groundwater flow, neural network classification, and change of support and up-scaling are also included. This publication will be of great interest and practical value to geostatisticians working both in universities and in industry.
Often a statistical analysis involves use of a set of alternative models for the data. A "model-selection criterion" is a formula which provides a figure-of-merit for the alternative models. Generally the alternative models will involve different numbers of parameters. Model-selection criteria take into account both the goodness-of-fit of a model and the number of parameters used to achieve that fit. 1.1. SETS OF ALTERNATIVE MODELS. Thus the focus in this paper is on data-analytic situations in which there is consideration of a set of alternative models. Choice of a subset of explanatory variables in regression, the degree of a polynomial regression, the number of factors in factor analysis, or the number of clusters in cluster analysis are examples of such situations. 1.2. MODEL SELECTION VERSUS HYPOTHESIS TESTING. In exploratory data analysis or in a preliminary phase of inference, an approach based on model-selection criteria can offer advantages over tests of hypotheses. The model-selection approach avoids the problem of specifying error rates for the tests. With model selection the focus can be on simultaneous competition between a broad class of competing models rather than on consideration of a sequence of simpler and simpler models.
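The figure-of-merit idea can be sketched with AIC, one common criterion, used here purely as an assumed illustration rather than as the book's own method. For Gaussian regression, AIC equals n*ln(RSS/n) + 2k up to an additive constant, so the fit term competes directly with the 2k parameter penalty when choosing, say, a polynomial degree:

```python
import math
import numpy as np

def aic(rss, k, n):
    """Gaussian-likelihood AIC up to an additive constant:
    fit term n*ln(RSS/n) plus the parameter-count penalty 2k."""
    return n * math.log(rss / n) + 2 * k

# Toy data: a clear quadratic trend plus a small deterministic perturbation.
x = np.linspace(0.0, 4.0, 30)
y = 1.0 + 2.0 * x + x**2 + 0.5 * np.sin(7.0 * x)

scores = {}
for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    scores[degree] = aic(rss, degree + 1, len(x))

best = min(scores, key=scores.get)
```

A straight line misses the curvature badly, so its AIC is far worse than the quadratic's despite the smaller penalty; among higher degrees the penalty discourages fitting the small perturbation. This is exactly the simultaneous competition among models that the excerpt contrasts with sequential hypothesis tests.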
This is a new, completely revised, updated and enlarged edition of the author's Ergebnisse vol. 46: "Spin Glasses: A Challenge for Mathematicians" in two volumes (this is the 2nd volume). In the eighties, a group of theoretical physicists introduced several models for certain disordered systems, called "spin glasses." These models are simple and rather canonical random structures, of considerable interest for several branches of science (statistical physics, neural networks and computer science). The physicists studied them by non-rigorous methods and predicted spectacular behaviors. This book introduces in a rigorous manner this exciting new area to the mathematically minded reader. It requires no knowledge whatsoever of any physics. The present Volume II contains a considerable amount of new material, in particular all the fundamental low-temperature results obtained after the publication of the first edition.
The author investigates athermal fluctuations from the viewpoint of statistical mechanics in this thesis. Stochastic methods are theoretically very powerful in describing fluctuations of thermodynamic quantities in small systems on the level of a single trajectory, and have recently been developed on the basis of stochastic thermodynamics. This thesis proposes, for the first time, a systematic framework to describe athermal fluctuation by developing stochastic thermodynamics for non-Gaussian processes, whereas thermal fluctuations are mainly addressed from the viewpoint of Gaussian stochastic processes in most conventional studies. First, the book provides an elementary introduction to stochastic processes and stochastic thermodynamics. The author derives a Langevin-like equation with non-Gaussian noise as a minimal stochastic model for athermal systems, and its analytical solution, obtained by developing systematic expansions, is shown as the main result. Furthermore, the author presents a thermodynamic framework for such non-Gaussian fluctuations, and studies some thermodynamic phenomena, i.e. heat conduction and energy pumping, which show distinct characteristics from conventional thermodynamics. The theory introduced in the book provides a systematic foundation for describing the dynamics of athermal fluctuation quantitatively and for analyzing its thermodynamic properties on the basis of stochastic methods.
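A minimal sketch of the kind of model the blurb alludes to (an assumed illustration, not the thesis's actual equations): an Euler scheme for a Langevin-type equation dv = -gamma*v dt + Gaussian thermal noise + Poisson shot noise, where the shot-noise kicks supply the non-Gaussian, athermal part of the fluctuation.

```python
import numpy as np

def simulate(gamma=1.0, d_thermal=0.1, jump_rate=0.5, jump_size=1.0,
             dt=1e-3, steps=20_000, seed=0):
    """Euler scheme for dv = -gamma*v dt + sqrt(2*D) dW + shot noise.
    All parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    v = np.empty(steps)
    v[0] = 0.0
    for i in range(1, steps):
        # Thermal part: Gaussian increment with diffusion coefficient D.
        gaussian = np.sqrt(2.0 * d_thermal * dt) * rng.standard_normal()
        # Athermal part: Poisson-distributed kicks of fixed size, random sign.
        kicks = rng.poisson(jump_rate * dt)
        shot = jump_size * kicks * rng.choice([-1.0, 1.0])
        v[i] = v[i - 1] - gamma * v[i - 1] * dt + gaussian + shot
    return v

v = simulate()
```

The rare, large kicks give the stationary distribution heavier tails than the purely Gaussian (Ornstein-Uhlenbeck) case, which is the qualitative feature the thesis's non-Gaussian framework is built to handle.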
This book covers the basic statistical and analytical techniques of computer intrusion detection. It is aimed at both statisticians looking to become involved in the data analysis aspects of computer security and computer scientists looking to expand their toolbox of techniques for detecting intruders. The book is self-contained, assuming no expertise in either computer security or statistics. It begins with a description of the basics of TCP/IP, followed by chapters dealing with network traffic analysis, network monitoring for intrusion detection, host based intrusion detection, and computer viruses and other malicious code. Each section develops the necessary tools as needed. There is an extensive discussion of visualization as it relates to network data and intrusion detection. The book also contains a large bibliography covering the statistical, machine learning, and pattern recognition literature related to network monitoring and intrusion detection. David Marchette is a scientist at the Naval Surface Warfare Center in Dahlgren, Virginia. He has worked at Navy labs for 15 years, doing research in pattern recognition, computational statistics, and image analysis. He has been a fellow by courtesy in the mathematical sciences department of the Johns Hopkins University since 2000. He has been working in computer intrusion detection for several years, focusing on statistical methods for anomaly detection and visualization. Dr. Marchette received a Master's in Mathematics from the University of California, San Diego in 1982 and a Ph.D. in Computational Sciences and Informatics from George Mason University in 1996.
Gian-Carlo Rota was born in Vigevano, Italy, in 1932. He died in Cambridge, Massachusetts, in 1999. He had several careers, most notably as a mathematician, but also as a philosopher and a consultant to the United States government. His mathematical career was equally varied. His early mathematical studies were at Princeton (1950 to 1953) and Yale (1953 to 1956). In 1956, he completed his doctoral thesis under the direction of Jacob T. Schwartz. This thesis was published as the paper "Extension theory of differential operators I", the first paper reprinted in this volume. Rota's early work was in analysis, more specifically, in operator theory, differential equations, ergodic theory, and probability theory. In the 1960's, Rota was motivated by problems in fluctuation theory to study some operator identities of Glen Baxter (see [7]). Together with other problems in probability theory, this led Rota to study combinatorics. His series of papers, "On the foundations of combinatorial theory", led to a fundamental re-evaluation of the subject. Later, in the 1990's, Rota returned to some of the problems in analysis and probability theory which motivated his work in combinatorics. This was his intention all along, and his early death robbed mathematics of his unique perspective on linkages between the discrete and the continuous. Glimpses of his new research programs can be found in [2,3,6,9,10].