Functional Gaussian Approximation for Dependent Structures develops and analyses mathematical models for phenomena that evolve in time and influence one another. It provides a better understanding of the structure and asymptotic behaviour of stochastic processes. Two approaches are taken. Firstly, the authors present tools for dealing with the dependent structures used to obtain normal approximations. Secondly, they apply normal approximations to various examples. The main tools consist of inequalities for dependent sequences of random variables, leading to limit theorems, including the functional central limit theorem and functional moderate deviation principle. The results point out large classes of dependent random variables which satisfy invariance principles, making possible the statistical study of data coming from stochastic processes with both short and long memory. The dependence structures considered throughout the book include the traditional mixing structures, martingale-like structures, and weakly negatively dependent structures, which link the notion of mixing to the notions of association and negative dependence. Several applications are carefully selected to exhibit the importance of the theoretical results. They include random walks in random scenery and determinantal processes. In addition, due to their importance in analysing new data in economics, linear processes with dependent innovations are also considered and analysed.
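The object at the heart of the functional central limit theorem can be illustrated with a minimal stdlib-Python sketch (illustrative only, not from the book): the scaled partial-sum process of i.i.d. plus/minus-one steps, which Donsker's theorem sends to Brownian motion in the independent case.

```python
import math
import random

def scaled_partial_sums(n, rng):
    """Path of W_n(k/n) = S_k / sqrt(n) for a +/-1 random walk -- the
    process whose convergence to Brownian motion the functional CLT
    (Donsker's theorem) describes in the independent case."""
    s, path = 0.0, [0.0]
    for _ in range(n):
        s += rng.choice((-1.0, 1.0))
        path.append(s / math.sqrt(n))
    return path

# The endpoint W_n(1) should be approximately N(0, 1) for large n.
rng = random.Random(42)
endpoints = [scaled_partial_sums(400, rng)[-1] for _ in range(500)]
mean = sum(endpoints) / len(endpoints)
var = sum((e - mean) ** 2 for e in endpoints) / len(endpoints)
```

The book's contribution is extending such invariance principles from independent steps to the dependent structures it catalogues; the sketch shows only the simplest independent case.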
Peirce's Scientific Metaphysics is the first book devoted to understanding Charles Sanders Peirce's (1839-1914) metaphysics from the perspective of the scientific questions that motivated his thinking. Deftly situating Peirce's often original and pathbreaking ideas within their appropriate historical and scientific contexts, Reynolds traces his reliance upon the law of large numbers, which illustrated for Peirce the emergence of a stable order and regularity from a multitude of chance events, throughout his writings on late nineteenth-century physics, chemistry, biology, psychology, and cosmology. Along the way, Peirce's vision of an indeterministic and evolutionary cosmology is contrasted with the thought of other important late nineteenth-century scientists and philosophers, such as James Clerk Maxwell, Ludwig Boltzmann, William Thomson (Lord Kelvin), Herbert Spencer, Charles Darwin, and Ernst Haeckel. While offering a detailed account of the scientific ideas and theories essential for understanding Peirce's metaphysical system (e.g., the irreversibility of time and the reversibility of physical laws, the statistical law of large numbers), this book is written in a manner accessible to the non-specialist. This will make it especially attractive to students of Peirce's philosophy who lack familiarity with the scientific and mathematical ideas that are so central to his thought. Those with an interest in the history and philosophy of science, especially concerning the application of statistical and probabilistic thinking to physics, chemistry, biology, psychology, and cosmology, will find this discussion of Peirce's philosophy invaluable.
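The law of large numbers that Reynolds identifies as central to Peirce's thinking can be demonstrated in a few lines of stdlib Python (purely illustrative): the sample mean of many chance events settles toward a stable value, order emerging from randomness.

```python
import random

def mean_deviation(n_flips, seed):
    """|sample mean - 0.5| for n_flips simulated fair coin flips."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return abs(heads / n_flips - 0.5)

# Regularity emerges from a multitude of chance events:
# the deviation from the true mean shrinks as the sample grows.
small_sample = mean_deviation(100, seed=1)
large_sample = mean_deviation(100_000, seed=1)
```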
This book focuses on the spatial distribution of landslide hazards of the Darjeeling Himalayas. Knowledge-driven methods and statistical techniques such as the frequency ratio model (FRM), information value model (IVM), logistic regression model (LRM), index overlay model (IOM), certainty factor model (CFM), analytical hierarchy process (AHP), artificial neural network model (ANN), and fuzzy logic have been adopted to identify landslide susceptibility. In addition, a comparison between the various statistical models was made using the success rate curve (SRC), and it was found that the artificial neural network model (ANN), certainty factor model (CFM) and frequency ratio based fuzzy logic approach are the most reliable statistical techniques in the assessment and prediction of landslide susceptibility in the Darjeeling Himalayas. The study identified very high, high, moderate, low and very low landslide susceptibility locations to inform site-specific management options as well as developmental activities in the Darjeeling Himalayas. Particular attention is given to the assessment of various geomorphic, geotectonic and geohydrologic attributes that help to understand the role of different factors and corresponding classes in landslides, to apply different models, and to monitor and predict landslides. The use of various statistical and physical models to estimate landslide susceptibility is also discussed. The causes, mechanisms and types of landslides and their destructive character are elaborated in the book. Researchers interested in applying statistical tools for hazard zonation purposes will find the book appealing.
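Of the techniques listed, the frequency ratio model is the simplest to sketch. The following stdlib-Python snippet, with entirely hypothetical slope-class counts, shows the core calculation: a class's share of landslide cells divided by its share of total area.

```python
def frequency_ratio(class_landslide, total_landslide, class_area, total_area):
    """FR = (share of landslide cells in the class) / (share of area in
    the class). FR > 1 marks a class as more landslide-prone than average."""
    return (class_landslide / total_landslide) / (class_area / total_area)

# Hypothetical slope-angle classes, for illustration only.
fr_steep = frequency_ratio(60, 100, 20, 100)   # 0.6 / 0.2 = 3.0
fr_gentle = frequency_ratio(10, 100, 50, 100)  # 0.1 / 0.5 = 0.2
```

A susceptibility map then sums the FR values of each cell's factor classes; the more elaborate models the book compares (ANN, CFM, fuzzy logic) refine this same idea.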
This open access book demonstrates how data quality issues affect all surveys and proposes methods that can be utilised to deal with the observable components of survey error in a statistically sound manner. The book begins by profiling the post-Apartheid period in South Africa's history, when the sampling frame and survey methodology for household surveys were undergoing periodic changes due to the changing geopolitical landscape in the country. It shows how different components of error had disproportionate magnitudes in different survey years, including coverage error, sampling error, nonresponse error, measurement error, processing error and adjustment error. The parameters of interest concern the earnings distribution, but despite this outcome of interest, the discussion is generalizable to any question in a random sample survey of households or firms. The book then investigates questionnaire design and item nonresponse by building a response propensity model for the employee income question in two South African labour market surveys: the October Household Survey (OHS, 1997-1999) and the Labour Force Survey (LFS, 2000-2003). This time period isolates a period of changing questionnaire design for the income question. Finally, the book is concerned with how to model employee income data with a mixture of continuous data, bounded response data and nonresponse. A variable with this mixture of data types is called coarse data. Because the income question consists of two parts -- an initial, exact income question and a bounded income follow-up question -- the resulting statistical distribution of employee income is both continuous and discrete. The book shows researchers how to appropriately deal with coarse income data using multiple imputation.
The take-home message from this book is that researchers have a responsibility to treat data quality concerns in a statistically sound manner, rather than making adjustments to public-use data in arbitrary ways, often underpinned by indefensible assumptions about an implicit, unobservable loss function in the data. The demonstration of how this can be done provides a replicable concept map, with applicable methods that can be utilised in any sample survey.
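A minimal stdlib-Python sketch (illustrative only; the distribution and bracket below are hypothetical) of the multiple-imputation idea for coarse income data: a bracketed response is completed several times by drawing from an assumed income distribution truncated to the reported bounds.

```python
import random

def impute_bracketed(lower, upper, mu, sigma, rng):
    """One imputation for a bracketed income response, by rejection
    sampling from a lognormal(mu, sigma) truncated to [lower, upper]."""
    while True:
        draw = rng.lognormvariate(mu, sigma)
        if lower <= draw <= upper:
            return draw

def multiple_impute(lower, upper, mu, sigma, m=5, seed=0):
    """Return m completed values for one coarse observation; analyses
    are run on each completed dataset and the results combined."""
    rng = random.Random(seed)
    return [impute_bracketed(lower, upper, mu, sigma, rng) for _ in range(m)]

# Hypothetical: respondent reported income "between 1000 and 2000".
imputations = multiple_impute(1000, 2000, mu=7.0, sigma=0.8, m=5)
```

In practice the lognormal parameters would be estimated from the exact responses rather than assumed, and the m completed analyses combined with Rubin's rules; this sketch shows only the truncated-draw step.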
Expert practical and theoretical coverage of runs and scans. This volume presents both theoretical and applied aspects of runs and scans, and illustrates their important role in reliability analysis through various applications from science and engineering. Runs and Scans with Applications presents new and exciting content in a systematic and cohesive way in a single comprehensive volume, complete with relevant approximations and explanations of some limit theorems. The authors provide detailed discussions of both classical and current problems in the field.
Runs and Scans with Applications offers broad coverage of the subject in the context of reliability and life-testing settings and serves as an authoritative reference for students and professionals alike.
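The two statistics in the book's title are easy to compute directly; a stdlib-Python sketch (illustrative, with 1 marking a "success" or failure event in a reliability sequence):

```python
def longest_run(seq, value=1):
    """Length of the longest uninterrupted run of `value` in seq."""
    best = cur = 0
    for x in seq:
        cur = cur + 1 if x == value else 0
        best = max(best, cur)
    return best

def scan_detects(seq, window, threshold):
    """Scan statistic as a detector: does any window of the given
    length contain at least `threshold` successes?"""
    return any(sum(seq[i:i + window]) >= threshold
               for i in range(len(seq) - window + 1))
```

For example, `longest_run([1, 1, 0, 1, 1, 1, 0])` is 3, and `scan_detects([0, 1, 1, 0, 1, 0, 0], window=4, threshold=3)` is True because the window starting at the second position holds three successes. The book's subject is the (much harder) distribution theory of such statistics.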
This is the second of a two-volume series on sampling theory. The mathematical foundations were laid in the first volume, and this book surveys the many applications of sampling theory both within mathematics and in other areas of science. Many of the topics covered here are not found in other books, and all are given an up-to-date treatment bringing the reader's knowledge up to research level. This book consists of ten chapters, written by ten different teams of authors, and the contents range over a wide variety of topics including combinatorial analysis, number theory, neural networks, derivative sampling, wavelets, stochastic signals, random fields, and abstract harmonic analysis. There is a comprehensive, up-to-date bibliography.
This book provides a coherent framework for understanding shrinkage estimation in statistics. The term refers to modifying a classical estimator by moving it closer to a target which could be known a priori or arise from a model. The goal is to construct estimators with improved statistical properties. The book focuses primarily on point and loss estimation of the mean vector of multivariate normal and spherically symmetric distributions. Chapter 1 reviews the statistical and decision theoretic terminology and results that will be used throughout the book. Chapter 2 is concerned with estimating the mean vector of a multivariate normal distribution under quadratic loss from a frequentist perspective. In Chapter 3 the authors take a Bayesian view of shrinkage estimation in the normal setting. Chapter 4 introduces the general classes of spherically and elliptically symmetric distributions. Point and loss estimation for these broad classes are studied in subsequent chapters. In particular, Chapter 5 extends many of the results from Chapters 2 and 3 to spherically and elliptically symmetric distributions. Chapter 6 considers the general linear model with spherically symmetric error distributions when a residual vector is available. Chapter 7 then considers the problem of estimating a location vector which is constrained to lie in a convex set. Much of the chapter is devoted to one of two types of constraint sets, balls and polyhedral cones. In Chapter 8 the authors focus on loss estimation and data-dependent evidence reports. Appendices cover a number of technical topics including weakly differentiable functions; examples where Stein's identity doesn't hold; Stein's lemma and Stokes' theorem for smooth boundaries; harmonic, superharmonic and subharmonic functions; and modified Bessel functions.
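The canonical instance of the shrinkage studied in Chapter 2 is the James-Stein estimator, which dominates the usual mean estimate under quadratic loss in dimension three or more. A minimal stdlib-Python sketch (assuming known unit variances and shrinkage toward the origin):

```python
def james_stein(x):
    """James-Stein estimate of the mean vector from a single observation
    x of a multivariate normal with identity covariance (p >= 3):
    shrink x toward the origin by the factor 1 - (p - 2)/||x||^2."""
    p = len(x)
    if p < 3:
        raise ValueError("James-Stein domination requires dimension >= 3")
    norm_sq = sum(v * v for v in x)
    factor = 1 - (p - 2) / norm_sq
    return [factor * v for v in x]

# ||x||^2 = 25, so the shrinkage factor is 1 - 1/25 = 0.96.
est = james_stein([4.0, 0.0, 3.0])
```

The book's targets are more general (a priori points or model-based targets, spherically symmetric errors); this sketch shows only the classical normal-mean case.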
This book discusses risk management, product pricing, capital management and Return on Equity comprehensively and seamlessly. Strategic planning, including the required quantitative methods, is an essential part of bank management and control. A thorough introduction to the advanced methods of risk management for Credit Risk, Counterparty Credit Risk, Market Risk, Operational Risk and Risk Aggregation is provided. In addition, directly applicable concepts and data such as macroeconomic scenarios for strategic planning and stress testing as well as detailed scenarios for Operational Risk and advanced concepts for Credit Risk are presented in straightforward language. The book highlights the implications and chances of the Basel III and Basel IV implementations (2022 onwards), especially in terms of capital management and Return on Equity. A wealth of essential background information from practice, international observations and comparisons, along with numerous illustrative examples, make this book a useful resource for established and future professionals in bank management, risk management, capital management, controlling and accounting.
Providing a clear explanation of the fundamental theory of time series analysis and forecasting, this book couples theory with applications of two popular statistical packages, SAS and SPSS. The text examines moving average, exponential smoothing, Census X-11 deseasonalization, ARIMA, intervention, transfer function, and autoregressive error models, and briefly discusses ARCH and GARCH models. The book features treatments of forecast improvement with regression and autoregression combination models, model and forecast evaluation, and a sample size analysis for common time series models to attain adequate statistical power. To enhance the book's value as a teaching tool, the data sets and programs used in the book are made available on the Academic Press Web site. The careful linkage of the theoretical constructs with the practical considerations involved in utilizing the statistical packages makes it easy for the user to properly apply these techniques.
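Of the methods the text examines, moving average and exponential smoothing are the simplest to sketch. The following stdlib-Python snippet is illustrative only (the book itself works in SAS and SPSS, and the series below is made up):

```python
def exponential_smoothing_forecast(series, alpha):
    """Simple exponential smoothing: the one-step-ahead forecast is an
    exponentially weighted average of past observations."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def moving_average_forecast(series, window):
    """One-step-ahead forecast as the mean of the last `window` values."""
    return sum(series[-window:]) / window

data = [10.0, 12.0, 11.0, 13.0]
ses = exponential_smoothing_forecast(data, alpha=0.5)  # 12.0
ma = moving_average_forecast(data, window=2)           # 12.0
```

Larger `alpha` weights recent observations more heavily; the ARIMA and transfer-function models the book covers generalize these recursions.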
Quantum mechanics is arguably one of the most successful scientific theories ever, and its applications to chemistry, optics, and information theory are innumerable. This book provides the reader with a rigorous treatment of the main mathematical tools from harmonic analysis which play an essential role in the modern formulation of quantum mechanics. This treatment also allows the authors to suggest some new ideas and methods, with a special focus on topics such as the Wigner phase-space formalism and its applications to the theory of the density operator and its entanglement properties. The book can be read with profit by advanced undergraduate students in mathematics and physics, as well as by established researchers.
This book brings together carefully selected, peer-reviewed works on mathematical biology presented at the BIOMAT International Symposium on Mathematical and Computational Biology, which was held at the Institute of Numerical Mathematics, Russian Academy of Sciences, in October 2017, in Moscow. Topics covered include, but are not limited to, the evolution of spatial patterns on metapopulations, problems related to cardiovascular diseases and modeled by boundary control techniques in hemodynamics, algebraic modeling of the genetic code, and multi-step biochemical pathways. Also, new results are presented on topics like pattern recognition of probability distribution of amino acids, somitogenesis through reaction-diffusion models, mathematical modeling of infectious diseases, and many others. Experts, scientific practitioners, graduate students and professionals working in various interdisciplinary fields will find this book a rich resource for research and applications alike.
Recently, the use of statistical tools, methodologies, and models in human resource management (HRM) has increased because of human resources (HR) analytics and predictive HR decision making. To utilize these technological tools, HR managers and students must increase their knowledge of the resources' optimum application. Statistical Tools and Analysis in Human Resources Management is a critical scholarly resource that presents in-depth details on the application of statistics in every sphere of HR functions for optimal decision-making and analytical solutions. Featuring coverage on a broad range of topics such as leadership, industrial relations, training and development, and diversity management, this book is geared towards managers, professionals, upper-level students, administrators, and researchers seeking current information on the integration of HRM technologies.
The revision of this well-respected text presents a balanced approach to classical and Bayesian methods and now includes a chapter on simulation (including Markov chain Monte Carlo and the bootstrap), coverage of residual analysis in linear models, and many examples using real data. Probability & Statistics, Fourth Edition, was written for a one- or two-semester probability and statistics course. This course is offered primarily at four-year institutions and taken mostly by sophomore- and junior-level students majoring in mathematics or statistics. Calculus is a prerequisite, and a familiarity with the concepts and elementary properties of vectors and matrices is a plus.
Molecular-Genetic and Statistical Techniques for Behavioral and Neural Research presents the most exciting molecular and recombinant DNA techniques used in the analysis of brain function and behavior, a critical piece of the puzzle for clinicians, scientists, course instructors and advanced undergraduate and graduate students. Chapters examine neuroinformatics, genetic and neurobehavioral databases and data mining, also providing an analysis of natural genetic variation and principles and applications of forward (mutagenesis) and reverse genetics (gene targeting). In addition, the book discusses gene expression and its role in brain function and behavior, along with ethical issues in the use of animals in genetics testing. Written and edited by leading international experts, this book provides a clear presentation of the frontiers of basic research as well as translationally relevant techniques that are used by neurobehavioral geneticists.
Classic and modern statistical spectral estimation and time-series algorithms are applied on the landmark Byzantine Music recordings of I. Nafpliotis (1864-1942) to uncover the tonic intervals of the diatonic scale. His intervals are then compared to different theoretical and experimental results within a psychophysical framework of human pitch discrimination. In this attempt of "statistical archeology," the results are as enlightening as the master chanter himself.
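The classical spectral tool underlying such analyses is the periodogram. A naive-DFT sketch in stdlib Python (illustrative only, with a synthetic tone rather than the Nafpliotis recordings) recovers the dominant frequency of a sampled signal:

```python
import cmath
import math

def periodogram_peak(samples, sample_rate):
    """Frequency (Hz) of the largest periodogram ordinate, via a naive
    DFT over the bins 1 .. N//2 - 1 (DC term excluded)."""
    n = len(samples)
    best_k, best_power = 1, -1.0
    for k in range(1, n // 2):
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        power = abs(coeff) ** 2
        if power > best_power:
            best_k, best_power = k, power
    return best_k * sample_rate / n

# A 5 Hz tone sampled at 64 Hz for one second peaks in bin k = 5.
tone = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
peak = periodogram_peak(tone, 64)
```

Estimating the intervals of a sung diatonic scale requires far finer resolution and the modern estimators the study applies; this shows only the classical starting point.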
Multilevel modelling facilitates the analysis of hierarchical data where observations may be nested within higher levels of classification. In health care research, for example, a study may be undertaken to determine the variability of patient outcomes where these also vary by hospital or health care region. Inference can then be made on the efficacy of health care practices.
Health care professionals and public health researchers interested in the application of statistics will benefit greatly from this text. It will also be of interest to postgraduate students studying medical statistics.
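The central quantity in such hierarchical analyses is the share of outcome variability attributable to the higher level (e.g. hospitals). A crude sums-of-squares sketch in stdlib Python (illustrative only; a fitted multilevel model estimates the variance components properly):

```python
def between_group_share(groups):
    """Rough one-way decomposition: fraction of the total sum of squares
    explained by differences between group means (here, hospitals)."""
    all_obs = [y for g in groups for y in g]
    grand = sum(all_obs) / len(all_obs)
    means = [sum(g) / len(g) for g in groups]
    between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    within = sum((y - m) ** 2 for g, m in zip(groups, means) for y in g)
    return between / (between + within)

# Hypothetical patient outcomes nested within two hospitals.
hospitals = [[10.0, 11.0, 9.0], [20.0, 21.0, 19.0]]
share = between_group_share(hospitals)
```

Here almost all the variability lies between hospitals, which is exactly the situation in which ignoring the nesting would give misleading inference.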
The first edition of Theory of Rank Tests (1967) was the precursor to a unified and theoretically motivated treatise on the basic theory of tests based on ranks of the sample observations. For more than 25 years, it helped raise a generation of statisticians, both in cultivating their theoretical research in this fertile area and in using these tools in their application-oriented research. The present edition aims not only to revive this classical text by updating its findings but also to incorporate several other important areas which were either not properly developed before 1965 or have gone through an evolutionary development during the past 30 years. This edition therefore aims to fulfill the needs of academic as well as professional statisticians who want to pursue nonparametrics in their academic projects, consultation, and applied research.
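The most familiar member of the family the book treats is the Wilcoxon rank-sum test; its statistic is simple to compute. A stdlib-Python sketch (ties handled by average ranks; the distribution theory under the null is the book's subject):

```python
def rank_sum_statistic(sample_a, sample_b):
    """Wilcoxon rank-sum statistic: sum of the ranks of sample_a in the
    pooled, sorted data (ranks start at 1; tied values get the average
    of the positions they occupy)."""
    pooled = sorted(sample_a + sample_b)

    def rank(v):
        positions = [i + 1 for i, x in enumerate(pooled) if x == v]
        return sum(positions) / len(positions)

    return sum(rank(v) for v in sample_a)

# If every value in sample_a is below sample_b, its ranks are 1, 2, 3.
w = rank_sum_statistic([1, 2, 3], [4, 5, 6])
```

An unusually small or large statistic relative to its null distribution signals a location shift, and that null distribution depends only on the ranks, which is what makes the tests distribution-free.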
Handbook of Statistics: Disease Modelling and Public Health, Part B, Volume 37 addresses new challenges in existing and emerging diseases. As the second of a two-part volume, it covers an extensive range of techniques in the field, including chapters on reaction-diffusion equations and their application to bacterial communication, spike-and-slab methods in disease modeling, mathematical modeling of mass screening and parameter estimation, and an overview of individual-based and agent-based models for infectious disease transmission and evolution, along with a section on visual clustering of static and dynamic high-dimensional data. The volume also addresses one of the biggest challenges facing vaccine developers, public health planners, epidemiologists and health sector researchers: the lack of complete data on disease symptoms and disease epidemiology.
This book focuses on recent advances, approaches, theories and applications related to mixture models. In particular, it presents recent unsupervised and semi-supervised frameworks that consider mixture models as their main tool. The chapters consider mixture models in several interesting and challenging problems, such as parameter estimation, model selection, and feature selection. The goal of this book is to summarize the recent advances and modern approaches related to these problems. Each contributor presents novel research, a practical study, novel applications based on mixture models, or a survey of the literature. The book reports advances on classic problems in mixture modeling such as parameter estimation, model selection, and feature selection; presents theoretical and practical developments in mixture-based modeling and their importance in different applications; and discusses perspectives and challenging future work related to mixture modeling.
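The parameter-estimation problem the chapters revisit is classically solved by the EM algorithm. A minimal stdlib-Python sketch for a two-component univariate Gaussian mixture, on synthetic, well-separated data (illustrative only; real applications need the robustness and model-selection machinery the book surveys):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_gaussians(data, iters=50):
    """EM for a two-component univariate Gaussian mixture.
    Returns (weight of component 1, mu1, mu2, sigma1, sigma2)."""
    mu1, mu2 = min(data), max(data)            # spread-out initialization
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = w * normal_pdf(x, mu1, s1)
            p2 = (1 - w) * normal_pdf(x, mu2, s2)
            r.append(p1 / (p1 + p2))
        # M-step: reweighted means, spreads, and mixing weight
        n1 = sum(r)
        n2 = len(data) - n1
        w = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1) or s1
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2) or s2
    return w, mu1, mu2, s1, s2

rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(200)] + [rng.gauss(8, 1) for _ in range(200)]
w, mu1, mu2, s1, s2 = em_two_gaussians(data)
```

With two clusters centered at 0 and 8, the fitted means land near those values and the mixing weight near one half; choosing the number of components is the model-selection problem the book treats separately.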
A comprehensive and accessible guide to panel data analysis using EViews software. This book explores the use of EViews software in creating panel data analysis using appropriate empirical models and real datasets. Guidance is given on developing alternative descriptive statistical summaries for evaluation and providing policy analysis based on pool panel data. Various alternative models based on panel data are explored, including univariate general linear models, fixed effect models and causal models, and guidance on the advantages and disadvantages of each one is given. Panel Data Analysis using EViews: Provides step-by-step guidance on how to apply EViews software to panel data analysis using appropriate empirical models and real datasets. Examines a variety of panel data models along with the author's own empirical findings, demonstrating the advantages and limitations of each model. Presents growth models, time-related effects models, and polynomial models, in addition to the models which are commonly applied for panel data. Includes more than 250 examples divided into three groups of models (stacked, unstacked, and structured panel data), together with notes and comments. Provides guidance on which models not to use in a given scenario, along with advice on viable alternatives. Explores recent new developments in panel data analysis. An essential tool for advanced undergraduate or graduate students and applied researchers in finance, econometrics and population studies. Statisticians and data analysts involved with data collected over long time periods will also find this book a useful resource.
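The fixed-effect models the book fits in EViews rest on the within transformation: demean each variable inside each cross-section unit, then pool. A language-agnostic stdlib-Python sketch with hypothetical data and a single regressor (illustrative only; EViews handles this, and much more, internally):

```python
def within_estimator(panels):
    """One-way fixed-effects slope via the within transformation:
    demean x and y inside each unit, then run pooled OLS through the
    origin. `panels` maps a unit id to its list of (x, y) observations."""
    sxy = sxx = 0.0
    for obs in panels.values():
        xbar = sum(x for x, _ in obs) / len(obs)
        ybar = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            sxy += (x - xbar) * (y - ybar)
            sxx += (x - xbar) ** 2
    return sxy / sxx

# Two hypothetical units with different intercepts but a common slope of 2;
# demeaning removes the unit-specific intercepts.
panels = {"A": [(1, 12), (2, 14), (3, 16)],
          "B": [(1, 2), (2, 4), (3, 6)]}
beta = within_estimator(panels)
```

The demeaning step is why fixed-effect estimates are immune to any time-invariant unit characteristic, observed or not.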
This book describes computational problems related to kernel density estimation (KDE) - one of the most important and widely used data smoothing techniques. A very detailed description of novel FFT-based algorithms for both KDE computations and bandwidth selection is presented. The theory of KDE appears to have matured and is now well developed and understood. However, not much progress has been observed in terms of performance improvements. This book is an attempt to remedy this. The book primarily addresses researchers and advanced graduate or postgraduate students who are interested in KDE and its computational aspects. The book contains both some background and much more sophisticated material, so more experienced researchers in the KDE area may also find it interesting. The presented material is richly illustrated with many numerical examples using both artificial and real datasets. Also, a number of practical applications related to KDE are presented.
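The computational problem the book attacks is easy to see from the naive definition of a Gaussian KDE, sketched below in stdlib Python (illustrative only): every evaluation point requires a pass over all n data points, which is exactly the O(n*m) cost that the book's FFT-based binned algorithms avoid.

```python
import math

def gaussian_kde(data, x, bandwidth):
    """Evaluate a Gaussian kernel density estimate at the point x:
    the average of Gaussian bumps of width `bandwidth` centered at
    each data point. Naive O(n) per evaluation point."""
    n = len(data)
    return sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
               for xi in data) / (n * bandwidth * math.sqrt(2 * math.pi))

# The estimate is highest near the data and decays away from it.
data = [-1.0, 0.0, 1.0]
near = gaussian_kde(data, 0.0, 1.0)
far = gaussian_kde(data, 5.0, 1.0)
```

Bandwidth selection, the book's other main theme, is the harder problem: too small a bandwidth reproduces noise, too large a one smooths away real structure.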