The modern theory of Sequential Analysis came into existence simultaneously in the United States and Great Britain in response to demands for more efficient sampling inspection procedures during World War II. The developments were admirably summarized by their principal architect, A. Wald, in his book Sequential Analysis (1947). In spite of the extraordinary accomplishments of this period, there remained some dissatisfaction with the sequential probability ratio test and Wald's analysis of it. (i) The open-ended continuation region, with the concomitant possibility of taking an arbitrarily large number of observations, seems intolerable in practice. (ii) Wald's elegant approximations based on "neglecting the excess" of the log likelihood ratio over the stopping boundaries are not especially accurate and do not allow one to study the effect of taking observations in groups rather than one at a time. (iii) The beautiful optimality property of the sequential probability ratio test applies only to the artificial problem of testing a simple hypothesis against a simple alternative. In response to these issues, and to new motivation from the direction of controlled clinical trials, numerous modifications of the sequential probability ratio test were proposed and their properties studied, often by simulation or lengthy numerical computation. (A notable exception is Anderson, 1960; see III.7.) In the past decade it has become possible to give a more complete theoretical analysis of many of the proposals and hence to understand them better.
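As a concrete illustration of the test under discussion, here is a minimal sketch of Wald's sequential probability ratio test for Bernoulli data, using his classical boundary approximations A ≈ (1 − β)/α and B ≈ β/(1 − α); the parameters and error rates below are illustrative assumptions, not values from the book:

```python
import math
import random

def sprt_bernoulli(p0, p1, alpha, beta, draw, max_n=10_000):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on Bernoulli data.

    Uses the classical approximations (which 'neglect the excess'
    over the boundaries): accept H1 when the log likelihood ratio
    exceeds log((1 - beta)/alpha), accept H0 when it falls below
    log(beta/(1 - alpha)).
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr, n = 0.0, 0
    while lower < llr < upper and n < max_n:
        x = draw()                      # one Bernoulli observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        n += 1
    return ("H1" if llr >= upper else "H0"), n

# Illustrative run: true p = 0.6, testing p0 = 0.5 vs p1 = 0.6.
random.seed(1)
decision, n = sprt_bernoulli(0.5, 0.6, alpha=0.05, beta=0.10,
                             draw=lambda: random.random() < 0.6)
print(decision, "after", n, "observations")
```

Note how the open-ended continuation region criticized in point (i) shows up here: without the max_n guard, the loop could in principle run arbitrarily long.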
Design and Analysis in Educational Research Using jamovi is an integrated approach to learning about research design alongside statistical analysis concepts. Strunk and Mwavita maintain a focus on applied educational research throughout the text, with practical tips and advice on how to do high-quality quantitative research. Based on their successful SPSS version of the book, the authors focus on using jamovi in this version due to its accessibility as open source software and its ease of use. The book teaches research design (including epistemology, research ethics, forming research questions, quantitative design, sampling methodologies, and design assumptions), introductory statistical concepts (including descriptive statistics, probability theory, and sampling distributions), basic statistical tests (like z and t), and ANOVA designs, including more advanced designs like the factorial ANOVA and mixed ANOVA. This textbook is tailor-made for first-level doctoral courses in research design and analysis. It will also be of interest to graduate students in education and educational research. The book includes Support Material with downloadable data sets, and new case study material from the authors for teaching on race, racism, and Black Lives Matter, available at www.routledge.com/9780367723088.
This volume presents a practical and unified approach to categorical data analysis based on the Akaike Information Criterion (AIC) and the Akaike Bayesian Information Criterion (ABIC). Conventional procedures for categorical data analysis are often inappropriate because the classical test procedures employed are too closely tied to specific models. The approach described in this volume enables actual problems encountered by data analysts to be handled much more successfully. Among the topics explicitly dealt with are the problem of variable selection for categorical data, Bayesian binary regression, and a nonparametric density estimator and its application to nonparametric test problems. The practical utility of the procedures developed is demonstrated through their application to the analysis of various data sets. This volume complements the volume Akaike Information Criterion Statistics, which has already appeared in this series. It is intended for statisticians working in mathematics, the social, behavioural, and medical sciences, and engineering.
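For reference, the two criteria in the title have standard forms; the AIC is as below, and the ABIC expression shown is the formulation usually attributed to Akaike for models with hyperparameters, both stated from general knowledge rather than quoted from this volume:

```latex
% AIC for a model with k free parameters and maximized likelihood L(\hat\theta):
\mathrm{AIC} \;=\; -2\,\log L(\hat{\theta}) \;+\; 2k
% ABIC for a Bayesian model with prior \pi(\theta \mid \lambda)
% governed by hyperparameters \lambda:
\mathrm{ABIC} \;=\; -2\,\max_{\lambda}\,\log \int L(\theta)\,\pi(\theta \mid \lambda)\,d\theta
\;+\; 2\,\dim(\lambda)
```

In both cases the model with the smaller criterion value is preferred.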
Random Generation of Trees is about a field on the crossroads between computer science, combinatorics and probability theory. Computer scientists need random generators for performance analysis, simulation, image synthesis, etc. In this context random generation of trees is of particular interest. The algorithms presented here are efficient and easy to code. Some aspects of Horton-Strahler numbers, programs written in C, and pictures are presented in the appendices. The complexity analysis is done rigorously both in the worst and average cases. Random Generation of Trees is intended for students in computer science and applied mathematics as well as researchers interested in random generation.
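For a flavor of the subject, here is one of the simplest random tree generators, the uniform random recursive tree, in which each new node attaches to a uniformly chosen earlier node; this is a generic illustration in Python, not one of the book's algorithms (those are given in C):

```python
import random

def random_recursive_tree(n, seed=0):
    """Parent array of a uniform random recursive tree on nodes 0..n-1.

    Node 0 is the root; node i (i >= 1) attaches to a parent chosen
    uniformly from {0, ..., i-1}, so each recursive tree on n nodes
    is produced with equal probability. Runs in O(n) time.
    """
    rng = random.Random(seed)
    parent = [None]                      # node 0 is the root
    for i in range(1, n):
        parent.append(rng.randrange(i))  # uniform among earlier nodes
    return parent

print(random_recursive_tree(10))
```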
The author's research has been directed towards inference involving observables rather than parameters. In this book, he brings together his views on predictive or observable inference and its advantages over parametric inference. While the book discusses a variety of approaches to prediction including those based on parametric, nonparametric, and nonstochastic statistical models, it is devoted mainly to predictive applications of the Bayesian approach. It not only substitutes predictive analyses for parametric analyses, but it also presents predictive analyses that have no real parametric analogues. It demonstrates that predictive inference can be a critical component of even strict parametric inference when dealing with interim analyses. This approach to predictive inference will be of interest to statisticians, psychologists, econometricians, and sociologists.
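In the Bayesian approach to prediction that the book emphasizes, the basic object is the posterior predictive distribution; in standard notation (a textbook statement, not a quotation from the book):

```latex
% Posterior predictive density of a future observable y_{new}, given data y,
% under model f(\cdot \mid \theta) with prior \pi(\theta):
p(y_{\mathrm{new}} \mid y) \;=\; \int f(y_{\mathrm{new}} \mid \theta)\,
    \pi(\theta \mid y)\, d\theta,
\qquad
\pi(\theta \mid y) \;\propto\; f(y \mid \theta)\,\pi(\theta).
```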
This book is a result of recent developments in several fields. Mathematicians, statisticians, finance theorists, and economists found several interconnections in their research. The emphasis was on common methods, although the applications were also interrelated. The main topic is dynamic stochastic models, in which information arrives and decisions are made sequentially. This gives rise to what finance theorists call option value, and what some economists label quasi-option value. Some papers extend the mathematical theory, some deal with new methods of economic analysis, and some present important applications, to natural resources in particular.
- The book discusses recent techniques in NGS data analysis, the material most needed by biologists (students and researchers) in the wake of numerous genomic projects and the trend toward genomic research.
- The book includes both theory and practice for NGS data analysis, so readers will understand the concepts and learn how to do the analysis using the most recent programs.
- The steps of the application workflows are written in a manner that can be followed for related projects.
- Each chapter includes worked examples with real data available from the NCBI databases. Programming code and outputs are accompanied by explanation.
- The book content is suitable as teaching material for biology and bioinformatics students.
The book meets the requirements of a complete semester course on sequencing data analysis, covers the latest applications for Next Generation Sequencing, and covers data preprocessing, genome assembly, variant discovery, gene profiling, epigenetics, and metagenomics.
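As a taste of the hands-on style described above, here is a minimal, self-contained Python sketch (our own illustration, not the book's code) that reads a FASTQ file, the standard four-line-per-read sequencing format, and computes per-read GC content; the filename reads.fastq is a placeholder:

```python
def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def fastq_records(path):
    """Yield (read_id, sequence) pairs from a simple FASTQ file.

    FASTQ stores each read as four lines: @id, sequence, '+', qualities.
    (Assumes unwrapped sequences, the common case for short reads.)
    """
    with open(path) as fh:
        while True:
            header = fh.readline().rstrip()
            if not header:
                break
            seq = fh.readline().rstrip()
            fh.readline()          # '+' separator line
            fh.readline()          # quality line (ignored here)
            yield header[1:], seq

# Example usage (assumes a file named reads.fastq exists):
# for rid, seq in fastq_records("reads.fastq"):
#     print(rid, round(gc_content(seq), 3))
```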
This text, combining analysis and tools from mathematical probability, focuses on a systematic and novel exposition of a recent trend in pure and applied mathematics. The emphasis is on the unity of basis constructions and their expansions (bases which are computationally efficient), and on their use in several areas, from wavelets to fractals. The aim of this book is to show how to use processes from probability, random walks on branches, and their path-space measures in the study of convergence questions from harmonic analysis, with particular emphasis on the infinite products that arise in the analysis of wavelets. The book brings together tools from engineering (especially signal/image processing) and mathematics (harmonic analysis and operator theory). It is addressed to an audience of students and workers in a variety of fields, meeting at the crossroads where they merge, and it features: a hands-on approach with generous motivation; new pedagogical features to enhance teaching techniques and experience; more than 34 figures with detailed captions, illustrating the main ideas and visualizing the deeper connections in the subject; separate sections that explain engineering terms to mathematicians and operator theory to engineers; and an interdisciplinary presentation and approach, combining central ideas from mathematical analysis (with a twist in the direction of operator theory and harmonic analysis), probability, computation, physics, and engineering. The presentation includes numerous exercises that are essential to reinforce fundamental concepts by helping both students and applied users practice sketching functions or iterative schemes, as well as to hone computational skills. Graduate students, researchers, applied mathematicians, engineers, and physicists alike will benefit from this unique work in book form that fills a gap in the literature.
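The infinite products mentioned above are the standard ones of wavelet analysis: for a low-pass filter m_0 with m_0(0) = 1, the scaling identity iterates to a convergent infinite product (standard background, not a quotation from the book):

```latex
% Scaling identity for the scaling function \varphi, and its
% infinite-product solution in the Fourier domain:
\hat{\varphi}(\xi) \;=\; m_0\!\left(\tfrac{\xi}{2}\right)\hat{\varphi}\!\left(\tfrac{\xi}{2}\right)
\quad\Longrightarrow\quad
\hat{\varphi}(\xi) \;=\; \prod_{j=1}^{\infty} m_0\!\left(\frac{\xi}{2^{\,j}}\right).
```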
Most applications generate large datasets: social networking and social influence programs, smart city applications, smart house environments, Cloud applications, public web sites, scientific experiments and simulations, data warehouses, monitoring platforms, and e-government services. Data grows rapidly, since applications produce continuously increasing volumes of both unstructured and structured data. Large-scale interconnected systems aim to aggregate and efficiently exploit the power of widely distributed resources. In this context, major solutions for scalability, mobility, reliability, fault tolerance, and security are required to achieve high performance and to create a smart environment. The impact on data processing, transfer, and storage is a need to re-evaluate approaches and solutions to better answer user needs. A variety of solutions for specific applications and platforms exist, so a thorough and systematic analysis of existing solutions for data science, data analytics, and the methods and algorithms used in Big Data processing and storage environments is significant in designing and implementing a smart environment. Fundamental issues pertaining to smart environments (smart cities, ambient assisted living, smart houses, greenhouses, cyber-physical systems, etc.) are reviewed. Most current efforts still do not adequately address the heterogeneity of different distributed systems, the interoperability between them, and their resilience. This book primarily encompasses practical approaches that promote research in all aspects of data processing and data analytics across different types of systems: Cluster Computing, Grid Computing, Peer-to-Peer, and Cloud/Edge/Fog Computing, all involving elements of heterogeneity and a large variety of tools and software to manage them. The main role of resource management techniques in this domain is to create suitable frameworks for the development of applications and their deployment in smart environments, with respect to high performance. The book focuses on topics covering algorithms, architectures, management models, high performance computing techniques, and large-scale distributed systems.
Praise for the First Edition "A very useful book for self study and reference." "Very well written. It is concise and really packs a lot of material in a valuable reference book." "An informative and well-written book . . . presented in an easy-to-understand style with many illustrative numerical examples taken from engineering and scientific studies." Practicing engineers and scientists often need to use statistical approaches to solve problems in an experimental setting, yet many have little formal training in statistics. Statistical Design and Analysis of Experiments gives such readers a carefully selected, practical background in the statistical techniques that are most useful to experimenters and data analysts who collect, analyze, and interpret data. The First Edition of this now-classic book garnered praise in the field. Now its authors have updated and revised the text, incorporating readers' suggestions as well as a number of new developments. Statistical Design and Analysis of Experiments, Second Edition emphasizes the strategy of experimentation, data analysis, and the interpretation of experimental results, presenting statistics as an integral component of experimentation from the planning stage to the presentation of conclusions, and gives an overview of the conceptual foundations of modern statistical practice.
Ideal for both students and professionals, this focused and cogent reference has proven to be an excellent classroom textbook with numerous examples. It deserves a place among the tools of every engineer and scientist working in an experimental setting.
Applied Linear Regression for Business Analytics with R introduces regression analysis to business students using the R programming language with a focus on illustrating and solving real-time, topical problems. Specifically, this book presents modern and relevant case studies from the business world, along with clear and concise explanations of the theory, intuition, hands-on examples, and the coding required to employ regression modeling. Each chapter includes the mathematical formulation and details of regression analysis and provides in-depth practical analysis using the R programming language.
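Although the book's case studies use R, a minimal regression sketch in Python (an illustrative stand-in with simulated data, not the book's code or data) shows the kind of model being discussed:

```python
import numpy as np

# Simulated business-style data: sales as a linear function of
# advertising spend plus noise (illustrative, not from the book).
rng = np.random.default_rng(0)
ad_spend = rng.uniform(10, 100, size=200)
sales = 5.0 + 0.8 * ad_spend + rng.normal(0, 5, size=200)

# Ordinary least squares via least-squares solve of X beta ~ y.
X = np.column_stack([np.ones_like(ad_spend), ad_spend])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)

residuals = sales - X @ beta
r2 = 1 - residuals.var() / sales.var()
print(f"intercept={beta[0]:.2f}, slope={beta[1]:.2f}, R^2={r2:.3f}")
```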
The process of developing predictive models includes many stages. Most resources focus on the modeling algorithms but neglect other critical aspects of the modeling process. This book describes techniques for finding the best representations of predictors for modeling and for finding the best subset of predictors for improving model performance. A variety of example data sets are used to illustrate the techniques, along with R programs for reproducing the results.
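Subset search of the kind described here can be illustrated by a greedy forward selection against a validation score; the sketch below is a generic Python stand-in (invented names and simulated data, not the book's R programs), selecting predictors that reduce validation mean squared error:

```python
import numpy as np

def forward_select(X_tr, y_tr, X_va, y_va):
    """Greedy forward selection of predictor columns by validation MSE."""
    def mse(cols):
        A = np.column_stack([np.ones(len(X_tr))] + [X_tr[:, c] for c in cols])
        B = np.column_stack([np.ones(len(X_va))] + [X_va[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
        return float(np.mean((y_va - B @ beta) ** 2))

    selected, best = [], mse([])
    while len(selected) < X_tr.shape[1]:
        scores = {c: mse(selected + [c])
                  for c in range(X_tr.shape[1]) if c not in selected}
        c, s = min(scores.items(), key=lambda kv: kv[1])
        if s >= best:              # stop when no candidate improves the score
            break
        selected.append(c)
        best = s
    return selected

# Illustrative run on simulated data where only columns 0 and 2 matter:
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = 2 * X[:, 0] - 3 * X[:, 2] + rng.normal(0, 0.5, size=300)
print(forward_select(X[:200], y[:200], X[200:], y[200:]))  # e.g. [2, 0]
```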
Sensitivity analysis and optimal shape design are key issues in engineering that have been affected by advances in the numerical tools currently available. This book, and its supplementary online files, presents basic optimization techniques that can be used to compute the sensitivity of a given design to local changes, or to improve its performance by local optimization over these design data. The relevance and scope of these techniques have improved dramatically in recent years because of progress in discretization strategies, optimization algorithms, automatic differentiation, software availability, and the power of personal computers. Numerical Methods in Sensitivity Analysis and Shape Optimization will be of interest to graduate students involved in mathematical modeling and simulation, as well as engineers and researchers in applied mathematics looking for an up-to-date introduction to optimization techniques, sensitivity analysis, and optimal design.
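As a toy illustration of sensitivity computation and local optimization (a finite-difference sketch under invented assumptions, not the book's discretization or automatic-differentiation machinery):

```python
import math

def performance(p):
    """Toy performance functional J(p) of a scalar design parameter p.

    Stand-in for an expensive simulation (e.g., a discretized PDE solve).
    """
    return (p - 2.0) ** 2 + math.sin(3.0 * p)

def sensitivity(J, p, h=1e-6):
    """Central finite-difference approximation of dJ/dp."""
    return (J(p + h) - J(p - h)) / (2 * h)

# Local improvement of the design parameter using the estimated gradient.
p = 0.5
for _ in range(50):
    p -= 0.1 * sensitivity(performance, p)
print(f"locally optimized p = {p:.4f}, J(p) = {performance(p):.4f}")
```

In practice automatic differentiation replaces the finite difference above, avoiding its step-size sensitivity and extra function evaluations.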
This revised edition offers an approach to information theory that is more general than the classical approach of Shannon. Classically, information is defined for an alphabet of symbols or for a set of mutually exclusive propositions (a partition of the probability space Ω) with corresponding probabilities adding up to 1. The new definition is given for an arbitrary cover of Ω, i.e. for a set of possibly overlapping propositions. The generalized information concept is called novelty, and it is accompanied by two concepts derived from it, designated as information and surprise, which describe "opposite" versions of novelty, information being related more to classical information theory and surprise being related more to the classical concept of statistical significance. In the discussion of these three concepts and their interrelations, several properties or classes of covers are defined, which turn out to be lattices. The book also presents applications of these concepts, mostly in statistics and in neuroscience.
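For orientation, the classical Shannon quantities that the book generalizes (the standard textbook definitions for the partition case, not the book's novelty or surprise measures) are:

```latex
% For a partition \{A_i\} of \Omega with \sum_i P(A_i) = 1:
% information of the event A_i, and entropy of the partition.
I(A_i) \;=\; -\log_2 P(A_i),
\qquad
H \;=\; -\sum_i P(A_i)\,\log_2 P(A_i).
```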
Gaussian linear modelling cannot address current signal processing demands. In modern contexts, such as Independent Component Analysis (ICA), progress has been made specifically by imposing non-Gaussian and/or non-linear assumptions. Hence, standard Wiener and Kalman theories no longer enjoy their traditional hegemony in the field as the standard computational engines for these problems. In their place, diverse principles have been explored, leading to a consequent diversity in the implied computational algorithms. The traditional on-line and data-intensive preoccupations of signal processing continue to demand that these algorithms be tractable. Increasingly, full probability modelling (the so-called Bayesian approach), or partial probability modelling using the likelihood function, is the pathway for design of these algorithms. However, the results are often intractable, and so the area of distributional approximation is of increasing relevance in signal processing. The Expectation-Maximization (EM) algorithm and Laplace approximation, for example, are standard approaches to handling difficult models, but these approximations (certainty equivalence, and Gaussian, respectively) are often too drastic to handle the high-dimensional, multi-modal and/or strongly correlated problems that are encountered. Since the 1990s, stochastic simulation methods have come to dominate Bayesian signal processing. Markov Chain Monte Carlo (MCMC) sampling, and related methods, are appreciated for their ability to simulate possibly high-dimensional distributions to arbitrary levels of accuracy. More recently, the particle filtering approach has addressed on-line stochastic simulation. Nevertheless, the wider acceptability of these methods, and, to some extent, of Bayesian signal processing itself, has been undermined by the large computational demands they typically make.
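As a concrete instance of the MCMC sampling mentioned above (a generic textbook random-walk Metropolis sampler, not the variational method this book develops):

```python
import math
import random

def metropolis(log_target, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D unnormalized density.

    Proposes x' = x + scale * N(0, 1) and accepts with probability
    min(1, target(x') / target(x)), computed in log space.
    """
    rng = random.Random(seed)
    x, lt = x0, log_target(x0)
    samples = []
    for _ in range(steps):
        prop = x + scale * rng.gauss(0.0, 1.0)
        lt_prop = log_target(prop)
        if rng.random() < math.exp(min(0.0, lt_prop - lt)):
            x, lt = prop, lt_prop
        samples.append(x)
    return samples

# Target: a standard normal, via its log density up to an additive constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=5.0, steps=20_000)
kept = draws[5_000:]              # discard burn-in
print(sum(kept) / len(kept))      # sample mean should be near 0
```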
Written to reveal statistical deceptions often thrust upon unsuspecting journalists, this book views the use of numbers from a public perspective. Illustrating how the statistical naivete of journalists often nourishes quantitative misinformation, the author's intent is to make journalists more critical appraisers of numerical data so that in reporting them they do not deceive the public. The book frequently uses actual examples of misused statistical data reported by the mass media and describes how journalists can avoid being taken in by them. Because reports of survey findings seldom give sufficient detail of the methods or of the actual questions asked, this book elaborates on the questions reporters should ask about methodology and on how to detect biased questions before reporting the findings to the public. As such, it may be looked upon as an elements of style for reporting statistics.
This book provides comprehensive guidance for the use of sound statistical methods and for the evaluation of fatigue data of welded components and structures obtained under constant amplitude loading and used to produce S-N curves. Recommendations for analyzing fatigue data are available, although they do not deal with all the statistical treatments that may be required to utilize fatigue test results, and none of them offers specific guidelines for analyzing fatigue data obtained from tests on welded specimens. For ease of use, working sheets are provided to assist in the proper statistical assessment of experimental fatigue data for practical problems, giving the procedure and a numerical application as illustration.
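A common statistical treatment of constant-amplitude fatigue data, and a useful reference point for the book's recommendations, is linear regression of log cycles-to-failure on log stress range; this sketch shows only the generic Basquin-type linearized fit with made-up data, not the book's specific working sheets:

```python
import numpy as np

# Illustrative (made-up) constant-amplitude fatigue data:
# stress range S in MPa, cycles to failure N.
S = np.array([200, 180, 160, 140, 120, 100], dtype=float)
N = np.array([1.2e5, 2.5e5, 6.0e5, 1.4e6, 3.8e6, 1.1e7])

# Basquin-type S-N model: log10(N) = a + m * log10(S).
m, a = np.polyfit(np.log10(S), np.log10(N), 1)
print(f"fitted slope m = {m:.2f}, intercept a = {a:.2f}")

# Predicted mean life at S = 150 MPa under the fitted model:
print(f"N(150 MPa) ~ {10 ** (a + m * np.log10(150)):.3g} cycles")
```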
This book introduces the fundamentals of the technology satisfaction model (TSM), supporting readers in applying the Rasch model and Structural Equation Modelling (SEM) - a multivariate technique - to higher education (HE) research. User satisfaction is traditionally measured along a single dimension. However, the TSM includes digital technologies for teaching, learning and research across three dimensions: computer efficacy, perceived ease of use and perceived usefulness. Establishing relationships among these factors is a challenge. Although commonly used in psychology to trace relationships, Rasch and SEM approaches are rarely used in educational technology or library and information science. This book, therefore, shows that combining these two analytical tools offers researchers better options for measurement and generalization in HE research. This title presents theoretical and methodological insight of use to researchers in HE.
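For reference, the Rasch model mentioned here is the one-parameter item response model; in its standard formulation (stated from general knowledge rather than from this book), the probability that a respondent with ability θ endorses item i with difficulty b_i is:

```latex
% Rasch (one-parameter logistic) item response function:
P(X_i = 1 \mid \theta, b_i) \;=\; \frac{e^{\,\theta - b_i}}{1 + e^{\,\theta - b_i}}.
```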
For years, organizations have struggled to make sense out of their data. IT projects designed to provide employees with dashboards, KPIs, and business-intelligence tools often take a year or more to reach the finish line...if they get there at all. This has always been a problem. Today, though, it's downright unacceptable. The world changes faster than ever. Speed has never been more important. By adhering to antiquated methods, firms lose the ability to see nascent trends, and to act upon them, until it's too late. But what if the process of turning raw data into meaningful insights didn't have to be so painful, time-consuming, and frustrating? What if there were a better way to do analytics? Fortunately, you're in luck...Analytics: The Agile Way is the eighth book from award-winning author and Arizona State University professor Phil Simon. Analytics: The Agile Way demonstrates how progressive organizations such as Google, Nextdoor, and others approach analytics in a fundamentally different way. They are applying the same Agile techniques that software developers have employed for years. They have abandoned large batches in favor of smaller ones...and their results will astonish you. Through a series of case studies and examples, Analytics: The Agile Way demonstrates the benefits of this new analytics mind-set: superior access to information, quicker insights, and the ability to spot trends far ahead of your competitors.
The choice of examples used in this text clearly illustrates its use for a one-year graduate course. The material to be presented in the classroom constitutes a little more than half the text, while the rest of the text provides background, offers different routes that could be pursued in the classroom, and supplies additional material that is appropriate for self-study. Of particular interest is a presentation of the major central limit theorems via Stein's method, either prior to, or as an alternative to, a characteristic function presentation. Additionally, considerable emphasis is placed on the quantile function as well as the distribution function, with both the bootstrap and trimming presented. The section on martingales covers censored-data martingales.
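For context, the classical central limit theorem the text builds toward, and the characterizing identity at the heart of Stein's method (both standard statements, not quotations from the text), are:

```latex
% Classical CLT for i.i.d. X_1, X_2, \dots with mean \mu and variance \sigma^2 < \infty:
\frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{n} (X_i - \mu) \;\xrightarrow{\;d\;}\; N(0,1).
% Stein characterization: Z \sim N(0,1) if and only if, for all bounded
% continuously differentiable f,
\mathbb{E}\!\left[f'(Z) - Z\, f(Z)\right] \;=\; 0.
```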
Summarizes information scattered in the technical literature on a subject too new to be included in most textbooks but of interest to statisticians, and to those who use statistics in science and education, at an advanced undergraduate or higher level. Overviews recent research on constructing…
This book provides a new grade methodology for intelligent data analysis. It introduces a specific infrastructure of concepts needed to describe data analysis models and methods. This monograph is the only book presently available covering both the theory and application of grade data analysis, and it is aimed at researchers and students as well as applied practitioners. The text is richly illustrated with examples and case studies and includes a short introduction to software implementing grade methods, which can be obtained from the editors.
This book includes discussions related to solutions of such tasks as: probabilistic description of the investment function; recovering the income function from GDP estimates; development of models for the economic cycles; selecting the time interval of pseudo-stationarity of cycles; estimating characteristics/parameters of cycle models; analysis of accuracy of model factors. All of the above constitute the general principles of a theory explaining the phenomenon of economic cycles and provide mathematical tools for their quantitative description. The introduced theory is applicable to macroeconomic analyses as well as econometric estimations of economic cycles.
The proceedings of this conference contain keynote addresses on recent developments in geotechnical reliability and limit state design in geotechnics, together with invited lectures on such topics as the modelling of soil variability, simulation of random fields, probabilistic treatment of rock joints, and probabilistic design of foundations and slopes. Other papers address analytical techniques in geotechnical reliability, modelling of soil properties, and probabilistic analysis of slopes, embankments and foundations.
You may like...
Song For Sarah - Lessons From My Mother, by Jonathan Jansen, Naomi Jansen (Hardcover)
Bioeconomical Solutions and Investments…, by Jose G. Vargas-Hernandez, Justyna Anna Zdunek-Wielgoaska (Hardcover), R4,850 / Discovery Miles 48 500
Spatial Regression Analysis Using…, by Daniel A. Griffith, Yongwan Chun, … (Paperback), R3,015 / Discovery Miles 30 150
Reflections on 21st Century Human…, by Mahabir S. Jaglan, Rajeshwari (Hardcover), R3,850 / Discovery Miles 38 500
Green Blockchain Technology for…, by Saravanan Krishnan, Raghvendra Kumar Kumar, … (Paperback), R2,482 / Discovery Miles 24 820