Sequential Experimentation in Clinical Trials: Design and Analysis is developed from decades of work in research groups, statistical pedagogy, and workshop participation. Different parts of the book can be used for short courses on clinical trials, translational medical research, and sequential experimentation. The authors have successfully used the book to teach innovative clinical trial designs and statistical methods for Statistics Ph.D. students at Stanford University. There are additional online supplements for the book that include chapter-specific exercises and information. Sequential Experimentation in Clinical Trials: Design and Analysis covers the much broader subject of sequential experimentation that includes group sequential and adaptive designs of Phase II and III clinical trials, which have attracted much attention in the past three decades. In particular, the broad scope of design and analysis problems in sequential experimentation clearly requires a wide range of statistical methods and models from nonlinear regression analysis, experimental design, dynamic programming, survival analysis, resampling, and likelihood and Bayesian inference. The background material in these building blocks is summarized in Chapter 2 and Chapter 3 and certain sections in Chapter 6 and Chapter 7. Besides group sequential tests and adaptive designs, the book also introduces sequential change-point detection methods in Chapter 5 in connection with pharmacovigilance and public health surveillance. Together with dynamic programming and approximate dynamic programming in Chapter 3, the book therefore covers all basic topics for a graduate course in sequential analysis designs.
Intended as a first course in probability at post-calculus level, this book is of special interest to students majoring in computer science as well as in mathematics. Since calculus is used only occasionally in the text, students who have forgotten their calculus can nevertheless easily understand the book, and its slow, gentle style and clear exposition will also appeal. Basic concepts such as counting, independence, conditional probability, random variables, approximation of probabilities, generating functions, random walks and Markov chains are all clearly explained and backed by many worked exercises. The 1,196 numerical answers to the 405 exercises, many with multiple parts, are included at the end of the book, and throughout, there are various historical comments on the study of probability. These include biographical information on such famous contributors as Fermat, Pascal, the Bernoullis, DeMoivre, Bayes, Laplace, Poisson, and Markov. The book is of interest to a wide range of readers and useful in many undergraduate programs.
Statistical Methods for Long Term Memory Processes covers the diverse statistical methods and applications for data with long-range dependence. Presenting material that previously appeared only in journals, the author provides a concise and effective overview of probabilistic foundations, statistical methods, and applications. The material emphasizes basic principles and practical applications and provides an integrated perspective of both theory and practice. This book explores data sets from a wide range of disciplines, such as hydrology, climatology, telecommunications engineering, and high-precision physical measurement. The data sets are conveniently compiled in the index, and this allows readers to view statistical approaches in a practical context.
This book is an easy-to-read reference providing a link between functional analysis and diffusion processes. More precisely, the book takes readers to a mathematical crossroads of functional analysis (macroscopic approach), partial differential equations (mesoscopic approach), and probability (microscopic approach) via the mathematics needed for the hard parts of diffusion processes. This work brings these three fields of analysis together and provides a profound stochastic insight (microscopic approach) into the study of elliptic boundary value problems. The author presents a comprehensive study of diffusion processes from a broad perspective and explains mathematical matters more accessibly than is usual. The book is amply illustrated; 14 tables and 141 figures are provided with appropriate captions so that readers can easily understand powerful techniques of functional analysis for the study of diffusion processes in probability. The scope of the author's work has been, and continues to be, powerful methods of functional analysis for future research on elliptic boundary value problems and Markov processes via semigroups. A broad spectrum of readers can appreciate easily and effectively the stochastic intuition that this book conveys. Furthermore, the book will serve as a sound basis both for researchers and for graduate students in pure and applied mathematics who are interested in a modern version of the classical potential theory and Markov processes. For advanced undergraduates working in functional analysis, partial differential equations, and probability, it provides an effective opening to these three interrelated fields of analysis. Beginning graduate students and mathematicians in the field looking for a coherent overview will find the book a helpful starting point. This work will be a major influence in a very broad field of study for a long time.
This IMA Volume in Mathematics and its Applications DIRECTIONS IN ROBUST STATISTICS AND DIAGNOSTICS is based on the proceedings of the first four weeks of the six-week IMA 1989 summer program "Robustness, Diagnostics, Computing and Graphics in Statistics." An important objective of the organizers was to draw a broad set of statisticians working in robustness or diagnostics into collaboration on the challenging problems in these areas, particularly on the interface between them. We thank the organizers of the robustness and diagnostics program Noel Cressie, Thomas P. Hettmansperger, Peter J. Huber, R. Douglas Martin, and especially Werner Stahel and Sanford Weisberg who edited the proceedings. Avner Friedman, Willard Miller, Jr. PREFACE Central themes of all statistics are estimation, prediction, and making decisions under uncertainty. A standard approach to these goals is through parametric modelling. Parametric models can give a problem sufficient structure to allow standard, well-understood paradigms to be applied to make the required inferences. If, however, the parametric model is not completely correct, then the standard inferential methods may not give reasonable answers. In the last quarter century, particularly with the advent of readily available computing, more attention has been paid to the problem of inference when the parametric model used is not correctly specified.
Volume I of this two-volume text and reference work begins by providing a foundation in measure and integration theory. It then offers a systematic introduction to probability theory, and in particular, those parts that are used in statistics. This volume discusses the law of large numbers for independent and non-independent random variables, transforms, special distributions, convergence in law, the central limit theorem for normal and infinitely divisible laws, conditional expectations and martingales. Unusual topics include the uniqueness and convergence theorem for general transforms with characteristic functions, Laplace transforms, moment transforms and generating functions as special examples. The text contains substantive applications, e.g., epidemic models, the ballot problem, stock market models and water reservoir models, and discussion of the historical background. The exercise sets contain a variety of problems ranging from simple exercises to extensions of the theory.
High levels of uncertainty are a trademark of geological investigations, such as the search for oil, diamonds, and uranium. So business ventures related to geology, such as mineral exploration and mining, are naturally associated with higher risks than more traditional entrepreneurial ventures in industry and economy. There are also a number of dangerous natural hazards, e.g. earthquakes, volcanic activities, and inundations, that are the direct result of geological processes. It is of paramount interest to study them all, to describe them, to understand their origin and, if possible, to predict them. While uncertainties, geological risks and natural hazards are often mentioned in geological textbooks, conference papers, and articles, no comprehensive and systematic evaluation has so far been attempted. This book, written at an appropriately sophisticated level to deal with the complexity of these problems, presents a detailed evaluation of the entire problem, discussing it from both the geological and the mathematical aspects.
This book covers research work spanning the breadth of ventures, a variety of challenges and the finest of techniques used to address data and analytics, by subject matter experts from the business world. The content of this book highlights real-life business problems that are relevant to any industry and technology environment. This book helps readers become contributors to and accelerators of artificial intelligence, data science and analytics; deploy a structured life-cycle approach to data-related issues; apply appropriate analytical tools and techniques to analyze data; and deliver distinctive solutions. It also brings out the story-telling element in a compelling fashion using data and analytics. This prepares readers to drive quantitative and qualitative outcomes and apply this mindset to various business actions in different domains such as energy, manufacturing, health care, BFSI, security, etc.
The second edition of this book includes revised, updated, and additional material on the structure, theory, and application of classes of dynamic models in Bayesian time series analysis and forecasting. In addition to wide-ranging updates to central material, the second edition includes many more exercises and covers new topics at the research and application frontiers of Bayesian forecasting.
Volume II of this two-volume text and reference work concentrates on the applications of probability theory to statistics, e.g., the art of calculating densities of complicated transformations of random vectors, exponential models, consistency of maximum estimators, and asymptotic normality of maximum estimators. It also discusses topics of a pure probabilistic nature, such as stochastic processes, regular conditional probabilities, strong Markov chains, random walks, and optimal stopping strategies in random games. Unusual topics include the transformation theory of densities using Hausdorff measures, the consistency theory using the upper definition function, and the asymptotic normality of maximum estimators using twice stochastic differentiability. With an emphasis on applications to statistics, this is a continuation of the first volume, though it may be used independently of that book. Assuming a knowledge of linear algebra and analysis, as well as a course in modern probability, Volume II looks at statistics from a probabilistic point of view, touching only slightly on the practical computation aspects.
Bioinformatics is the study of biological information and biological systems - such as of the relationships between the sequence, structure and function of genes and proteins. The subject has seen tremendous development in recent years, and there are ever-increasing needs for good understanding of quantitative methods in the study of proteins. "Protein Bioinformatics: An Algorithmic Approach to Sequence and Structure Analysis" takes the novel approach of covering both the sequence and structure analysis of proteins in one volume and from an algorithmic perspective.
- Provides a comprehensive introduction to the analysis of protein sequences and structures.
- Provides an integrated presentation of methodology, examples, exercises and applications.
- Emphasises the algorithmic rather than mathematical aspects of the methods described.
- Covers comparison and alignment of protein sequences and structures as well as protein structure prediction, focusing on threading approaches.
- Written in an accessible yet rigorous style, suitable for biologists, mathematicians and computer scientists alike.
- Suitable both for developers and users of bioinformatics tools.
- Supported by a Web site featuring exercises, solutions, images, and computer programs.
"Protein Bioinformatics: An Algorithmic Approach to Sequence and Structure Analysis" is ideally suited for advanced undergraduate and graduate students of bioinformatics, statistics, mathematics and computer science. It also provides an excellent introduction and reference source on the subject for practitioners and researchers.
The statistical methods used to evaluate and compare different methods of measurement are a vital common component of all methods of scientific research. This book provides a practically orientated guide to the statistical models used in the evaluation of measurement errors with a wide variety of illustrative examples taken from across the sciences. After introducing basic concepts, such as precision, reproducibility and reliability, a detailed discussion of the sources of variability of measurements and associated variance components models is provided. The central chapters deal with the design and analysis of method comparison studies (concentrating primarily on quantitative measurements) ranging from simple paired comparisons to more complex studies involving three or more methods. This leads on to a review of methods for categorical measures.
The material accumulated and presented in this volume can be explained easily. At the start of my graduate studies in the early 1950s, I came across Grenander's (1950) thesis, and was much attracted to the entire subject considered there. I then began preparing for the necessary mathematics to appreciate and possibly make some contributions to the area. Thus after a decade of learning and some publications on the way, I wanted to write a modest monograph complementing Grenander's fundamental memoir. So I took a sabbatical leave from my teaching position at Carnegie-Mellon University, encouraged by an Air Force Grant for the purpose, followed by a couple more years of learning opportunity at the Institute for Advanced Study to complete the project. As I progressed, the plan grew larger, needing substantial background material which was made into an independent initial volume in (1979). In its preface I said: "My intention was to present the following material as the first part of a book treating the Inference Theory of stochastic processes, but the latter account has now receded to a distant future," namely for two more decades. Meanwhile, a much enlarged second edition of that early work has appeared (1995), and now I am able to present the main part of the original plan.
This work details the statistical inference of linear models including parameter estimation, hypothesis testing, confidence intervals, and prediction. The authors discuss the application of statistical theories and methodologies to various linear models such as the linear regression model, the analysis of variance model, the analysis of covariance model, and the variance components model.
This book should be of interest to statistics lecturers who want ready-made data sets complete with notes for teaching.
The modern theory of Sequential Analysis came into existence simultaneously in the United States and Great Britain in response to demands for more efficient sampling inspection procedures during World War II. The developments were admirably summarized by their principal architect, A. Wald, in his book Sequential Analysis (1947). In spite of the extraordinary accomplishments of this period, there remained some dissatisfaction with the sequential probability ratio test and Wald's analysis of it. (i) The open-ended continuation region with the concomitant possibility of taking an arbitrarily large number of observations seems intolerable in practice. (ii) Wald's elegant approximations based on "neglecting the excess" of the log likelihood ratio over the stopping boundaries are not especially accurate and do not allow one to study the effect of taking observations in groups rather than one at a time. (iii) The beautiful optimality property of the sequential probability ratio test applies only to the artificial problem of testing a simple hypothesis against a simple alternative. In response to these issues and to new motivation from the direction of controlled clinical trials, numerous modifications of the sequential probability ratio test were proposed and their properties studied, often by simulation or lengthy numerical computation. (A notable exception is Anderson, 1960; see III.7.) In the past decade it has become possible to give a more complete theoretical analysis of many of the proposals and hence to understand them better.
Design and Analysis in Educational Research Using jamovi is an integrated approach to learning about research design alongside statistical analysis concepts. Strunk and Mwavita maintain a focus on applied educational research throughout the text, with practical tips and advice on how to do high-quality quantitative research. Based on their successful SPSS version of the book, the authors focus on using jamovi in this version due to its accessibility as open source software, and ease of use. The book teaches research design (including epistemology, research ethics, forming research questions, quantitative design, sampling methodologies, and design assumptions) and introductory statistical concepts (including descriptive statistics, probability theory, sampling distributions), basic statistical tests (like z and t), and ANOVA designs, including more advanced designs like the factorial ANOVA and mixed ANOVA. This textbook is tailor-made for first-level doctoral courses in research design and analysis. It will also be of interest to graduate students in education and educational research. The book includes Support Material with downloadable data sets, and new case study material from the authors for teaching on race, racism, and Black Lives Matter, available at www.routledge.com/9780367723088.
This volume presents a practical and unified approach to categorical data analysis based on the Akaike Information Criterion (AIC) and the Akaike Bayesian Information Criterion (ABIC). Conventional procedures for categorical data analysis are often inappropriate because the classical test procedures employed are too closely related to specific models. The approach described in this volume enables actual problems encountered by data analysts to be handled much more successfully. Amongst various topics explicitly dealt with are the problem of variable selection for categorical data, a Bayesian binary regression, and a nonparametric density estimator and its application to nonparametric test problems. The practical utility of the procedure developed is demonstrated by considering its application to the analysis of various data. This volume complements the volume Akaike Information Criterion Statistics which has already appeared in this series. For statisticians working in mathematics, the social, behavioural, and medical sciences, and engineering.
Random Generation of Trees is about a field on the crossroads between computer science, combinatorics and probability theory. Computer scientists need random generators for performance analysis, simulation, image synthesis, etc. In this context random generation of trees is of particular interest. The algorithms presented here are efficient and easy to code. Some aspects of Horton-Strahler numbers, programs written in C and pictures are presented in the appendices. The complexity analysis is done rigorously both in the worst and average cases. Random Generation of Trees is intended for students in computer science and applied mathematics as well as researchers interested in random generation.
The author's research has been directed towards inference involving observables rather than parameters. In this book, he brings together his views on predictive or observable inference and its advantages over parametric inference. While the book discusses a variety of approaches to prediction including those based on parametric, nonparametric, and nonstochastic statistical models, it is devoted mainly to predictive applications of the Bayesian approach. It not only substitutes predictive analyses for parametric analyses, but it also presents predictive analyses that have no real parametric analogues. It demonstrates that predictive inference can be a critical component of even strict parametric inference when dealing with interim analyses. This approach to predictive inference will be of interest to statisticians, psychologists, econometricians, and sociologists.
This book is a result of recent developments in several fields. Mathematicians, statisticians, finance theorists, and economists found several interconnections in their research. The emphasis was on common methods, although the applications were also interrelated. The main topic is dynamic stochastic models, in which information arrives and decisions are made sequentially. This gives rise to what finance theorists call option value, and what some economists label quasi-option value. Some papers extend the mathematical theory, some deal with new methods of economic analysis, while some present important applications, to natural resources in particular.
- The book discusses recent techniques in NGS data analysis, material much needed by biologists (students and researchers) in the wake of numerous genomic projects and the trend toward genomic research.
- The book includes both theory and practice for NGS data analysis, so readers will understand the concepts and learn how to do the analysis using the most recent programs.
- The steps of application workflows are written in a manner that can be followed for related projects.
- Each chapter includes worked examples with real data available in the NCBI databases. Programming code and outputs are accompanied by explanation.
- The book content is suitable as teaching material for biology and bioinformatics students.
- Meets the requirements of a complete semester course on sequencing data analysis.
- Covers the latest applications of Next Generation Sequencing.
- Covers data reprocessing, genome assembly, variant discovery, gene profiling, epigenetics, and metagenomics.
This text, combining analysis and tools from mathematical probability, focuses on a systematic and novel exposition of a recent trend in pure and applied mathematics. The emphasis is on the unity of basis constructions and their expansions (bases which are computationally efficient), and on their use in several areas: from wavelets to fractals. The aim of this book is to show how to use processes from probability, random walks on branches, and their path-space measures in the study of convergence questions from harmonic analysis, with particular emphasis on the infinite products that arise in the analysis of wavelets. The book brings together tools from engineering (especially signal/image processing) and mathematics (harmonic analysis and operator theory). Features include:
- an audience of students and workers in a variety of fields, meeting at the crossroads where they merge;
- a hands-on approach with generous motivation;
- new pedagogical features to enhance teaching techniques and experience;
- more than 34 figures with detailed captions, illustrating the main ideas and visualizing the deeper connections in the subject;
- separate sections explaining engineering terms to mathematicians and operator theory to engineers;
- an interdisciplinary presentation and approach, combining central ideas from mathematical analysis (with a twist in the direction of operator theory and harmonic analysis), probability, computation, physics, and engineering.
The presentation includes numerous exercises that are essential to reinforce fundamental concepts by helping both students and applied users practice sketching functions or iterative schemes, as well as to hone computational skills. Graduate students, researchers, applied mathematicians, engineers and physicists alike will benefit from this unique work in book form that fills a gap in the literature.
Praise for the First Edition:
"A very useful book for self study and reference."
"Very well written. It is concise and really packs a lot of material in a valuable reference book."
"An informative and well-written book . . . presented in an easy-to-understand style with many illustrative numerical examples taken from engineering and scientific studies."
Practicing engineers and scientists often have a need to utilize statistical approaches to solving problems in an experimental setting. Yet many have little formal training in statistics. Statistical Design and Analysis of Experiments gives such readers a carefully selected, practical background in the statistical techniques that are most useful to experimenters and data analysts who collect, analyze, and interpret data. The First Edition of this now-classic book garnered praise in the field. Now its authors update and revise their text, incorporating readers' suggestions as well as a number of new developments. Statistical Design and Analysis of Experiments, Second Edition emphasizes the strategy of experimentation, data analysis, and the interpretation of experimental results, presenting statistics as an integral component of experimentation from the planning stage to the presentation of conclusions. Giving an overview of the conceptual foundations of modern statistical practice, the revised text features discussions of:
Ideal for both students and professionals, this focused and cogent reference has proven to be an excellent classroom textbook with numerous examples. It deserves a place among the tools of every engineer and scientist working in an experimental setting.
The process of developing predictive models includes many stages. Most resources focus on the modeling algorithms but neglect other critical aspects of the modeling process. This book describes techniques for finding the best representations of predictors for modeling and for finding the best subset of predictors for improving model performance. A variety of example data sets are used to illustrate the techniques, along with R programs for reproducing the results.