While continuing to focus on methods of testing for two-sided equivalence, Testing Statistical Hypotheses of Equivalence and Noninferiority, Second Edition gives much more attention to noninferiority testing. It covers a spectrum of equivalence testing problems of both types, ranging from a one-sample problem with normally distributed observations of fixed known variance to problems involving several dependent or independent samples and multivariate data. Along with expanding the material on noninferiority problems, this edition includes new chapters on equivalence tests for multivariate data and tests for relevant differences between treatments. A majority of the computer programs offered online are now available not only in SAS or Fortran but also as R scripts or as shared objects that can be called within the R system. This book provides readers with a rich repertoire of efficient solutions to specific equivalence and noninferiority testing problems frequently encountered in the analysis of real data sets. It first presents general approaches to problems of testing for noninferiority and two-sided equivalence. Each subsequent chapter then focuses on a specific procedure and its practical implementation. The last chapter describes basic theoretical results about tests for relevant differences as well as solutions for some specific settings often arising in practice. Drawing from real-life medical research, the author uses numerous examples throughout to illustrate the methods.
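To make the opening setting concrete, here is a minimal R sketch of the two one-sided tests idea for one-sample equivalence with normally distributed data of known variance; the function name, equivalence margins, and data are illustrative, not the book's own code or notation.

    # Minimal sketch: two one-sided z-tests for equivalence, assuming
    # known variance; the margins (-0.5, 0.5) and data are illustrative.
    tost_z <- function(x, sigma, lower = -0.5, upper = 0.5) {
      se   <- sigma / sqrt(length(x))
      p_lo <- 1 - pnorm((mean(x) - lower) / se)  # test of H0: mu <= lower
      p_hi <- pnorm((mean(x) - upper) / se)      # test of H0: mu >= upper
      max(p_lo, p_hi)                            # equivalence if < alpha
    }
    set.seed(1)
    tost_z(rnorm(50, mean = 0.1, sd = 1), sigma = 1)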
This volume has its origin in the Seventeenth International Workshop on Maximum Entropy and Bayesian Methods, MAXENT 97. The workshop was held at Boise State University in Boise, Idaho, on August 4-8, 1997. As in the past, the purpose of the workshop was to bring together researchers in different fields to present papers on applications of Bayesian methods (these include maximum entropy) in science, engineering, medicine, economics, and many other disciplines. Thanks to significant theoretical advances and the personal computer, much progress has been made since our first workshop in 1981. As indicated by several papers in these proceedings, the subject has matured to a stage in which computational algorithms are the objects of interest, the thrust being on feasibility, efficiency, and innovation. Though applications are proliferating at a staggering rate, some in areas that hardly existed a decade ago, it is pleasing that due attention is still being paid to the foundations of the subject. The following list of descriptors, applicable to papers in this volume, gives a sense of its contents: deconvolution, inverse problems, instrument (point-spread) function, model comparison, multi-sensor data fusion, image processing, tomography, reconstruction, deformable models, pattern recognition, classification and group analysis, segmentation/edge detection, brain shape, marginalization, algorithms, complexity, Ockham's razor as an inference tool, foundations of probability theory, symmetry, history of probability theory, and computability. MAXENT 97 and these proceedings could not have been brought to final form without the support and help of a number of people.
This introductory textbook is designed for a one-semester course on queueing theory that does not require a course on stochastic processes as a prerequisite. By integrating the necessary background on stochastic processes with the analysis of models, the work provides a sound foundational introduction to the modeling and analysis of queueing systems for a broad interdisciplinary audience of students in mathematics, statistics, and applied disciplines such as computer science, operations research, and engineering. This edition includes additional topics in methodology and applications. Key features:
* An introductory chapter including a historical account of the growth of queueing theory over more than 100 years.
* A modeling-based approach with emphasis on the identification of models.
* Rigorous treatment of the foundations of basic models commonly used in applications, with appropriate references for advanced topics.
* A chapter on the matrix-analytic method as an alternative to the traditional methods of analysis of queueing systems.
* A comprehensive treatment of statistical inference for queueing systems.
* Modeling exercises and review exercises where appropriate.
The second edition of An Introduction to Queueing Theory may be used as a textbook by first-year graduate students in fields such as computer science, operations research, and industrial and systems engineering, as well as related fields such as manufacturing and communications engineering. Upper-level undergraduate students in mathematics, statistics, and engineering may also use the book in an introductory course on queueing theory. With its rigorous coverage of basic material and extensive bibliography of the queueing literature, the work may also be useful to applied scientists and practitioners as a self-study reference for applications and further research. "...This book has brought a freshness and novelty as it deals mainly with modeling and analysis in applications as well as with statistical inference for queueing problems. With his 40 years of valuable experience in teaching and high level research in this subject area, Professor Bhat has been able to achieve what he aimed: to make [the work] somewhat different in content and approach from other books." - Assam Statistical Review of the first edition
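For a taste of the basic models analyzed in such a course, a minimal R sketch of the standard steady-state M/M/1 formulas; the arrival and service rates below are illustrative.

    # Minimal sketch: M/M/1 steady-state measures; lambda and mu illustrative.
    mm1 <- function(lambda, mu) {
      stopifnot(lambda < mu)         # stability condition
      rho <- lambda / mu             # server utilization
      c(rho = rho,
        L   = rho / (1 - rho),       # mean number in system
        Lq  = rho^2 / (1 - rho),     # mean number in queue
        W   = 1 / (mu - lambda),     # mean time in system
        Wq  = rho / (mu - lambda))   # mean wait in queue
    }
    mm1(lambda = 2, mu = 3)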
* Description of basic ROC methodology
* R and Stata code
* Example datasets
* Not too technical
* Many topics not included in other books
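As a hint of what such accompanying R code does, a minimal base-R sketch (not the book's own code) of an empirical ROC curve and its AUC on simulated scores:

    # Minimal sketch: empirical ROC and AUC; the scores are simulated.
    set.seed(2)
    score <- c(rnorm(100, 1), rnorm(100, 0))      # cases, then controls
    truth <- rep(c(1, 0), each = 100)
    cuts  <- sort(unique(score), decreasing = TRUE)
    tpr   <- sapply(cuts, function(k) mean(score[truth == 1] >= k))
    fpr   <- sapply(cuts, function(k) mean(score[truth == 0] >= k))
    plot(fpr, tpr, type = "l", xlab = "1 - specificity", ylab = "sensitivity")
    mean(outer(score[truth == 1], score[truth == 0], ">"))  # AUC (Mann-Whitney)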
The aim of this book is to provide strong theoretical support for understanding and analyzing the behavior of evolutionary algorithms, and to build a bridge between probability, set-oriented numerics, and evolutionary computation. The volume collects contributions presented at the EVOLVE 2011 international workshop, held in Luxembourg, May 25-27, 2011, coming from invited speakers as well as from selected regular submissions. The aim of EVOLVE is to unify the perspectives offered by probability, set-oriented numerics, and evolutionary computation. EVOLVE focuses on challenging aspects that arise at the passage from theory to new paradigms and practice, elaborating on the foundations of evolutionary algorithms and theory-inspired methods merged with cutting-edge techniques that ensure performance guarantees. EVOLVE is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. The chapters present challenging theoretical findings and concrete optimization problems, as well as new perspectives. By gathering contributions from researchers with different backgrounds, the book is expected to lay the basis for a unified view and vocabulary in which theoretical advances may echo across different domains.
Data mining is the process of extracting hidden patterns from data, and it is commonly used in business, bioinformatics, counter-terrorism, and, increasingly, in professional sports. First popularized in Michael Lewis' best-selling Moneyball: The Art of Winning an Unfair Game, it has become an intrinsic part of professional sports the world over, from baseball to cricket to soccer. While an industry has developed based on statistical analysis services for any given sport, or even for betting behavior analysis on these sports, no research-level book has considered the subject in any detail until now. Sports Data Mining brings together in one place the state of the art as it concerns an international array of sports: baseball, football, basketball, soccer, and greyhound racing are all covered, and the authors (including Hsinchun Chen, one of the most esteemed and well-known experts in data mining in the world) present the latest research, developments, available software, and applications for each sport. They even examine the hidden patterns in gaming and wagering, along with the most common systems for wager analysis.
By assuming it is possible to understand regression analysis without fully comprehending all its underlying proofs and theories, this introduction to the widely used statistical technique is accessible to readers who may have only a rudimentary knowledge of mathematics. Chapters discuss: descriptive statistics using vector notation and the components of a simple regression model;
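A minimal R sketch of those components on simulated data (illustrative, not the book's example):

    # Minimal sketch: a simple regression model y = b0 + b1*x + error.
    set.seed(3)
    x <- runif(30, 0, 10)
    y <- 2 + 0.5 * x + rnorm(30)
    fit <- lm(y ~ x)
    coef(fit)            # estimated intercept and slope
    head(resid(fit), 3)  # residuals, the estimated error component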
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point alle.' [And me, ..., if I had known how to come back, I would never have gone.] - Jules Verne
'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' - Eric T. Bell
'The series is divergent; therefore we may be able to do something with it.' - O. Heaviside
Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to Eric T. Bell's quote above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series.
Uncertainty is an inherent feature of both the properties of physical systems and the inputs to these systems, and it needs to be quantified for cost-effective and reliable designs. The states of these systems satisfy equations with random entries, referred to as stochastic equations, so that they are random functions of time and/or space. The solution of stochastic equations poses notable technical difficulties that are frequently circumvented by heuristic assumptions at the expense of accuracy and rigor. The main objective of Stochastic Systems is to promote the development of accurate and efficient methods for solving stochastic equations and to foster interactions between engineers, scientists, and mathematicians. To achieve these objectives, Stochastic Systems presents:
* A clear and brief review of essential concepts on probability theory, random functions, stochastic calculus, Monte Carlo simulation, and functional analysis
* Probabilistic models for random variables and functions needed to formulate stochastic equations describing realistic problems in engineering and applied sciences
* Practical methods for quantifying the uncertain parameters in the definition of stochastic equations, solving these equations approximately, and assessing the accuracy of approximate solutions
Stochastic Systems provides key information for researchers, graduate students, and engineers who are interested in the formulation and solution of stochastic problems encountered in a broad range of disciplines. Numerous examples are used to clarify and illustrate theoretical concepts and methods for solving stochastic equations. The extensive bibliography and index at the end of the book constitute an ideal resource for both theoreticians and practitioners.
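A minimal R sketch of one reviewed tool, Monte Carlo simulation, applied to a toy stochastic equation; the model and numbers are illustrative, not from the book.

    # Minimal sketch: x'(t) = -K x(t), x(0) = 1, with random decay rate K;
    # Monte Carlo estimates of the state's mean and spread at t = 1.
    set.seed(5)
    K  <- rlnorm(1e4, meanlog = 0, sdlog = 0.2)  # samples of the random entry
    x1 <- exp(-K)                                # exact solution at t = 1
    c(mean = mean(x1), sd = sd(x1))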
Today, a major component of any project management effort is the combined use of qualitative and quantitative tools. While publications on qualitative approaches to project management are widely available, few project management books have focused on the quantitative approaches. This book represents the first major project management book with a practical focus on the quantitative approaches to project management. The book organizes quantitative techniques into an integrated framework for project planning, scheduling, and control. Numerous illustrative examples are presented. Topics covered in the book include PERT/CPM/PDM and extensions, mathematical project scheduling, heuristic project scheduling, project economics, statistical data analysis for project planning, computer simulation, assignment and transportation problems, and learning curve analysis. Chapter one gives a brief overview of project management, presenting a general-purpose project management model. Chapter two covers CPM, PERT, and PDM network techniques. Chapter three covers project scheduling subject to resource constraints. Chapter four covers project optimization. Chapter five discusses economic analysis for project planning and control. Chapter six discusses learning curve analysis. Chapter seven covers statistical data analysis for project planning and control. Chapter eight presents techniques for project analysis and selection. Tables and figures are used throughout the book to enhance the effectiveness of the discussions. This book is excellent as a textbook for upper-level undergraduate and graduate courses in Industrial Engineering, Engineering Management, and Business, and as a detailed, comprehensive guide for corporate management.
This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICCs, and CVaR constraints), material on the Sharpe ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors' SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. ... The presentation includes geometric interpretation, linear programming duality, and the simplex method in its primal and dual forms. ... The authors have made an effort to collect ... the most useful recent ideas and algorithms in this area. ... A guide to the existing software is included as well." (Darinka Dentcheva, Mathematical Reviews, Issue 2006 c) "This is a graduate text in optimisation whose main emphasis is in stochastic programming. The book is clearly written. ... This is a good book for providing mathematicians, economists and engineers with an almost complete start up information for working in the field. I heartily welcome its publication. ... It is evident that this book will constitute an obligatory reference source for the specialists of the field." (Carlos Narciso Bouza Herrera, Zentralblatt MATH, Vol. 1104 (6), 2007)
This monograph surveys the theory of quantitative homogenization for second-order linear elliptic systems in divergence form with rapidly oscillating periodic coefficients in a bounded domain. It begins with a review of the classical qualitative homogenization theory, and addresses the problem of convergence rates of solutions. The main body of the monograph investigates various interior and boundary regularity estimates that are uniform in the small parameter ε > 0. Additional topics include convergence rates for Dirichlet eigenvalues and asymptotic expansions of fundamental solutions, Green functions, and Neumann functions. The monograph is intended for advanced graduate students and researchers in the general areas of analysis and partial differential equations. It provides the reader with a clear and concise exposition of an important and currently active area of quantitative homogenization.
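For orientation, the operators in question have the standard divergence form (a sketch of the usual setup in this field, not text quoted from the monograph):

    \mathcal{L}_\varepsilon = -\,\mathrm{div}\bigl( A(x/\varepsilon)\, \nabla \bigr),
    \qquad A \text{ 1-periodic and uniformly elliptic}, \quad \varepsilon > 0,

with the homogenized (constant-coefficient) operator recovered in the limit ε → 0.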
This book covers several bases at once. It is useful as a textbook for a second course in experimental optimization techniques for industrial production processes. In addition, it is a superb reference volume for use by professors and graduate students in Industrial Engineering and Statistics departments. It will also be of huge interest to applied statisticians, process engineers, and quality engineers working in the electronics and biotech manufacturing industries. In all, it provides an in-depth presentation of the statistical issues that arise in optimization problems, including confidence regions on the optimal settings of a process, stopping rules in experimental optimization, and more.
This book provides a systematic in-depth analysis of nonparametric regression with random design. It covers almost all known estimates such as classical local averaging estimates including kernel, partitioning and nearest neighbor estimates, least squares estimates using splines, neural networks and radial basis function networks, penalized least squares estimates, local polynomial kernel estimates, and orthogonal series estimates. The emphasis is on distribution-free properties of the estimates. Most consistency results are valid for all distributions of the data. Whenever it is not possible to derive distribution-free results, as in the case of the rates of convergence, the emphasis is on results which require as few constraints on distributions as possible, on distribution-free inequalities, and on adaptation. The relevant mathematical theory is systematically developed and requires only a basic knowledge of probability theory. The book will be a valuable reference for anyone interested in nonparametric regression and is a rich source of many useful mathematical techniques widely scattered in the literature. In particular, the book introduces the reader to empirical process theory, martingales and approximation properties of neural networks.
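A minimal R sketch of one of the local averaging estimates covered, the Nadaraya-Watson kernel estimate; the Gaussian kernel, bandwidth, and data are illustrative.

    # Minimal sketch: Nadaraya-Watson kernel regression estimate.
    set.seed(4)
    x <- runif(200); y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
    nw <- function(x0, h = 0.05) {
      w <- dnorm((x - x0) / h)  # kernel weights centered at x0
      sum(w * y) / sum(w)       # locally weighted average
    }
    grid <- seq(0, 1, length.out = 100)
    plot(x, y); lines(grid, sapply(grid, nw), lwd = 2)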
This book covers classic epidemiological designs that use a reference/control group, including case-control, case-cohort, nested case-control and variations of these designs, such as stratified and two-stage designs. It presents a unified view of these sampling designs as representations of an underlying cohort or target population of interest. This enables various extended designs to be introduced and analysed with a similar approach: extreme sampling on the outcome (extreme case-control design) or on the exposure (exposure-enriched, exposure-density, countermatched), designs that re-use prior controls and augmentation sampling designs. Further extensions exploit aggregate data for efficient cluster sampling, accommodate time-varying exposures and combine matched and unmatched controls. Self-controlled designs, including case-crossover, self-controlled case series and exposure-crossover, are also presented. The test-negative design for vaccine studies and the use of negative controls for bias assessment are introduced and discussed. This book is intended for graduate students in biostatistics, epidemiology and related disciplines, or for health researchers and data analysts interested in extending their knowledge of study design and data analysis skills. This book:
* Bridges the gap between epidemiology and the more mathematically oriented biostatistics books.
* Assembles the wealth of epidemiological knowledge about observational study designs that is scattered over several decades of scientific publications.
* Illustrates the performance of methods in real research applications.
* Provides guidelines for implementation in standard software packages (Stata, R).
* Includes numerous exercises, covering simple mathematical proofs, consideration of proposed or published designs, and practical data analysis.
This book is devoted to the study of univariate distributions appropriate for the analyses of data known to be nonnegative. The book includes much material from reliability theory in engineering and survival analysis in medicine.
This book provides, as simply as possible, sound foundations for an in-depth understanding of reliability engineering with regard to qualitative analysis, modelling, and probabilistic calculations of safety and production systems. Drawing on the authors' extensive experience within the field of reliability engineering, it addresses and discusses a variety of topics, including: * Background and overview of safety and dependability studies; * Explanation and critical analysis of definitions related to core concepts; * Risk identification through qualitative approaches (preliminary hazard analysis, HAZOP, FMECA, etc.); * Modelling of industrial systems through static (fault tree, reliability block diagram), sequential (cause-consequence diagrams, event trees, LOPA, bowtie), and dynamic (Markov graphs, Petri nets) approaches; * Probabilistic calculations through state-of-the-art analytical or Monte Carlo simulation techniques; * Analysis, modelling, and calculations of common cause failure and uncertainties; * Linkages and combinations between the various modelling and calculation approaches; * Reliability data collection and standardization. The book features illustrations, explanations, examples, and exercises to help readers gain a detailed understanding of the topic and implement it into their own work. Further, it analyses the production availability of production systems and the functional safety of safety systems (SIL calculations), showcasing specific applications of the general theory discussed. Given its scope, this book is a valuable resource for engineers, software designers, standard developers, professors, and students.
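As a taste of the simplest static approach listed above, a minimal R sketch of a series-parallel reliability block diagram evaluated directly; the component reliabilities are illustrative, and the components are assumed independent.

    # Minimal sketch: series-parallel reliability block diagram.
    r1 <- 0.95; r2 <- 0.95; r3 <- 0.90
    pair <- 1 - (1 - r1) * (1 - r2)  # parallel pair fails only if both fail
    pair * r3                        # pair in series with a third block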
The primary focus here is on log-linear models for contingency tables, but in this second edition, greater emphasis has been placed on logistic regression. The book explores topics such as logistic discrimination and generalised linear models, and builds upon the relationships between these basic models for continuous data and the analogous log-linear and logistic regression models for discrete data. It also carefully examines the differences in model interpretations and evaluations that occur due to the discrete nature of the data. Sample commands are given for analyses in SAS, BMDP, and GLIM, while numerous data sets from fields as diverse as engineering, education, sociology, and medicine are used to illustrate procedures and provide exercises. Throughout the book, the treatment is designed for students with prior knowledge of analysis of variance and regression.
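The book's sample commands target SAS, BMDP, and GLIM; as a hedged modern analogue (not from the book), the same kind of fit in R: a saturated log-linear model for a 2x2 table, whose interaction term is the log odds ratio.

    # Minimal sketch: saturated log-linear model via Poisson regression;
    # the cell counts are illustrative.
    counts <- c(20, 15, 10, 25)
    a <- gl(2, 2); b <- gl(2, 1, 4)               # row and column factors
    fit <- glm(counts ~ a * b, family = poisson)
    coef(fit)["a2:b2"]                            # log odds ratio
    log((20 * 25) / (15 * 10))                    # the same, by hand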
This volume of the Selected Papers is a product of the XIX Congress of the Portuguese Statistical Society, held in the Portuguese town of Nazare from September 28 to October 1, 2011. All contributions were selected after a thorough peer-review process. It covers a broad scope of papers in the areas of statistical science, probability and stochastic processes, extremes, and statistical applications.
A recent development in stochastic distribution control (SDC) is the establishment of intelligent SDC models and the intensive use of LMI-based convex optimization methods. Within this theoretical framework, control parameters can be determined and the stability and robustness of closed-loop systems can be analyzed. This book describes the new framework of SDC system design and provides a comprehensive description of the modelling of controller design tools and their real-time implementation. It starts with a review of current research on SDC and moves on to some basic techniques for modelling and controller design of SDC systems. This is followed by a description of controller design for fixed-control-structure SDC systems, PDF control for general input- and output-represented systems, filtering designs, and fault detection and diagnosis (FDD) for SDC systems. Many of the new LMI techniques being developed for SDC systems are shown to have independent theoretical significance for robust control and FDD problems.
From the Foreword: 'The chief aim of this book is to present the reader with an integrated system of methods dealing with geographical statistics ("geostatistics") and their applications. It sums up developments based on the vast experience accumulated by Professor Bachi over several decades of research. Interest in the quantitative locational aspects of geography and in the common ground of geography and statistics has grown rapidly, involving an ever-increasing spectrum of scientific disciplines ... the present volume will fill a genuine need - as a textbook, as a reference work, and as a practical aid for geographers, applied statisticians, demographers, ecologists, regional planners, economists, professional staff of official statistical agencies, and others.' - E. Peritz, G. Nathan, N. Kadmon
This book presents a new branch of mathematical statistics aimed at constructing unimprovable methods of multivariate analysis, multi-parametric estimation, and discriminant and regression analysis. In contrast to the traditional consistent Fisher method of statistics, the essentially multivariate technique is based on the decision function approach of A. Wald. Developing this new method for high dimensions, comparable in magnitude with the sample size, provides stable, approximately unimprovable procedures in some wide classes depending on an arbitrary function. A remarkable fact is established: for high-dimensional problems, under some weak restrictions on the dependence between variables, the standard quality functions of regularized multivariate procedures prove to be independent of distributions. For the first time in the history of statistics, this opens the possibility of constructing unimprovable procedures that are free from distributions. Audience: This work will be of interest to researchers and graduate students whose work involves statistics and probability, reliability and risk analysis, econometrics, machine learning, medical statistics, and various applications of multivariate analysis.
Librarians understand the need to store, use, and analyze data related to their collection, patrons, and institution, and there has been consistent interest over the last 10 years in improving data management, analysis, and visualization skills within the profession. However, librarians find it difficult to move from out-of-the-box proprietary software applications to the skills necessary to perform the range of data science actions in code. This book focuses on teaching R through relevant examples and skills that librarians need in their day-to-day lives; it includes visualizations but goes much further, covering web scraping, working with maps, creating interactive reports, machine learning, and more. While there's a place for theory, ethics, and statistical methods, librarians need a tool to help them acquire enough facility with R to apply data science skills in their daily work, no matter what type of library they work at (academic, public, or special). By walking through each skill and its application to library work before walking the reader through each line of code, this book supports librarians who want to apply data science in their daily work. Hands-On Data Science for Librarians is intended for librarians (and other information professionals) in any library type (public, academic, or special) as well as graduate students in library and information science (LIS). Key features:
* The only data science book geared toward librarians that includes step-by-step code examples
* Examples covering all library types (public, academic, special)
* Relevant datasets
* Accessible to non-technical professionals
* Focused on job skills and their applications
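A minimal R sketch of the flavor of example the book builds toward; the file name and columns here are hypothetical, not a dataset from the book.

    # Minimal sketch: summarize and plot hypothetical circulation data.
    checkouts <- read.csv("monthly_checkouts.csv")   # hypothetical file
    summary(checkouts$total)
    barplot(checkouts$total, names.arg = checkouts$month,
            main = "Checkouts per month", las = 2)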
This book covers statistical consequences of breaches of research integrity such as fabrication and falsification of data, and researcher glitches summarized as questionable research practices. It is unique in that it discusses how unwarranted data manipulation harms research results and how questionable research practices are often caused by researchers' inadequate mastery of the statistical methods and procedures they use for their data analysis. The author's solution for preventing problems concerning the trustworthiness of research results, no matter how they originated, is to publish data in publicly available repositories and to encourage researchers not trained as statisticians not to overestimate their statistical skills and to resort to professional support from statisticians or methodologists. The author discusses some of his experiences concerning mutual trust, fear of repercussions, and the bystander effect as conditions limiting revelation of colleagues' possible integrity breaches. He explains why people are unable to mimic real data and why data fabrication using statistical models still falls short of credibility. Confirmatory and exploratory research, the usefulness of preregistration, and the counter-intuitive nature of statistics are discussed. The author questions the usefulness of statistical advice concerning frequentist hypothesis testing, Bayes-factor use, alternative statistics education, and reduction of situational disturbances like performance pressure, as stand-alone means to reduce questionable research practices when researchers lack experience with statistics.
You may like...
* Time Series Analysis - With Applications… by Jonathan D. Cryer, Kung-Sik Chan (Hardcover, R2,549)
* Advances in Quantum Monte Carlo by Shigenori Tanaka, Stuart M. Rothstein, … (Hardcover, R5,411)
* Mathematical Statistics with… by William Mendenhall, Dennis Wackerly, … (Paperback)
* Quantitative statistical techniques by A. Swanepoel, F.L. Vivier, … (Paperback)
* The Practice of Statistics for Business… by David S Moore, George P. McCabe, … (Mixed media product, R2,284)
* Numbers, Hypotheses & Conclusions - A… by Colin Tredoux, Kevin Durrheim (Paperback)