Books > Science & Mathematics > Mathematics > Probability & statistics

Understanding Regression Analysis - A Conditional Distribution Approach (Hardcover)
Peter H. Westfall, Andrea L. Arias
R3,737 Discovery Miles 37 370 Ships in 12 - 17 working days

Understanding Regression Analysis unifies diverse regression applications, including the classical model, ANOVA models, generalized models (Poisson, negative binomial, logistic, and survival), neural networks, and decision trees, under a common umbrella: the conditional distribution model. It explains why the conditional distribution model is the correct model, and it also explains (proves) why the assumptions of the classical regression model are wrong. Unlike other regression books, this one takes from the outset the realistic view that all models are just approximations; hence the emphasis is on modeling Nature's processes realistically, rather than assuming (incorrectly) that Nature works in particular, constrained ways. Key features of the book include:
* Numerous worked examples using the R software
* Key points and self-study questions displayed "just-in-time" within chapters
* Simple mathematical explanations ("baby proofs") of key concepts
* Clear explanations and applications of statistical significance (p-values), incorporating the American Statistical Association guidelines
* Use of "data-generating process" terminology rather than "population"
* A random-X framework assumed throughout (the fixed-X case is presented as a special case of the random-X case)
* Clear explanations of probabilistic modelling, including likelihood-based methods
* Use of simulations throughout to explain concepts and to perform data analyses
The book has a strong orientation towards science in general, with chapter-review and self-study questions, so it can be used as a textbook for research-oriented students in the social, biological and medical, and physical and engineering sciences. Its mathematical emphasis also makes it well suited to mathematics and statistics courses, and its numerous worked examples make it a useful reference for all scientists.
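
The blurb's emphasis on data-generating processes, the random-X framework, and simulation can be illustrated in a few lines of R. This is a generic sketch under those assumptions, not code from the book: it simulates counts from a Poisson data-generating process with random X and fits the corresponding conditional-distribution (generalized regression) model.

    # Illustrative sketch (not from the book): Poisson data-generating process,
    # modeled through the conditional distribution of Y given X.
    set.seed(1)
    n <- 500
    x <- rnorm(n)                               # random-X framework: X is drawn, not fixed
    y <- rpois(n, lambda = exp(0.5 + 0.8 * x))  # conditional distribution of Y | X
    fit <- glm(y ~ x, family = poisson)         # generalized (Poisson) regression model
    summary(fit)                                # estimates approximate the true 0.5 and 0.8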

An Introduction to Metric Spaces (Hardcover)
Dhananjay Gopal, Aniruddha Deshmukh, Abhay S Ranadive, Shubham Yadav
R3,256 Discovery Miles 32 560 Ships in 12 - 17 working days

This book serves as a textbook for an introductory course in metric spaces for undergraduate or graduate students. The goal is to present the basics of metric spaces in a natural and intuitive way and to encourage students to think geometrically while actively participating in the learning of the subject. The authors illustrate the strategy behind the proofs of various theorems, motivating readers to complete them on their own. Bits of pertinent history are infused in the text, including brief biographies of some of the central players in the development of metric spaces. The textbook is divided into seven chapters that contain the main material on metric spaces: introductory concepts, completeness, compactness, connectedness, continuous functions, and metric fixed point theorems with applications. Noteworthy features of this book include:
* Diagrammatic illustrations that encourage readers to think geometrically
* Focus on a systematic strategy for generating ideas for the proofs of theorems
* A wealth of remarks and observations, along with a variety of exercises
* Historical notes and brief biographies appearing throughout the text
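
For orientation, the single definition the whole subject rests on can be stated in a few lines; this is the standard definition of a metric, not a quotation from the book.

    % A metric on a set X is a function d : X x X -> [0, infinity) such that,
    % for all x, y, z in X:
    \begin{align*}
    d(x,y) &= 0 \iff x = y        && \text{(identity of indiscernibles)}\\
    d(x,y) &= d(y,x)              && \text{(symmetry)}\\
    d(x,z) &\le d(x,y) + d(y,z)   && \text{(triangle inequality)}
    \end{align*}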

Data Mining with R - Learning with Case Studies, Second Edition (Paperback, 2nd edition)
Luis Torgo
R1,478 Discovery Miles 14 780 Ships in 12 - 17 working days

Data Mining with R: Learning with Case Studies, Second Edition uses practical examples to illustrate the power of R and data mining. Providing an extensive update to the best-selling first edition, this new edition is divided into two parts. The first part presents introductory material, including a new chapter introducing data mining that complements the existing introduction to R. The second part contains case studies, and the new edition thoroughly revises their R code, bringing it up to date with packages that have since emerged in R. The book does not assume any prior knowledge of R. The case studies are designed to be self-contained, so readers who are new to R and data mining can follow them and start anywhere in the book. The book is accompanied by a set of freely available R source files that can be obtained from the book's website. These files include all the code used in the case studies and support the "do-it-yourself" approach followed in the book. Designed for users of data analysis tools, as well as researchers and developers, the book should be useful to anyone interested in entering the "world" of R and data mining. About the author: Luis Torgo is an associate professor in the Department of Computer Science at the University of Porto in Portugal. He teaches Data Mining in R in the NYU Stern School of Business' MS in Business Analytics program. An active researcher in machine learning and data mining for more than 20 years, Dr. Torgo is also a researcher in the Laboratory of Artificial Intelligence and Data Analysis (LIAAD) of INESC Porto LA.

Text Mining and Visualization - Case Studies Using Open-Source Tools (Paperback)
Markus Hofmann, Andrew Chisholm
R1,457 Discovery Miles 14 570 Ships in 12 - 17 working days

Text Mining and Visualization: Case Studies Using Open-Source Tools provides an introduction to text mining using some of the most popular and powerful open-source tools: KNIME, RapidMiner, Weka, R, and Python. The contributors, all highly experienced with text mining and open-source software, explain how text data are gathered and processed from a wide variety of sources, including books, server access logs, websites, social media sites, and message boards. Each chapter presents a case study that you can follow as part of a step-by-step, reproducible example, and the techniques can easily be applied and extended to other problems. All the examples are available on a supplementary website. The book shows you how to exploit your text data, offering successful application examples and blueprints for tackling your own text mining tasks with open and freely available tools, and it brings you up to date on the latest and most powerful tools, the data mining process, and specific text mining activities.

Learning with Uncertainty (Paperback)
Xi-Zhao Wang, Junhai Zhai
R1,467 Discovery Miles 14 670 Ships in 12 - 17 working days

Learning with uncertainty covers a broad range of scenarios in machine learning; this book focuses mainly on (1) decision tree learning with uncertainty, (2) clustering in uncertain environments, (3) active learning based on uncertainty criteria, and (4) ensemble learning in a framework of uncertainty. The book starts with an introduction to uncertainty, including randomness, roughness, fuzziness, and non-specificity, and then comprehensively discusses a number of key issues in learning with uncertainty, such as uncertainty representation in learning, the influence of uncertainty on the performance of learning systems, and heuristic design under uncertainty. Much of the book is based on the authors' research results from recent decades. Its purpose is to help readers understand the impact of uncertainty on learning processes, and it comes with many examples to facilitate understanding. The book can be used as a reference or textbook for research fellows, senior undergraduates, and postgraduates majoring in computer science and technology, applied mathematics, automation, electrical engineering, and related fields.

Process Modeling and Management for Healthcare (Paperback)
Carlo Combi, Giuseppe Pozzi, Pierangelo Veltri
R1,454 Discovery Miles 14 540 Ships in 12 - 17 working days

From the Foreword: "[This book] provides a comprehensive overview of the fundamental concepts in healthcare process management as well as some advanced topics in the cutting-edge research of the closely related areas. This book is ideal for graduate students and practitioners who want to build the foundations and develop novel contributions in healthcare process modeling and management." --Christopher Yang, Drexel University
Process modeling and process management are transversal disciplines that have gained increasing relevance over the last two decades. Several research areas are involved within these disciplines, including database systems, database management, information systems, ERP, operations research, formal languages, and logic. Process Modeling and Management for Healthcare provides the reader with an in-depth analysis of what process modeling and process management techniques can do in healthcare, the major challenges faced, and those challenges remaining to be faced. The book features contributions from leading authors in the field and is structured into two parts. Part one covers fundamentals and basic concepts in healthcare. It explores the architecture of a process management environment, the flexibility of a process model, and the compliance of a process model, and it features a real application domain of patients suffering from age-related macular degeneration. Part two includes advanced topics from the leading frontiers of scientific research on process management and healthcare. This part covers software metrics to measure features of the process model as a software artifact, and process analysis to discover the formal properties of the process model prior to deploying it in real application domains. Abnormal situations and exceptions, as well as temporal clinical guidelines, are also presented in depth.

Multivariate Analysis, Design of Experiments, and Survey Sampling (Paperback)
Subir Ghosh
R1,537 Discovery Miles 15 370 Ships in 12 - 17 working days

"Describes recent developments and surveys important topics in the areas of multivariate analysis, design of experiments, and survey sampling. Features the work of nearly 50 international leaders."

Social Networks with Rich Edge Semantics (Paperback)
Quan Zheng, David Skillicorn
R1,461 Discovery Miles 14 610 Ships in 12 - 17 working days

Social Networks with Rich Edge Semantics introduces a new mechanism for representing social networks in which pairwise relationships can be drawn from a range of realistic possibilities, including different types of relationships, different strengths in the directions of a pair, positive and negative relationships, and relationships whose intensities change with time. For each possibility, the book shows how to model the social network using spectral embedding. It also shows how to compose the techniques so that multiple edge semantics can be modeled together, and the modeling techniques are then applied to a range of datasets. Features:
* Introduces the reader to difficulties with current social network analysis, and the need for richer representations of relationships among nodes, including accounting for intensity, direction, type, positive/negative, and changing intensities over time
* Presents a novel mechanism to allow social networks with qualitatively different kinds of relationships to be described and analyzed
* Includes extensions to the important technique of spectral embedding, shows that they are mathematically well motivated, and proves that their results are appropriate
* Shows how to exploit embeddings to understand structures within social networks, including subgroups, positional significance, link or edge prediction, consistency of role in different contexts, and net flow of properties through a node
* Illustrates the use of the approach for real-world problems involving online social networks, criminal and drug smuggling networks, and networks where the nodes are themselves groups
Suitable for researchers and students in social network research, data science, statistical learning, and related areas, this book will help to provide a deeper understanding of real-world social networks.
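
The core technique named in the blurb, spectral embedding, can be sketched in a few lines of base R. This is a generic Laplacian-eigenvector embedding of a small unweighted graph, not the book's extended constructions for typed, signed, or time-varying edges.

    # Generic spectral embedding of a small undirected graph (illustration only).
    A <- matrix(0, 5, 5)                        # adjacency matrix for 5 nodes
    edges <- rbind(c(1, 2), c(2, 3), c(3, 4), c(4, 5), c(5, 1), c(1, 3))
    A[edges] <- 1; A[edges[, 2:1]] <- 1         # symmetric, unweighted edges

    D <- diag(rowSums(A))                       # degree matrix
    L <- D - A                                  # unnormalized graph Laplacian
    eig <- eigen(L, symmetric = TRUE)

    k <- 2; n <- nrow(A)
    coords <- eig$vectors[, (n - 1):(n - k)]    # eigenvectors for the smallest nonzero eigenvalues
    coords                                      # one k-dimensional point per node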

New Concepts and Trends of Hybrid Multiple Criteria Decision Making (Paperback)
Gwo-Hshiung Tzeng, Kao-Yi Shen
R1,460 Discovery Miles 14 600 Ships in 12 - 17 working days

When people or computers need to make a decision, multiple conflicting criteria typically need to be evaluated; for example, when we buy a car, we need to consider safety, cost, and comfort. Multiple criteria decision making (MCDM) has been researched for decades. Now, with the rising trend of big-data analytics in supporting decision making, MCDM can become even more powerful when combined with state-of-the-art analytics and machine learning. In this book, the authors introduce a new framework of MCDM that can lead to more accurate decision making, and several real-world cases are included to illustrate the new hybrid approaches.
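
The car example maps naturally onto the simplest MCDM device, a weighted-sum score. The sketch below uses made-up numbers and a baseline method, not the hybrid framework the book develops.

    # Illustrative weighted-sum MCDM for the car example (hypothetical data).
    alternatives <- c("car A", "car B", "car C")
    criteria <- matrix(c(0.9, 0.4, 0.7,    # safety  (higher is better)
                         0.5, 0.9, 0.6,    # cost    (rescaled so higher is better)
                         0.6, 0.5, 0.9),   # comfort
                       nrow = 3, byrow = TRUE,
                       dimnames = list(c("safety", "cost", "comfort"), alternatives))
    weights <- c(safety = 0.5, cost = 0.3, comfort = 0.2)

    scores <- as.vector(weights %*% criteria)   # weighted sum per alternative
    names(scores) <- alternatives
    sort(scores, decreasing = TRUE)             # rank the alternatives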

Handbook of Discrete-Valued Time Series - Handbooks of Modern Statistical Methods (Paperback)
Richard A. Davis, Scott H Holan, Robert Lund, Nalini Ravishanker
R2,195 Discovery Miles 21 950 Ships in 12 - 17 working days

Model a Wide Range of Count Time Series
Handbook of Discrete-Valued Time Series presents state-of-the-art methods for modeling time series of counts and incorporates frequentist and Bayesian approaches for discrete-valued spatio-temporal data and multivariate data. While the book focuses on time series of counts, some of the techniques discussed can be applied to other types of discrete-valued time series, such as binary-valued or categorical time series.
Explore a Balanced Treatment of Frequentist and Bayesian Perspectives
Accessible to graduate-level students who have taken an elementary class in statistical time series analysis, the book begins with the history and current methods for modeling and analyzing univariate count series. It next discusses diagnostics and applications before proceeding to binary and categorical time series. The book then provides a guide to modern methods for discrete-valued spatio-temporal data, illustrating how far modern applications have evolved from their roots. The book ends with a focus on multivariate and long-memory count series.
Get Guidance from Masters in the Field
Written by a cohesive group of distinguished contributors, this handbook provides a unified account of the diverse techniques available for observation- and parameter-driven models. It covers likelihood and approximate likelihood methods, estimating equations, simulation methods, and a Bayesian approach for model fitting.
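
One of the simplest observation-driven count models regresses a count on a function of its own recent past; the base-R sketch below is a generic illustration of that idea, not a method taken from the handbook.

    # Minimal observation-driven count model: Poisson regression on the lagged count.
    set.seed(42)
    n <- 200
    y <- numeric(n); y[1] <- 5
    for (t in 2:n) {                                    # simulate a simple count series
      y[t] <- rpois(1, lambda = exp(0.3 + 0.25 * log(y[t - 1] + 1)))
    }
    dat <- data.frame(y = y[-1], lag1 = log(y[-n] + 1))
    fit <- glm(y ~ lag1, family = poisson, data = dat)  # log-linear autoregression
    summary(fit)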

A Computational Approach to Statistical Learning (Paperback)
Taylor Arnold, Michael Kane, Bryan W. Lewis
R1,491 Discovery Miles 14 910 Ships in 12 - 17 working days

A Computational Approach to Statistical Learning gives a novel introduction to predictive modeling by focusing on the algorithmic and numeric motivations behind popular statistical methods. The text contains annotated code for over 80 original reference functions. These functions provide minimal working implementations of common statistical learning algorithms. Every chapter concludes with a fully worked out application that illustrates predictive modeling tasks using a real-world dataset. The text begins with a detailed analysis of linear models and ordinary least squares. Subsequent chapters explore extensions such as ridge regression, generalized linear models, and additive models. The second half focuses on the use of general-purpose algorithms for convex optimization and their application to tasks in statistical learning. Models covered include the elastic net, dense neural networks, convolutional neural networks (CNNs), and spectral clustering. A unifying theme throughout the text is the use of optimization theory in the description of predictive models, with a particular focus on the singular value decomposition (SVD). Through this theme, the computational approach motivates and clarifies the relationships between various predictive models. Taylor Arnold is an assistant professor of statistics at the University of Richmond. His work at the intersection of computer vision, natural language processing, and digital humanities has been supported by multiple grants from the National Endowment for the Humanities (NEH) and the American Council of Learned Societies (ACLS). His first book, Humanities Data in R, was published in 2015. Michael Kane is an assistant professor of biostatistics at Yale University. He is the recipient of grants from the National Institutes of Health (NIH), DARPA, and the Bill and Melinda Gates Foundation. His R package bigmemory won the Chambers prize for statistical software in 2010. Bryan Lewis is an applied mathematician and author of many popular R packages, including irlba, doRedis, and threejs.
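
Since the blurb singles out the singular value decomposition as a unifying theme, a small base-R sketch of the standard SVD route to ordinary least squares (a textbook derivation, not code from this book) shows the kind of computation involved.

    # Ordinary least squares via the SVD: X = U D V', beta = V D^{-1} U' y.
    set.seed(7)
    n <- 100; p <- 3
    X <- cbind(1, matrix(rnorm(n * (p - 1)), n))          # design matrix with intercept
    beta_true <- c(2, -1, 0.5)
    y <- drop(X %*% beta_true) + rnorm(n, sd = 0.3)

    s <- svd(X)
    beta_svd <- s$v %*% ((t(s$u) %*% y) / s$d)            # least-squares solution via the SVD
    cbind(svd = drop(beta_svd), lm = coef(lm(y ~ X - 1))) # agrees with lm()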

Advanced Spatial Modeling with Stochastic Partial Differential Equations Using R and INLA (Paperback)
Elias Krainski, Virgilio Gomez-Rubio, Haakon Bakka, Amanda Lenzi, Daniela Castro-Camilo, …
R1,469 Discovery Miles 14 690 Ships in 12 - 17 working days

Modeling spatial and spatio-temporal continuous processes is an important and challenging problem in spatial statistics. Advanced Spatial Modeling with Stochastic Partial Differential Equations Using R and INLA describes in detail the stochastic partial differential equations (SPDE) approach for modeling continuous spatial processes with a Matérn covariance, which has been implemented using the integrated nested Laplace approximation (INLA) in the R-INLA package. Key concepts about modeling spatial processes and the SPDE approach are explained with examples using simulated data and real applications. This book has been authored by leading experts in spatial statistics, including the main developers of the INLA and SPDE methodologies and the R-INLA package. It also includes a wide range of applications:
* Spatial and spatio-temporal models for continuous outcomes
* Analysis of spatial and spatio-temporal point patterns
* Coregionalization spatial and spatio-temporal models
* Measurement error spatial models
* Modeling preferential sampling
* Spatial and spatio-temporal models with physical barriers
* Survival analysis with spatial effects
* Dynamic space-time regression
* Spatial and spatio-temporal models for extremes
* Hurdle models with spatial effects
* Penalized Complexity priors for spatial models
All the examples in the book are fully reproducible. Further information about this book, as well as the R code and datasets used, is available from the book website at http://www.r-inla.org/spde-book. The tools described in this book will be useful to researchers in many fields such as biostatistics, spatial statistics, environmental sciences, epidemiology, ecology and others. Graduate and Ph.D. students will also find this book and associated files a valuable resource to learn INLA and the SPDE approach for spatial modeling.

Bayesian Demographic Estimation and Forecasting (Paperback)
John Bryant, Junni L. Zhang
R1,419 Discovery Miles 14 190 Ships in 12 - 17 working days

Bayesian Demographic Estimation and Forecasting presents three statistical frameworks for modern demographic estimation and forecasting. The frameworks draw on recent advances in statistical methodology to provide new tools for tackling challenges such as disaggregation, measurement error, missing data, and combining multiple data sources. The methods apply to single demographic series, or to entire demographic systems. The methods unify estimation and forecasting, and yield detailed measures of uncertainty. The book assumes minimal knowledge of statistics, and no previous knowledge of demography. The authors have developed a set of R packages implementing the methods. Data and code for all applications in the book are available on www.bdef-book.com. "This book will be welcome for the scientific community of forecasters...as it presents a new approach which has already given important results and which, in my opinion, will increase its importance in the future." --Daniel Courgeau, Institut national d'études démographiques

Missing and Modified Data in Nonparametric Estimation - With R Examples (Paperback)
Sam Efromovich
R1,470 Discovery Miles 14 700 Ships in 12 - 17 working days

This book presents a systematic and unified approach to the modern nonparametric treatment of missing and modified data via examples of density and hazard rate estimation, nonparametric regression, filtering signals, and time series analysis. All basic types of missing at random and not at random data, biasing, truncation, censoring, and measurement errors are discussed, and their treatment is explained. Ten chapters of the book cover basic cases of direct data, biased data, nondestructive and destructive missing, survival data modified by truncation and censoring, missing survival data, stationary and nonstationary time series and processes, and ill-posed modifications. The coverage is suitable for self-study or a one-semester course for graduate students with a prerequisite of a standard course in introductory probability. Exercises of various levels of difficulty will be helpful for the instructor and for self-study. The book is primarily about practically important small samples. It explains when consistent estimation is possible, why in some cases missing data should be ignored, and why in others it must be taken into account. If missingness or data modification makes consistent estimation impossible, the author explains what type of action is needed to restore the lost information. The book contains more than a hundred figures with simulated data that illustrate virtually every setting, claim, and development. The companion R software package allows the reader to verify, reproduce, and modify every simulation and the estimators used. This makes the material fully transparent and allows one to study it interactively. Sam Efromovich is the Endowed Professor of Mathematical Sciences and the Head of the Actuarial Program at the University of Texas at Dallas. He is well known for his work on the theory and application of nonparametric curve estimation and is the author of Nonparametric Curve Estimation: Methods, Theory, and Applications. Professor Efromovich is a Fellow of the Institute of Mathematical Statistics and the American Statistical Association.

Linear Models and the Relevant Distributions and Matrix Algebra (Paperback)
David A Harville
R1,336 Discovery Miles 13 360 Ships in 12 - 17 working days

Linear Models and the Relevant Distributions and Matrix Algebra provides in-depth and detailed coverage of the use of linear statistical models as a basis for parametric and predictive inference. It can be a valuable reference, a primary or secondary text in a graduate-level course on linear models, or a resource used (in a course on mathematical statistics) to illustrate various theoretical concepts in the context of a relatively complex setting of great practical importance. Features:
* Provides coverage of matrix algebra that is extensive and relatively self-contained, and does so in a meaningful context
* Provides thorough coverage of the relevant statistical distributions, including spherically and elliptically symmetric distributions
* Includes extensive coverage of multiple-comparison procedures (and of simultaneous confidence intervals), including procedures for controlling the k-FWER and the FDR
* Provides thorough coverage (complete with detailed and highly accessible proofs) of results on the properties of various linear-model procedures, including those of least squares estimators and those of the F test
* Features the use of real data sets for illustrative purposes
* Includes many exercises
David Harville served for 10 years as a mathematical statistician in the Applied Mathematics Research Laboratory of the Aerospace Research Laboratories at Wright-Patterson AFB, Ohio, 20 years as a full professor in Iowa State University's Department of Statistics where he now has emeritus status, and seven years as a research staff member of the Mathematical Sciences Department of IBM's T.J. Watson Research Center. He has considerable relevant experience, having taught M.S. and Ph.D. level courses in linear models, been the thesis advisor of 10 Ph.D. graduates, and authored or co-authored two books and more than 80 research articles. His work has been recognized through his election as a Fellow of the American Statistical Association and of the Institute of Mathematical Statistics and as a member of the International Statistical Institute.

Bayesian Regression Modeling with INLA (Paperback)
Xiaofeng Wang, Yu Ryan Yue, Julian J. Faraway
R1,483 Discovery Miles 14 830 Ships in 12 - 17 working days

INLA stands for Integrated Nested Laplace Approximations, which is a new method for fitting a broad class of Bayesian regression models. No samples of the posterior marginal distributions need to be drawn using INLA, so it is a computationally convenient alternative to Markov chain Monte Carlo (MCMC), the standard tool for Bayesian inference. Bayesian Regression Modeling with INLA covers a wide range of modern regression models and focuses on the INLA technique for building Bayesian models using real-world data and assessing their validity. A key theme throughout the book is that it makes sense to demonstrate the interplay of theory and practice with reproducible studies. Complete R commands are provided for each example, and a supporting website holds all of the data described in the book. An R package including the data and additional functions in the book is available to download. The book is aimed at readers who have a basic knowledge of statistical theory and Bayesian methodology. It gets readers up to date on the latest in Bayesian inference using INLA and prepares them for sophisticated, real-world work. Xiaofeng Wang is Professor of Medicine and Biostatistics at the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University and a Full Staff in the Department of Quantitative Health Sciences at Cleveland Clinic. Yu Ryan Yue is Associate Professor of Statistics in the Paul H. Chook Department of Information Systems and Statistics at Baruch College, The City University of New York. Julian J. Faraway is Professor of Statistics in the Department of Mathematical Sciences at the University of Bath.
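
The package's core interface is a single model-fitting call; below is a minimal sketch of the basic usage described in the package documentation. The INLA package is distributed from its own repository rather than CRAN, so installation follows the instructions at www.r-inla.org; treat the exact call as an assumption to check against the book's examples.

    # Minimal INLA regression sketch (assumes the INLA package is installed).
    library(INLA)
    set.seed(3)
    df <- data.frame(x = rnorm(100))
    df$y <- 1 + 2 * df$x + rnorm(100)

    fit <- inla(y ~ x, family = "gaussian", data = df)  # no MCMC sampling involved
    summary(fit)                                        # posterior summaries of the fixed effects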

Bioassays with Arthropods (Paperback, 3rd edition)
Jacqueline L. Robertson, Efren Olguin, Brad Alberts, Moneen Marie Jones
R1,587 Discovery Miles 15 870 Ships in 12 - 17 working days

Imagine a statistics book for bioassays written by a statistician. Next, imagine a statistics book for bioassays written for a layman. Bioassays with Arthropods, Third Edition offers the best of both worlds by translating the terse, precise language of the statistician into language used by the laboratory scientist. The book explains the statistical basis and analysis for each kind of quantal response bioassay in just the right amount of detail. The first two editions were a great reference for designing, conducting, and interpreting bioassays; this completely revised and updated third edition will also train the laboratory scientist to be an expert in the estimation of dose-response curves. New in the Third Edition:
* Introduces four new Windows- and Apple-based computer programs (PoloJR, OptiDose, PoloMixture, and PoloMulti) for the analysis of binary and multiple responses
* Replaces out-of-date GLIM examples with R program samples
* Includes a new chapter, Population Toxicology, and takes a systems approach to bioassays
* Expands the coverage of invasive species and quarantine statistics
Building on the foundation set by the much-cited first two editions, the authors clearly delineate applications and ideas that are exceptionally challenging for those not already familiar with their use. They lead you through the methods with such ease and organization that you suddenly find yourself readily able to apply concepts that you never thought you would understand. To order the PoloSuite computer software described in Bioassays with Arthropods, Third Edition, use the order form found at www.leora-software.com or contact the LeOra Software Company at [email protected].
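
The dose-response analyses described here are, at their core, quantal-response fits; the base-R sketch below shows a generic probit fit and an LD50 estimate on made-up data, as an illustration of the type of analysis rather than the PoloSuite software itself.

    # Generic quantal-response (probit) dose-response fit on made-up data.
    dose   <- c(1, 2, 4, 8, 16, 32)
    n      <- rep(50, 6)                 # subjects tested at each dose
    killed <- c(2, 7, 19, 33, 44, 49)    # responders at each dose

    fit <- glm(cbind(killed, n - killed) ~ log(dose),
               family = binomial(link = "probit"))
    summary(fit)
    exp(-coef(fit)[1] / coef(fit)[2])    # LD50: dose where the linear predictor is zero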

Basic Experimental Strategies and Data Analysis for Science and Engineering (Paperback)
John Lawson, John Erjavec
R1,496 Discovery Miles 14 960 Ships in 12 - 17 working days

Every technical investigation involving trial-and-error experimentation embodies a strategy for deciding what experiments to perform, when to quit, and how to interpret the data. This handbook presents several statistically derived strategies that are more efficient than any intuitive approach and will get the investigator to their goal with the fewest experiments, give the greatest degree of reliability to their conclusions, and keep the risk of overlooking something of practical importance to a minimum. Features:
* Provides a comprehensive desk reference on experimental design that will be useful to practitioners without extensive statistical knowledge
* Features a review of the necessary statistical prerequisites
* Presents a set of tables that allow readers to quickly access various experimental designs
* Includes a roadmap for where and when to use various experimental design strategies
* Shows compelling examples of each method discussed
* Illustrates how to reproduce results using several popular software packages on a supplementary website
Following the outlines and examples in this book should quickly allow a working professional or student to select the appropriate experimental design for a research problem at hand, follow the design to conduct the experiments, and analyze and interpret the resulting data. John Lawson and John Erjavec have a combined 25 years of industrial experience and over 40 years of academic experience. They have taught this material to numerous practicing engineers and scientists as well as undergraduate and graduate students.
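
The designs these strategies start from can be generated in a line or two of base R; for instance, a two-level full factorial in three factors (a generic example, not one of the book's case studies).

    # Two-level full factorial design in three factors, coded -1 / +1.
    design <- expand.grid(A = c(-1, 1), B = c(-1, 1), C = c(-1, 1))
    design                                      # 2^3 = 8 runs
    design <- design[sample(nrow(design)), ]    # randomize the run order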

Essentials of a Successful Biostatistical Collaboration (Paperback)
Arul Earnest
R1,421 Discovery Miles 14 210 Ships in 12 - 17 working days

The aim of this book is to equip biostatisticians and other quantitative scientists with the skills, knowledge, and habits needed to collaborate effectively with clinicians in the healthcare field. The book provides valuable insight on where to look for information and material on sample size and statistical techniques commonly used in clinical research, and on how best to communicate with clinicians. It also covers best practices in project, time, and data management and in relationships with collaborators.

Sample Size Calculations for Clustered and Longitudinal Outcomes in Clinical Research (Paperback)
Chul Ahn, Moonseoung Heo, Song Zhang
R1,474 Discovery Miles 14 740 Ships in 12 - 17 working days

Accurate sample size calculation ensures that clinical studies have adequate power to detect clinically meaningful effects. This results in the efficient use of resources and avoids exposing a disproportionate number of patients to experimental treatments in an overpowered study. Sample Size Calculations for Clustered and Longitudinal Outcomes in Clinical Research explains how to determine sample size for studies with correlated outcomes, which are widely implemented in medical, epidemiological, and behavioral studies. The book focuses on issues specific to the two types of correlated outcomes: longitudinal and clustered. For clustered studies, the authors provide sample size formulas that accommodate variable cluster sizes and within-cluster correlation. For longitudinal studies, they present sample size formulas that account for within-subject correlation among repeated measurements and various missing data patterns. For multiple levels of clustering, the level at which to perform randomization actually becomes a design parameter, and the authors show how this can greatly impact trial administration, analysis, and sample size requirements. Addressing the overarching theme of sample size determination for correlated outcomes, this book provides a useful resource for biostatisticians, clinical investigators, epidemiologists, and social scientists whose research involves trials with correlated outcomes. Each chapter is self-contained, so readers can explore topics relevant to their research projects without having to refer to other chapters.
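
For the clustered case, the adjustment alluded to above is often expressed through a design effect that inflates the sample size required under independence. The base-R sketch below uses the standard equal-cluster-size formula, not the variable-cluster-size formulas developed in the book, and the numbers are illustrative.

    # Sample size for a cluster-randomized comparison of two means (sketch).
    n_indep <- power.t.test(delta = 0.4, sd = 1, power = 0.8,
                            sig.level = 0.05)$n        # per arm, ignoring clustering
    m    <- 20                                         # subjects per cluster
    icc  <- 0.05                                       # within-cluster correlation
    deff <- 1 + (m - 1) * icc                          # design effect

    n_clustered <- ceiling(n_indep * deff)             # per arm, after inflation
    c(per_arm = n_clustered, clusters_per_arm = ceiling(n_clustered / m))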

Introduction to Multivariate Analysis - Linear and Nonlinear Modeling (Paperback)
Sadanori Konishi
R1,486 Discovery Miles 14 860 Ships in 12 - 17 working days

Select the Optimal Model for Interpreting Multivariate Data Introduction to Multivariate Analysis: Linear and Nonlinear Modeling shows how multivariate analysis is widely used for extracting useful information and patterns from multivariate data and for understanding the structure of random phenomena. Along with the basic concepts of various procedures in traditional multivariate analysis, the book covers nonlinear techniques for clarifying phenomena behind observed multivariate data. It primarily focuses on regression modeling, classification and discrimination, dimension reduction, and clustering. The text thoroughly explains the concepts and derivations of the AIC, BIC, and related criteria and includes a wide range of practical examples of model selection and evaluation criteria. To estimate and evaluate models with a large number of predictor variables, the author presents regularization methods, including the L1 norm regularization that gives simultaneous model estimation and variable selection. For advanced undergraduate and graduate students in statistical science, this text provides a systematic description of both traditional and newer techniques in multivariate analysis and machine learning. It also introduces linear and nonlinear statistical modeling for researchers and practitioners in industrial and systems engineering, information science, life science, and other areas.
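
The information criteria at the center of the book are available for any fitted model in R; the short sketch below compares a linear and a quadratic fit with AIC and BIC on simulated data (a generic illustration, not an example from the text).

    # Comparing candidate models with AIC and BIC (smaller is better).
    set.seed(11)
    x <- runif(100, -2, 2)
    y <- 1 + x - 0.5 * x^2 + rnorm(100, sd = 0.4)

    m1 <- lm(y ~ x)             # linear model
    m2 <- lm(y ~ x + I(x^2))    # quadratic model
    AIC(m1, m2)
    BIC(m1, m2)                 # both criteria should favor the quadratic model here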

Financial and Actuarial Statistics - An Introduction, Second Edition (Paperback, 2nd edition)
Dale S. Borowiak, Arnold F. Shapiro
R1,499 Discovery Miles 14 990 Ships in 12 - 17 working days

Understand Up-to-Date Statistical Techniques for Financial and Actuarial Applications
Since the first edition was published, statistical techniques such as reliability measurement, simulation, regression, and Markov chain modeling have become more prominent in the financial and actuarial industries. Consequently, practitioners and students must acquire strong mathematical and statistical backgrounds in order to have successful careers. Financial and Actuarial Statistics: An Introduction, Second Edition enables readers to obtain the necessary mathematical and statistical background. It also advances the application and theory of statistics in modern financial and actuarial modeling. Like its predecessor, this second edition considers financial and actuarial modeling from a statistical point of view while adding a substantial amount of new material. New to the Second Edition:
* Nomenclature and notations standard to the actuarial field
* Excel exercises with solutions, which demonstrate how to use Excel functions for statistical and actuarial computations
* Problems dealing with standard probability and statistics theory, along with detailed equation links
* A chapter on Markov chains and actuarial applications
* Expanded discussions of simulation techniques and applications, such as investment pricing
* Sections on the maximum likelihood approach to parameter estimation as well as asymptotic applications
* Discussions of diagnostic procedures for nonnegative random variables and Pareto, lognormal, Weibull, and left truncated distributions
* Expanded material on surplus models and ruin computations
* Discussions of nonparametric prediction intervals, option pricing diagnostics, variance of the loss function associated with standard actuarial models, and Gompertz and Makeham distributions
* Sections on the concept of actuarial statistics for a collection of stochastic status models
The book presents a unified approach to both financial and actuarial modeling through the use of general status structures. The authors define future time-dependent financial actions in terms of a status structure that may be either deterministic or stochastic. They show how deterministic status structures lead to classical interest and annuity models, investment pricing models, and aggregate claim models. They also employ stochastic status structures to develop financial and actuarial models, such as surplus models, life insurance, and life annuity models.

Confidence Intervals for Proportions and Related Measures of Effect Size (Paperback)
Robert Gordon Newcombe
R1,505 Discovery Miles 15 050 Ships in 12 - 17 working days

Confidence Intervals for Proportions and Related Measures of Effect Size illustrates the use of effect size measures and corresponding confidence intervals as more informative alternatives to the most basic and widely used significance tests. The book provides you with a deep understanding of what happens when these statistical methods are applied in situations far removed from the familiar Gaussian case. Drawing on his extensive work as a statistician and professor at Cardiff University School of Medicine, the author brings together methods for calculating confidence intervals for proportions and several other important measures, including differences, ratios, and nonparametric effect size measures generalizing Mann-Whitney and Wilcoxon tests. He also explains three important approaches to obtaining intervals for related measures. Many examples illustrate the application of the methods in the health and social sciences. Requiring little computational skills, the book offers user-friendly Excel spreadsheets for download at www.crcpress.com, enabling you to easily apply the methods to your own empirical data.
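
One widely used interval of the kind covered here is the Wilson score interval for a single proportion; the base-R sketch below computes it directly and, for comparison, calls prop.test, which reports a score-type interval with a continuity correction. The counts are made up.

    # Wilson score 95% confidence interval for a single proportion.
    x <- 37; n <- 120                        # observed successes out of n trials
    p <- x / n
    z <- qnorm(0.975)

    centre    <- (p + z^2 / (2 * n)) / (1 + z^2 / n)
    halfwidth <- z * sqrt(p * (1 - p) / n + z^2 / (4 * n^2)) / (1 + z^2 / n)
    c(lower = centre - halfwidth, upper = centre + halfwidth)

    prop.test(x, n)$conf.int                 # score interval with continuity correction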

Measures of Interobserver Agreement and Reliability (Paperback, 2nd edition)
Mohamed M. Shoukri
R1,479 Discovery Miles 14 790 Ships in 12 - 17 working days

Measures of Interobserver Agreement and Reliability, Second Edition covers important issues related to the design and analysis of reliability and agreement studies. It examines factors affecting the degree of measurement errors in reliability generalization studies and characteristics influencing the process of diagnosing each subject in a reliability study. The book also illustrates the importance of blinding and random selection of subjects. New to the Second Edition:
* A new chapter that describes various models for methods comparison studies
* A new chapter on the analysis of reproducibility using the within-subjects coefficient of variation
* Emphasis on the definition of the subjects' and raters' population as well as sample size determination
This edition continues to offer guidance on how to run sound reliability and agreement studies in clinical settings and other types of investigations. The author explores two ways of producing one pooled estimate of agreement from several centers: a fixed-effect approach and a random sample of centers using a simple meta-analytic approach. The text includes end-of-chapter exercises as well as downloadable resources of data sets and SAS code.
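
The most familiar agreement measure in this family is Cohen's kappa for two raters, which can be computed directly from a cross-tabulation in base R; this is a generic illustration with made-up counts, and kappa is only one of the measures the book treats.

    # Cohen's kappa for two raters classifying the same subjects (made-up counts).
    tab <- matrix(c(40,  6,
                     4, 50), nrow = 2, byrow = TRUE,
                  dimnames = list(rater1 = c("pos", "neg"),
                                  rater2 = c("pos", "neg")))

    po <- sum(diag(tab)) / sum(tab)                      # observed agreement
    pe <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance-expected agreement
    (po - pe) / (1 - pe)                                 # Cohen's kappa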

Circular and Linear Regression - Fitting Circles and Lines by Least Squares (Paperback)
Nikolai Chernov
R1,478 Discovery Miles 14 780 Ships in 12 - 17 working days

Find the right algorithm for your image processing application. Exploring the recent achievements that have occurred since the mid-1990s, Circular and Linear Regression: Fitting Circles and Lines by Least Squares explains how to use modern algorithms to fit geometric contours (circles and circular arcs) to observed data in image processing and computer vision. The author covers all facets of the methods: geometric, statistical, and computational. He looks at how the numerical algorithms relate to one another through underlying ideas, compares the strengths and weaknesses of each algorithm, and illustrates how to combine the algorithms to achieve the best performance. After introducing errors-in-variables (EIV) regression analysis and its history, the book summarizes the solution of the linear EIV problem and highlights its main geometric and statistical properties. It next describes the theory of fitting circles by least squares, before focusing on practical geometric and algebraic circle fitting methods. The text then covers the statistical analysis of curve and circle fitting methods. The last chapter presents a sample of "exotic" circle fits, including some mathematically sophisticated procedures that use complex numbers and conformal mappings of the complex plane. Essential for understanding the advantages and limitations of the practical schemes, this book thoroughly addresses the theoretical aspects of the fitting problem. It also identifies obscure issues that may be relevant in future research.
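
The simplest of the algebraic circle fits analyzed in the book can be written as an ordinary least-squares problem: write the circle as x^2 + y^2 = a*x + b*y + c and solve for (a, b, c) linearly. The base-R sketch below is this classical Kåsa-type fit on simulated points, shown as a generic illustration rather than the author's recommended method.

    # Algebraic (Kasa-type) circle fit by linear least squares.
    set.seed(5)
    theta <- runif(60, 0, 2 * pi)
    x <- 3 + 2 * cos(theta) + rnorm(60, sd = 0.05)   # noisy points on a circle
    y <- -1 + 2 * sin(theta) + rnorm(60, sd = 0.05)  # centre (3, -1), radius 2

    fit <- lm(I(x^2 + y^2) ~ x + y)                  # x^2 + y^2 = a*x + b*y + c
    a <- coef(fit)["x"]; b <- coef(fit)["y"]; c0 <- coef(fit)["(Intercept)"]

    centre <- c(a, b) / 2
    radius <- sqrt(c0 + sum(centre^2))
    list(centre = unname(centre), radius = unname(radius))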
