
Books > Science & Mathematics > Mathematics > Probability & statistics

Time Series Modeling of Neuroscience Data (Hardcover, New)
Tohru Ozaki
R4,793 Discovery Miles 47 930 Ships in 12 - 17 working days

Recent advances in brain science measurement technology have given researchers access to very large-scale time series data such as EEG/MEG data (20 to 100 dimensional) and fMRI data (140,000 dimensional). To analyze such massive data, efficient computational and statistical methods are required. Time Series Modeling of Neuroscience Data shows how to efficiently analyze neuroscience data by the Wiener-Kalman-Akaike approach, in which dynamic models of all kinds, such as linear/nonlinear differential equation models and time series models, are used for whitening the temporally dependent time series in the framework of linear/nonlinear state space models. Using as little mathematics as possible, the book explores the basic concepts of this approach and their derivatives as useful tools for time series analysis. Unique features include:
  • A statistical identification method for highly nonlinear dynamical systems such as the Hodgkin-Huxley model, the Lorenz chaos model, the Zetterberg model, and more
  • Methods and applications for dynamic causality analysis developed by Wiener, Granger, and Akaike
  • A state space modeling method for dynamicization of solutions for inverse problems
  • A heteroscedastic state space modeling method for dynamic non-stationary signal decomposition, with applications to signal detection problems in EEG data analysis
  • An innovation-based method for the characterization of nonlinear and/or non-Gaussian time series
  • An innovation-based method for spatial time series modeling for fMRI data analysis
The main point of the book is to show that the same data can be treated using both a dynamical-system and a time series approach, so that neural and physiological information can be extracted more efficiently. Of course, time series modeling is valid not only in neuroscience data analysis but also in many other science and engineering fields where statistical inference from observed time series data plays an important role.
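
The whitening idea at the core of the Wiener-Kalman-Akaike approach is easiest to see in the simplest state space model. Below is a minimal base-R Kalman filter for a local-level model; it is an illustrative sketch, not code from the book, and the noise parameters and simulated "EEG channel" are invented. The filter's innovations are the whitened residuals the description refers to.

```r
# Minimal univariate Kalman filter for a local-level state space model
# (illustrative sketch; parameters and data are simulated).
kalman_filter <- function(y, sigma_w = 1, sigma_v = 1, x0 = 0, P0 = 10) {
  n <- length(y)
  x_hat <- numeric(n); P <- numeric(n); innov <- numeric(n)
  x_prev <- x0; P_prev <- P0
  for (t in seq_len(n)) {
    # Predict
    x_pred <- x_prev
    P_pred <- P_prev + sigma_w^2
    # Update: the innovation is the "whitened" part of the observation
    innov[t] <- y[t] - x_pred
    K <- P_pred / (P_pred + sigma_v^2)        # Kalman gain
    x_hat[t] <- x_pred + K * innov[t]
    P[t] <- (1 - K) * P_pred
    x_prev <- x_hat[t]; P_prev <- P[t]
  }
  list(state = x_hat, variance = P, innovations = innov)
}

set.seed(1)
x <- cumsum(rnorm(200, sd = 0.5))   # latent state (random walk)
y <- x + rnorm(200, sd = 1)         # noisy observations, e.g. one recording channel
fit <- kalman_filter(y, sigma_w = 0.5, sigma_v = 1)
acf(fit$innovations, plot = FALSE)  # innovations should be close to white noise
```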

An Objective Theory of Probability (Routledge Revivals) (Paperback)
Donald Gillies
R1,713 Discovery Miles 17 130 Ships in 12 - 17 working days

This reissue of D. A. Gillies' highly influential work, first published in 1973, is a philosophical theory of probability which seeks to develop von Mises' views on the subject. In agreement with von Mises, the author regards probability theory as a mathematical science like mechanics or electrodynamics, and probability as an objective, measurable concept like force, mass, or charge. On the other hand, Dr Gillies rejects von Mises' definition of probability in terms of limiting frequency and claims that probability should be taken as a primitive or undefined term in accordance with modern axiomatic approaches. This of course raises the problem of how the abstract calculus of probability should be connected with the 'actual world of experiments'. It is suggested that this link should be established not by a definition of probability but by an application of Popper's concept of falsifiability. In addition to formulating his own interesting theory, Dr Gillies gives a detailed criticism of the generally accepted Neyman-Pearson theory of testing, as well as of alternative philosophical approaches to probability theory. The reissue will be of interest both to philosophers with no previous knowledge of probability theory and to mathematicians interested in the foundations of probability theory and statistics.

Selected Topics On Continuous-time Controlled Markov Chains And Markov Games (Hardcover)
Onesimo Hernandez-Lerma, Tomas Prieto-Rumeau
R2,826 Discovery Miles 28 260 Ships in 12 - 17 working days

This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. The book is also concerned with Markov games, where two decision-makers (or players) try to optimize their own objective functions. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward) and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on the application of the results herein: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. This book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it may also be of interest to undergraduate and beginning graduate students, because the reader is not assumed to have an advanced mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.
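
As a toy illustration of the continuous-time Markov chains underlying these models, the base-R sketch below simulates a chain from a generator matrix by drawing exponential holding times and jump destinations. The rate matrix is invented for illustration; this is not material from the book.

```r
# Simulate a continuous-time Markov chain from a generator (rate) matrix Q.
simulate_ctmc <- function(Q, state0, t_max) {
  states <- state0; times <- 0
  s <- state0; t <- 0
  while (t < t_max) {
    rate <- -Q[s, s]                       # total exit rate from the current state
    if (rate <= 0) break                   # absorbing state
    t <- t + rexp(1, rate)                 # exponential holding time
    probs <- Q[s, ] / rate; probs[s] <- 0  # jump probabilities
    s <- sample(seq_len(nrow(Q)), 1, prob = probs)
    states <- c(states, s); times <- c(times, t)
  }
  data.frame(time = times, state = states)
}

# A 3-state birth-death chain (rates are arbitrary, for illustration only)
Q <- matrix(c(-1,  1,  0,
               2, -3,  1,
               0,  2, -2), nrow = 3, byrow = TRUE)
set.seed(42)
path <- simulate_ctmc(Q, state0 = 1, t_max = 10)
head(path)
```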

Machine Learning - A Practical Approach on the Statistical Learning Theory (Hardcover, 1st ed. 2018)
Rodrigo F Mello, Moacir Antonelli Ponti
R2,618 Discovery Miles 26 180 Ships in 12 - 17 working days

This book presents Statistical Learning Theory in a detailed and easy-to-understand way, using practical examples, algorithms, and source code. It can be used as a textbook in graduate or undergraduate courses, for self-learners, or as a reference on the main theoretical concepts of Machine Learning. Fundamental concepts of Linear Algebra and Optimization applied to Machine Learning are provided, as well as source code in R, making the book as self-contained as possible. It starts with an introduction to Machine Learning concepts and algorithms such as the Perceptron, the Multilayer Perceptron, and Distance-Weighted Nearest Neighbors, with examples, in order to provide the necessary foundation so the reader is able to understand the Bias-Variance Dilemma, which is the central point of Statistical Learning Theory. Afterwards, we introduce all assumptions and formalize Statistical Learning Theory, allowing the practical study of different classification algorithms. Then, we proceed with concentration inequalities until arriving at the Generalization and Large-Margin bounds, providing the main motivations for Support Vector Machines. From there, we introduce all necessary optimization concepts related to the implementation of Support Vector Machines. To provide a next stage of development, the book finishes with a discussion of SVM kernels as a way to study data spaces and improve classification results.
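
Since the book builds from the perceptron toward the bias-variance dilemma and SVMs, a minimal perceptron in R gives a feel for the starting point. This is an illustrative sketch on simulated data, not the source code distributed with the book.

```r
# Minimal perceptron for linearly separable data (illustrative sketch).
perceptron <- function(X, y, eta = 0.1, epochs = 100) {
  X <- cbind(1, X)                    # add bias column
  w <- rep(0, ncol(X))
  for (e in seq_len(epochs)) {
    updated <- FALSE
    for (i in seq_len(nrow(X))) {
      y_hat <- ifelse(sum(w * X[i, ]) >= 0, 1, -1)
      if (y_hat != y[i]) {            # update only on misclassification
        w <- w + eta * y[i] * X[i, ]
        updated <- TRUE
      }
    }
    if (!updated) break               # converged: all points classified correctly
  }
  w
}

set.seed(7)
X <- rbind(matrix(rnorm(40, mean = 2), ncol = 2),
           matrix(rnorm(40, mean = -2), ncol = 2))
y <- c(rep(1, 20), rep(-1, 20))
w <- perceptron(X, y)
mean(sign(cbind(1, X) %*% w) == y)    # training accuracy, should be 1
```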

Statistical Methods in Psychiatry and Related Fields - Longitudinal, Clustered, and Other Repeated Measures Data (Paperback)
Ralitza Gueorguieva
R1,551 Discovery Miles 15 510 Ships in 12 - 17 working days

Data collected in psychiatry and related fields are complex: outcomes are rarely directly observed, there are multiple correlated repeated measures within individuals, and there is natural heterogeneity in treatment responses and in other characteristics of the populations. Simple statistical methods do not work well with such data. More advanced statistical methods capture the data complexity better but are difficult to apply appropriately and correctly by investigators who do not have advanced training in statistics. This book presents, at a non-technical level, several approaches for the analysis of correlated data: mixed models for continuous and categorical outcomes, nonparametric methods for repeated measures, and growth mixture models for heterogeneous trajectories over time. Separate chapters are devoted to techniques for multiple comparison correction, analysis in the presence of missing data, adjustment for covariates, assessment of mediator and moderator effects, study design, and sample size considerations. The focus is on the assumptions of each method, applicability, and interpretation rather than on technical details. Features:
  • Provides an overview of intermediate to advanced statistical methods applied to psychiatry.
  • Takes a non-technical approach, with mathematical details kept to a minimum.
  • Includes many detailed examples from published studies in psychiatry and related fields.
  • Software programs, data sets, and output are available on a supplementary website.
The intended audience is applied researchers with minimal knowledge of statistics, although the book could also benefit collaborating statisticians. The book, together with the online materials, is a valuable resource aimed at promoting the use of appropriate statistical methods for the analysis of repeated measures data. Ralitza Gueorguieva is a Senior Research Scientist at the Department of Biostatistics, Yale School of Public Health. She has more than 20 years' experience in statistical methodology development and collaborations with psychiatrists and other researchers, and is the author of over 130 peer-reviewed publications.

Rational Queueing (Paperback)
Refael Hassin
R1,554 Discovery Miles 15 540 Ships in 12 - 17 working days

Understand strategic behavior in queueing systems: Rational Queueing provides one of the first unified accounts of the dynamic aspects involved in strategic behavior in queues. It explores the performance of queueing systems where multiple agents, such as customers, servers, and central managers, all act, often in a noncooperative manner. The book first addresses observable queues and models that assume state-dependent behavior. It then discusses other types of information in queueing systems and compares observable and unobservable variations and incentives for information disclosure. The next several chapters present relevant models for the maximization of individual utilities, social welfare, and profits. After covering queueing networks, from simple parallel servers to general network structures, the author describes models for planned vacations and forced vacations (such as breakdowns). Focusing on supply chain models, he then shows how the agents of these models may have different goals, yet all profit when the system operates efficiently. The final chapter relaxes the assumption of fully rational agents to allow for bounded rationality.
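
To make state-dependent behavior in an observable queue concrete, the base-R sketch below simulates an M/M/1 queue in which arriving customers balk when the observed queue length reaches a threshold. The balking rule and parameter values are invented for illustration and are not taken from the book.

```r
# Observable M/M/1 queue with a simple threshold balking rule:
# an arriving customer joins only if fewer than n_max customers are present.
simulate_balking_queue <- function(lambda, mu, n_max, t_end) {
  t <- 0; n <- 0; joined <- 0; balked <- 0
  while (t < t_end) {
    rate <- lambda + if (n > 0) mu else 0
    t <- t + rexp(1, rate)
    if (runif(1) < lambda / rate) {          # next event is an arrival
      if (n < n_max) { n <- n + 1; joined <- joined + 1 } else balked <- balked + 1
    } else {                                 # next event is a service completion
      n <- n - 1
    }
  }
  c(joined = joined, balked = balked)
}

set.seed(3)
simulate_balking_queue(lambda = 0.9, mu = 1, n_max = 3, t_end = 10000)
```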

A Course in Categorical Data Analysis (Hardcover)
Thomas Leonard
R5,337 Discovery Miles 53 370 Ships in 12 - 17 working days

Categorical data, comprising counts of individuals, objects, or entities in different categories, emerge frequently from many areas of study, including medicine, sociology, geology, and education. They provide important statistical information that can lead to real-life conclusions and the discovery of fresh knowledge. Therefore, the ability to manipulate, understand, and interpret categorical data becomes of interest, if not essential, to professionals and students in a broad range of disciplines. Although t-tests, linear regression, and analysis of variance are useful, valid methods for the analysis of measurement data, categorical data require a different methodology and techniques typically not encountered in introductory statistics courses. Developed from long experience in teaching categorical analysis to a multidisciplinary mix of undergraduate and graduate students, A Course in Categorical Data Analysis presents the easiest, most straightforward ways of extracting real-life conclusions from contingency tables. The author uses a Fisherian approach to categorical data analysis and incorporates numerous examples and real data sets. Although he offers S-PLUS routines through the Internet, readers do not need full knowledge of a statistical software package. In this unique text, the author chooses methods and an approach that nurtures intuitive thinking. He trains his readers to focus not on finding a model that fits the data, but on using different models that may lead to meaningful conclusions. The book offers some simple, innovative techniques not highlighted in other texts that help make the book accessible to a broad, interdisciplinary audience. A Course in Categorical Data Analysis enables readers to quickly use its offering of tools for drawing scientific, medical, or real-life conclusions from categorical data sets.
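
As a minimal illustration of drawing conclusions from a contingency table, the R calls below run the Pearson chi-squared and Fisher exact tests on a small 2x2 table. The counts are invented; the book itself supplies S-PLUS routines, but equivalent calls exist in R.

```r
# A 2x2 contingency table (counts are illustrative, not from the book).
tab <- matrix(c(30, 10,
                20, 25),
              nrow = 2, byrow = TRUE,
              dimnames = list(treatment = c("A", "B"),
                              outcome   = c("success", "failure")))

chisq.test(tab)     # Pearson chi-squared test of independence
fisher.test(tab)    # Fisher's exact test, in keeping with the Fisherian approach
```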

Data Analysis Using Hierarchical Generalized Linear Models with R (Paperback)
Youngjo Lee, Maengseok Noh, Lars Ronnegard
R1,488 Discovery Miles 14 880 Ships in 12 - 17 working days

Since their introduction, hierarchical generalized linear models (HGLMs) have proven useful in various fields by allowing random effects in regression models. Interest in the topic has grown, and various practical analytical tools have been developed. This book summarizes developments within the field and, using data examples, illustrates how to analyse various kinds of data using R. It provides a likelihood approach to advanced statistical modelling, including generalized linear models with random effects, survival analysis and frailty models, multivariate HGLMs, factor and structural equation models, robust modelling of random effects, models with penalty terms and variable selection, and hypothesis testing. This example-driven book is aimed primarily at researchers and graduate students who wish to perform data modelling beyond the frequentist framework, and especially at those searching for a bridge between Bayesian and frequentist statistics.
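
For a quick feel for a GLM with random effects before tackling the h-likelihood machinery, the sketch below fits a random-intercept logistic model with lme4::glmer on simulated data. Note that lme4 is used here only as a familiar stand-in; it is not the book's h-likelihood approach, and the data are made up.

```r
# A generalized linear model with a clinic-level random intercept,
# fitted with lme4::glmer as a familiar stand-in (simulated data).
library(lme4)

set.seed(11)
d <- data.frame(
  clinic = factor(rep(1:20, each = 15)),
  x      = rnorm(300)
)
u <- rnorm(20, sd = 0.8)                  # clinic-level random effects
eta <- -0.5 + 1.2 * d$x + u[d$clinic]
d$y <- rbinom(300, 1, plogis(eta))

fit <- glmer(y ~ x + (1 | clinic), data = d, family = binomial)
summary(fit)
```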

Designing General Linear Models to Test Research Hypotheses (Paperback)
Keith McNeil, Isadore Newman, John W. Fraas
R1,614 Discovery Miles 16 140 Ships in 12 - 17 working days

This text focuses on designing General Linear Models (regression models) to test research hypotheses. The authors illustrate and discuss General Linear Models specifically designed to statistically test research hypotheses that deal with differences among group means, relationships between continuous variables, analysis of covariance, interaction effects, nonlinear relationships, and repeated measures. Many of the chapters contain sections entitled "General Hypothesis" and "Applied Hypothesis." The General Hypothesis sections are designed to provide readers with "road maps" for conducting the various analyses presented in the text. The Applied Hypothesis sections illustrate how the various analyses are conducted with Microsoft Excel and SPSS for Windows and how the outputs should be interpreted to test the hypotheses. Throughout the text, the authors stress the importance of designing regression models that precisely reflect the null and research hypotheses.
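
The full-versus-restricted model comparison underlying this hypothesis-testing strategy can be illustrated compactly. The book works through Excel and SPSS, but the same comparison in R, on simulated data, looks like the following sketch.

```r
# Testing a research hypothesis by comparing a restricted (null) linear
# model with a full model that adds the effect of interest (simulated data).
set.seed(2)
d <- data.frame(group = factor(rep(c("control", "treatment"), each = 40)),
                pre   = rnorm(80))
d$post <- 2 + 0.6 * d$pre + 0.8 * (d$group == "treatment") + rnorm(80)

restricted <- lm(post ~ pre, data = d)          # null: no group difference
full       <- lm(post ~ pre + group, data = d)  # research hypothesis: group effect
anova(restricted, full)                          # F test of the group effect
```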

Applied Statistics - Handbook of GENSTAT Analysis (Paperback)
E. J. Snell, H. Simpson
R1,278 Discovery Miles 12 780 Ships in 12 - 17 working days

GENSTAT is a general purpose statistical computing system with a flexible command language operating on a variety of data structures. It may be used on a number of computer ranges, either interactively for exploratory data analysis or in batch mode for standard data analysis. The great flexibility of GENSTAT is demonstrated in this handbook by analysing the wide range of examples discussed in Applied Statistics - Principles and Examples (Cox and Snell, 1981). GENSTAT programs are listed for each of the examples. Most of the data sets are small, but it is often these seemingly small problems that involve the trickiest statistical and computational procedures. This handbook is self-contained, although for a full description of the analysis and interpretation it should be used in parallel with Applied Statistics - Principles and Examples.

Using R for Modelling and Quantitative Methods in Fisheries (Hardcover)
Malcolm Haddon
R5,358 Discovery Miles 53 580 Ships in 12 - 17 working days

Using R for Modelling and Quantitative Methods in Fisheries has evolved and been adapted from an earlier book by the same author and provides a detailed introduction to analytical methods commonly used by fishery scientists, ecologists, and advanced students using the open-source software R as a programming tool. Some knowledge of R is assumed, as this is a book about using R, but an introduction to the development and working of functions, and to how one can explore the contents of R functions and packages, is provided. The example analyses proceed step-by-step using code listed in the book and from the book's companion R package, MQMF, available from GitHub and the standard archive, CRAN. The examples are designed to be simple to modify so the reader can quickly adapt the methods described for use with their own data. A primary aim of the book is to be a useful resource for natural resource practitioners and students. Featured chapters:
  • Model Parameter Estimation provides a detailed explanation of the requirements and steps involved in fitting models to data, using R and, mainly, maximum likelihood methods.
  • On Uncertainty uses R to implement bootstrapping, likelihood profiles, asymptotic errors, and Bayesian posteriors to characterize the uncertainty in an analysis. The use of Markov chain Monte Carlo (MCMC) methodology is examined in some detail.
  • Surplus Production Models applies all the methods examined in the earlier parts of the book to conducting a stock assessment. This includes fitting alternative models to the available data, characterizing the uncertainty in different ways, and projecting the optimum models forward in time as the basis for providing useful management advice.
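
As a taste of the surplus production material, the base-R sketch below projects a Schaefer model forward under a constant catch. It does not use the MQMF package, and the parameter values are invented for illustration.

```r
# Project a Schaefer surplus production model forward under a constant catch.
project_schaefer <- function(r, K, B0, catch, years) {
  B <- numeric(years + 1); B[1] <- B0
  for (t in seq_len(years)) {
    surplus <- r * B[t] * (1 - B[t] / K)            # Schaefer production
    B[t + 1] <- max(B[t] + surplus - catch, 1e-6)   # keep biomass positive
  }
  B
}

B <- project_schaefer(r = 0.3, K = 10000, B0 = 4000, catch = 600, years = 30)
round(B)
# Reference points implied by these (made-up) parameters: MSY = rK/4, Bmsy = K/2
c(MSY = 0.3 * 10000 / 4, Bmsy = 10000 / 2)
```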

Absolute Risk - Methods and Applications in Clinical Management and Public Health (Paperback)
Ruth M. Pfeiffer, Mitchell H. Gail
R1,529 Discovery Miles 15 290 Ships in 12 - 17 working days

Absolute Risk: Methods and Applications in Clinical Management and Public Health provides theory and examples to demonstrate the importance of absolute risk in counseling patients, devising public health strategies, and clinical management. The book provides sufficient technical detail to allow statisticians, epidemiologists, and clinicians to build, test, and apply models of absolute risk. Features: Provides theoretical basis for modeling absolute risk, including competing risks and cause-specific and cumulative incidence regression Discusses various sampling designs for estimating absolute risk and criteria to evaluate models Provides details on statistical inference for the various sampling designs Discusses criteria for evaluating risk models and comparing risk models, including both general criteria and problem-specific expected losses in well-defined clinical and public health applications Describes many applications encompassing both disease prevention and prognosis, and ranging from counseling individual patients, to clinical decision making, to assessing the impact of risk-based public health strategies Discusses model updating, family-based designs, dynamic projections, and other topics Ruth M. Pfeiffer is a mathematical statistician and Fellow of the American Statistical Association, with interests in risk modeling, dimension reduction, and applications in epidemiology. She developed absolute risk models for breast cancer, colon cancer, melanoma, and second primary thyroid cancer following a childhood cancer diagnosis. Mitchell H. Gail developed the widely used "Gail model" for projecting the absolute risk of invasive breast cancer. He is a medical statistician with interests in statistical methods and applications in epidemiology and molecular medicine. He is a member of the National Academy of Medicine and former President of the American Statistical Association. Both are Senior Investigators in the Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health.
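
When the cause-specific hazard of the event and the hazard of competing mortality are both constant, absolute risk has a simple closed form, which the sketch below evaluates in R. The hazard values are invented; the book's models are far more general.

```r
# Absolute risk of the event of interest by time t with constant
# cause-specific hazard (lambda1) and competing-risk hazard (lambda2):
#   pi(t) = lambda1 / (lambda1 + lambda2) * (1 - exp(-(lambda1 + lambda2) * t))
absolute_risk <- function(lambda1, lambda2, t) {
  total <- lambda1 + lambda2
  lambda1 / total * (1 - exp(-total * t))
}

absolute_risk(lambda1 = 0.01, lambda2 = 0.02, t = 10)  # 10-year absolute risk
# Compare with the naive 1 - exp(-lambda1 * t), which ignores competing mortality
1 - exp(-0.01 * 10)
```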

Omic Association Studies with R and Bioconductor (Paperback)
Juan R Gonzalez, Alejandro Caceres
R1,493 Discovery Miles 14 930 Ships in 12 - 17 working days

After the great expansion of genome-wide association studies, their scientific methodology and, notably, their data analysis have matured in recent years, and they are now a keystone in large epidemiological studies. Newcomers to the field are confronted with a wealth of data, resources, and methods. This book presents current methods to perform informative analyses using real and illustrative data with established bioinformatics tools and guides the reader through the use of publicly available data. It:
  • Includes clear, readable programming code for readers to reproduce and adapt to their own data
  • Emphasises extracting biologically meaningful associations between traits of interest and genomic, transcriptomic, and epigenomic data
  • Uses up-to-date methods to exploit omic data
  • Presents methods through specific examples and computing sessions
  • Is supplemented by a website, including code, datasets, and solutions
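
One representative Bioconductor association workflow is a moderated-t differential expression analysis with limma, sketched below on a simulated expression matrix. limma is used here only as an illustrative tool, and the dataset and effect sizes are made up.

```r
# Moderated t-tests with limma on a simulated expression matrix.
library(limma)

set.seed(5)
expr <- matrix(rnorm(1000 * 20), nrow = 1000,
               dimnames = list(paste0("gene", 1:1000), paste0("sample", 1:20)))
group <- factor(rep(c("control", "case"), each = 10),
                levels = c("control", "case"))
expr[1:25, group == "case"] <- expr[1:25, group == "case"] + 1.5  # spiked-in signal

design <- model.matrix(~ group)
fit <- lmFit(expr, design)
fit <- eBayes(fit)
topTable(fit, coef = 2, number = 5)   # top genes associated with case status
```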

Statistical Methods for Survival Trial Design - With Applications to Cancer Clinical Trials Using R (Paperback)
Jianrong Wu
R1,476 Discovery Miles 14 760 Ships in 12 - 17 working days

Statistical Methods for Survival Trial Design: With Applications to Cancer Clinical Trials Using R provides a thorough presentation of the principles of designing and monitoring cancer clinical trials in which time-to-event is the primary endpoint. Traditional cancer trial designs with time-to-event endpoints are often limited to the exponential model or proportional hazards model. In practice, however, those model assumptions may not be satisfied for long-term survival trials. This book is the first to comprehensively cover the many newly developed methodologies for survival trial design, including trial design under Weibull survival models; extensions of the sample size calculations under proportional hazards models; and trial design under mixture cure models, complex survival models, Cox regression models, and competing-risk models. A general sequential procedure based on the sequential conditional probability ratio test is also implemented for survival trial monitoring. All methodologies are presented with sufficient detail for interested researchers or graduate students.
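
The classical starting point for such designs is Schoenfeld's formula for the required number of events under proportional hazards with 1:1 allocation, shown below in base R. The book extends well beyond this, but the calculation illustrates the basic trade-off between target hazard ratio, power, and events; the event probability used at the end is an invented planning assumption.

```r
# Required number of events for a two-arm survival trial under proportional
# hazards (Schoenfeld's formula, 1:1 allocation).
events_needed <- function(hr, alpha = 0.05, power = 0.80) {
  4 * (qnorm(1 - alpha / 2) + qnorm(power))^2 / log(hr)^2
}

d <- events_needed(hr = 0.70)   # target hazard ratio 0.70
ceiling(d)                      # about 247 events
ceiling(d / 0.6)                # sample size if roughly 60% of patients have an event
```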

Nonlinear Lp-Norm Estimation (Paperback)
Rene Gonin
R1,871 Discovery Miles 18 710 Ships in 12 - 17 working days

Complete with valuable FORTRAN programs that help solve nondifferentiable nonlinear L1- and L∞-norm estimation problems, this important reference/text extensively delineates a history of Lp-norm estimation. It examines the nonlinear Lp-norm estimation problem, which is a viable alternative to least squares estimation when the underlying error distribution is nonnormal, i.e., non-Gaussian. Nonlinear Lp-Norm Estimation addresses both computational and statistical aspects of Lp-norm estimation problems to bridge the gap between these two fields; contains 70 useful illustrations; discusses linear Lp-norm as well as nonlinear L1-, L∞-, and Lp-norm estimation problems; provides all appropriate computational algorithms and FORTRAN listings for nonlinear L1- and L∞-norm estimation problems; guides readers with clear end-of-chapter notes on related topics and outstanding research publications; contains numerical examples plus several practical problems; and shows how the data can prescribe various applications of Lp-norm alternatives. Nonlinear Lp-Norm Estimation is an indispensable reference for statisticians, operations researchers, numerical analysts, applied mathematicians, biometricians, and computer scientists, as well as a text for graduate students in statistics or computer science.
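
In modern R, the same Lp-norm objectives can be minimized with a general-purpose optimizer. The sketch below fits a two-parameter exponential decay by L1 and L2 criteria using optim on simulated heavy-tailed data; it is an illustration in the spirit of the book's FORTRAN routines, not a translation of them.

```r
# Nonlinear Lp-norm estimation: minimize sum(|residual|^p) for a nonlinear
# model y = a * exp(-b * x) using a general-purpose optimizer (simulated data).
set.seed(9)
x <- seq(0, 10, length.out = 100)
y <- 5 * exp(-0.4 * x) + rt(100, df = 3) * 0.3   # heavy-tailed (non-Gaussian) errors

lp_objective <- function(theta, p) {
  resid <- y - theta[1] * exp(-theta[2] * x)
  sum(abs(resid)^p)
}

fit_l1 <- optim(c(1, 0.1), lp_objective, p = 1)   # L1 (robust) estimate
fit_l2 <- optim(c(1, 0.1), lp_objective, p = 2)   # ordinary least squares
rbind(L1 = fit_l1$par, L2 = fit_l2$par)           # compare estimates of (a, b)
```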

Analyzing Longitudinal Clinical Trial Data - A Practical Guide (Paperback)
Craig Mallinckrodt, Ilya Lipkovich
R1,484 Discovery Miles 14 840 Ships in 12 - 17 working days

Analyzing Longitudinal Clinical Trial Data: A Practical Guide provides practical, easy-to-implement approaches for bringing the latest theory on the analysis of longitudinal clinical trial data into routine practice. The book, with its example-oriented approach that includes numerous SAS and R code fragments, is an essential resource for statisticians and graduate students specializing in medical research. The authors provide clear descriptions of the relevant statistical theory and illustrate practical considerations for modeling longitudinal data. Topics covered include choice of endpoint and statistical test; modeling means and the correlations between repeated measurements; accounting for covariates; modeling categorical data; model verification; methods for incomplete (missing) data, including the latest developments in sensitivity analyses, along with approaches for and issues in choosing estimands; and means for preventing missing data. Each chapter stands alone in its coverage of a topic. The concluding chapters provide detailed advice on how to integrate these independent topics into an over-arching study development process and statistical analysis plan.

Sequential Analysis - Hypothesis Testing and Changepoint Detection (Paperback)
Alexander Tartakovsky, Igor Nikiforov, Michele Basseville
R1,524 Discovery Miles 15 240 Ships in 12 - 17 working days

Sequential Analysis: Hypothesis Testing and Changepoint Detection systematically develops the theory of sequential hypothesis testing and quickest changepoint detection. It also describes important applications in which theoretical results can be used efficiently. The book reviews recent accomplishments in hypothesis testing and changepoint detection both in decision-theoretic (Bayesian) and non-decision-theoretic (non-Bayesian) contexts. The authors emphasize not only traditional binary hypotheses but also substantially more difficult multiple decision problems. They address scenarios with simple hypotheses and more realistic cases of two and finitely many composite hypotheses. The book primarily focuses on practical discrete-time models, with certain continuous-time models also examined when general results can be obtained very similarly in both cases. It treats both conventional i.i.d. and general non-i.i.d. stochastic models in detail, including Markov, hidden Markov, state-space, regression, and autoregression models. Rigorous proofs are given for the most important results. Written by leading authorities in the field, this book covers the theoretical developments and applications of sequential hypothesis testing and sequential quickest changepoint detection in a wide range of engineering and environmental domains. It explains how the theoretical aspects influence the hypothesis testing and changepoint detection problems as well as the design of algorithms.
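
A minimal example of quickest changepoint detection is the CUSUM procedure for a shift in the mean of Gaussian observations, sketched below in base R. The threshold and shift size are illustrative choices, not recommendations from the book.

```r
# CUSUM procedure for quickest detection of an upward shift in the mean
# of Gaussian observations.
cusum_detect <- function(x, mu0, mu1, sigma = 1, h = 5) {
  llr <- (mu1 - mu0) / sigma^2 * (x - (mu0 + mu1) / 2)  # per-observation log-LR
  W <- 0
  for (t in seq_along(x)) {
    W <- max(0, W + llr[t])
    if (W >= h) return(t)          # alarm time
  }
  NA                               # no alarm raised
}

set.seed(13)
x <- c(rnorm(300, mean = 0), rnorm(100, mean = 0.75))  # change occurs at t = 301
cusum_detect(x, mu0 = 0, mu1 = 0.75)                   # detection delay past 300 is small
```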

Statistical Methods in Drug Combination Studies (Paperback)
Wei Zhao, Harry Yang
R1,471 Discovery Miles 14 710 Ships in 12 - 17 working days

The growing interest in using combination drugs to treat various complex diseases has spawned the development of many novel statistical methodologies. The theoretical development, coupled with advances in statistical computing, makes it possible to apply these emerging statistical methods in in vitro and in vivo drug combination assessments. However, despite these advances, no book has served as a single source of information on statistical methods in drug combination research, nor has there been any guidance for experimental strategies. Statistical Methods in Drug Combination Studies fills that gap, covering all aspects of drug combination research, from designing in vitro drug combination studies to analyzing clinical trial data. Featuring contributions from researchers in industry, academia, and regulatory agencies, this comprehensive reference:
  • Describes statistical models used to characterize dose-response patterns of monotherapies and to evaluate combination drug synergy
  • Offers guidance for estimating interaction indices and constructing their associated confidence intervals to assess drug interaction
  • Introduces a practical and innovative Bayesian approach to Phase I cancer trials, including actual trial examples to illustrate its use
  • Examines strategies in fixed-dose combination therapy clinical development via case studies stemming from regulatory reviews
  • Evaluates computational tools and software packages used to apply novel statistical methods in combination drug development
Statistical Methods in Drug Combination Studies provides researchers with a solid understanding of the available statistical methods and computational tools and how to apply them in drug combination studies. The book is equally useful for statisticians who want to become better equipped to deal with drug combination study design and analysis in their practice.
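
For the interaction-index material, a common Loewe-additivity-based index for two drugs whose single-agent dose-response curves follow the median-effect (Hill) model can be computed directly, as sketched below. The parameter values are invented, and the confidence-interval construction discussed in the book is omitted.

```r
# Loewe-additivity interaction index for a two-drug combination, assuming
# each single agent follows a median-effect (Hill) curve. Values below 1
# suggest synergy, values above 1 antagonism. Parameters are illustrative.
dose_for_effect <- function(E, Dm, m) Dm * (E / (1 - E))^(1 / m)

interaction_index <- function(d1, d2, E, Dm1, m1, Dm2, m2) {
  d1 / dose_for_effect(E, Dm1, m1) + d2 / dose_for_effect(E, Dm2, m2)
}

# Combination of 2 + 3 dose units producing 60% inhibition:
interaction_index(d1 = 2, d2 = 3, E = 0.60,
                  Dm1 = 5, m1 = 1.2,   # single-agent median-effect dose and slope, drug 1
                  Dm2 = 8, m2 = 0.9)   # drug 2
```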

The Inverse Gaussian Distribution - Theory, Methodology, and Applications (Paperback)
Raj Chhikara
R1,861 Discovery Miles 18 610 Ships in 12 - 17 working days

This monograph is a compilation of research on the inverse Gaussian distribution. It emphasizes the presentation of the statistical properties, methods, and applications of the two-parameter inverse Gaussian family of distributions. It is useful to statisticians and users of statistical distributions.
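
The two-parameter density itself is simple enough to write down directly; the base-R sketch below codes it and checks its normalization and mean numerically. Packages such as statmod provide ready-made versions.

```r
# Density of the two-parameter inverse Gaussian distribution,
#   f(x; mu, lambda) = sqrt(lambda / (2 pi x^3)) * exp(-lambda (x - mu)^2 / (2 mu^2 x)),
# which has mean mu and variance mu^3 / lambda.
dinvgauss_manual <- function(x, mu, lambda) {
  sqrt(lambda / (2 * pi * x^3)) * exp(-lambda * (x - mu)^2 / (2 * mu^2 * x))
}

# Numerical check that the density integrates to 1 and has mean mu:
integrate(dinvgauss_manual, 0, Inf, mu = 2, lambda = 3)$value
integrate(function(x) x * dinvgauss_manual(x, mu = 2, lambda = 3), 0, Inf)$value
```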

Design and Analysis of Clinical Trials for Predictive Medicine (Paperback)
Shigeyuki Matsui, Marc Buyse, Richard Simon
R1,555 Discovery Miles 15 550 Ships in 12 - 17 working days

Design and Analysis of Clinical Trials for Predictive Medicine provides statistical guidance on conducting clinical trials for predictive medicine. It covers statistical topics relevant to the main clinical research phases for developing molecular diagnostics and therapeutics, from identifying molecular biomarkers using DNA microarrays to confirming their clinical utility in randomized clinical trials. The foundation of modern clinical trials was laid many years before modern developments in biotechnology and genomics. Drug development in many diseases is now shifting to molecularly targeted treatment. Confronted with such a major break in the evolution toward personalized or predictive medicine, the methodologies for design and analysis of clinical trials are now evolving. This book is one of the first attempts to contribute to this evolution by laying a foundation for the use of appropriate statistical designs and methods in future clinical trials for predictive medicine. It is a useful resource for clinical biostatisticians, researchers focusing on predictive medicine, clinical investigators, translational scientists, and graduate biostatistics students.

Design & Analysis of Clinical Trials for Economic Evaluation & Reimbursement - An Applied Approach Using SAS & STATA (Paperback)
Iftekhar Khan
R1,486 Discovery Miles 14 860 Ships in 12 - 17 working days

Economic evaluation has become an essential component of clinical trial design to show that new treatments and technologies offer value to payers in various healthcare systems. Although many books exist that address the theoretical or practical aspects of cost-effectiveness analysis, this book differentiates itself from the competition by detailing how to apply health economic evaluation techniques in a clinical trial context, from both academic and pharmaceutical/commercial perspectives. It also includes a special chapter on clinical trials in cancer. Design & Analysis of Clinical Trials for Economic Evaluation & Reimbursement is not just about performing cost-effectiveness analyses. It also emphasizes the strategic importance of economic evaluation and offers guidance and advice on the complex factors at play before, during, and after an economic evaluation. Filled with detailed examples, the book bridges the gap between applications of economic evaluation in industry (mainly pharmaceutical) and what students may learn in university courses. It provides readers with access to SAS and STATA code. In addition, Windows-based software for sample size and value of information analysis is available free of charge, making it a valuable resource for students considering a career in this field or for those who simply wish to know more about applying economic evaluation techniques. The book includes coverage of trial design, case report form design, quality of life measures, sample sizes, submissions to regulatory authorities for reimbursement, Markov models, cohort models, and decision trees. Examples and case studies are provided at the end of each chapter. Presenting first-hand insights into how economic evaluations are performed from a drug development perspective, the book supplies readers with the foundation required to succeed in an environment where clinical trials and the cost-effectiveness of new treatments are central. It also includes thought-provoking exercises for use in classroom and seminar discussions.
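
The headline quantities of a trial-based economic evaluation, the incremental cost-effectiveness ratio (ICER) and net monetary benefit (NMB), reduce to one-line calculations once the incremental cost and effect are estimated. The numbers in the sketch below are invented for illustration.

```r
# ICER and net monetary benefit from trial-level summaries (illustrative values).
delta_cost   <- 4200      # mean incremental cost of the new treatment
delta_effect <- 0.35      # mean incremental QALYs
lambda       <- 20000     # willingness-to-pay threshold per QALY

icer <- delta_cost / delta_effect
nmb  <- lambda * delta_effect - delta_cost
c(ICER = icer, NMB = nmb) # NMB > 0 means cost-effective at this threshold
```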

Applied Biclustering Methods for Big and High-Dimensional Data Using R (Paperback)
Adetayo Kasim, Ziv Shkedy, Sebastian Kaiser, Sepp Hochreiter, Willem Talloen
R1,503 Discovery Miles 15 030 Ships in 12 - 17 working days

Proven methods for big data analysis: as big data has become standard in many application areas, challenges have arisen related to methodology and software development, including how to discover meaningful patterns in the vast amounts of data. Addressing these problems, Applied Biclustering Methods for Big and High-Dimensional Data Using R shows how to apply biclustering methods to find local patterns in a big data matrix. The book presents an overview of data analysis using biclustering methods from a practical point of view. Real case studies in drug discovery, genetics, marketing research, biology, toxicity, and sports illustrate the use of several biclustering methods. References to technical details of the methods are provided for readers who wish to investigate the full theoretical background. All the methods are accompanied by R examples that show how to conduct the analyses. The examples, software, and other materials are available on a supplementary website.

Sufficient Dimension Reduction - Methods and Applications with R (Paperback)
Bing Li
R1,481 Discovery Miles 14 810 Ships in 12 - 17 working days

Sufficient dimension reduction is a rapidly developing research field with wide applications in regression diagnostics, data visualization, machine learning, genomics, image processing, pattern recognition, and medicine, because these are all fields that produce large datasets with many variables. Sufficient Dimension Reduction: Methods and Applications with R introduces the basic theories and main methodologies, provides practical and easy-to-use algorithms and computer code to implement these methodologies, and surveys the recent advances at the frontiers of this field. Features:
  • Provides comprehensive coverage of this emerging research field.
  • Synthesizes a wide variety of dimension reduction methods under a few unifying principles such as projection in Hilbert spaces, kernel mapping, and von Mises expansion.
  • Reflects the most recent advances, such as nonlinear sufficient dimension reduction, dimension folding for tensorial data, and sufficient dimension reduction for functional data.
  • Includes a set of computer codes written in R that are easily implemented by readers.
  • Uses real data sets available online to illustrate the usage and power of the described methods.
Sufficient dimension reduction has undergone momentous development in recent years, partly due to the increased demand for techniques to process high-dimensional data, a hallmark of our age of Big Data. This book will serve as a perfect entry into the field for beginning researchers and a handy reference for advanced ones. The author, Bing Li, obtained his Ph.D. from the University of Chicago. He is currently a Professor of Statistics at the Pennsylvania State University. His research interests cover sufficient dimension reduction, statistical graphical models, functional data analysis, machine learning, estimating equations and quasilikelihood, and robust statistics. He is a fellow of the Institute of Mathematical Statistics and the American Statistical Association. He is an Associate Editor for The Annals of Statistics and the Journal of the American Statistical Association.
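
Sliced inverse regression (SIR), one of the field's core methods, is short enough to implement from scratch, as the base-R sketch below does on simulated data. This is an illustrative implementation, not the code accompanying the book.

```r
# Sliced inverse regression (SIR) from scratch: standardize X, slice y,
# and eigen-decompose the weighted covariance of the slice means.
sir_directions <- function(X, y, n_slices = 10, n_dir = 2) {
  n <- nrow(X)
  Z <- scale(X, center = TRUE, scale = FALSE)
  eig <- eigen(cov(X))
  Sigma_inv_sqrt <- eig$vectors %*% diag(1 / sqrt(eig$values)) %*% t(eig$vectors)
  Z <- Z %*% Sigma_inv_sqrt                        # standardized predictors
  slices <- cut(rank(y, ties.method = "first"), n_slices, labels = FALSE)
  M <- matrix(0, ncol(X), ncol(X))
  for (h in unique(slices)) {
    idx <- slices == h
    m_h <- colMeans(Z[idx, , drop = FALSE])
    M <- M + (sum(idx) / n) * tcrossprod(m_h)      # weighted slice-mean outer product
  }
  Sigma_inv_sqrt %*% eigen(M)$vectors[, 1:n_dir]   # back-transform to the X scale
}

set.seed(21)
X <- matrix(rnorm(500 * 6), ncol = 6)
y <- (X[, 1] + X[, 2])^3 + rnorm(500, sd = 0.5)    # depends on one linear combination
B <- sir_directions(X, y, n_dir = 1)
round(B / sqrt(sum(B^2)), 2)  # roughly proportional to (1, 1, 0, 0, 0, 0), up to sign
```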

Handbook of Graphical Models (Paperback)
Marloes Maathuis, Mathias Drton, Steffen Lauritzen, Martin Wainwright
R1,906 Discovery Miles 19 060 Ships in 12 - 17 working days

A graphical model is a statistical model that is represented by a graph. The factorization properties underlying graphical models facilitate tractable computation with multivariate distributions, making the models a valuable tool with a plethora of applications. Furthermore, directed graphical models allow intuitive causal interpretations and have become a cornerstone for causal inference. While there exist a number of excellent books on graphical models, the field has grown so much that individual authors can hardly cover its entire scope. Moreover, the field is interdisciplinary by nature. Through chapters by leading researchers from different areas, this handbook provides a broad and accessible overview of the state of the art. Features:
  • Contributions by leading researchers from a range of disciplines
  • Structured in five parts, covering foundations, computational aspects, statistical inference, causal inference, and applications
  • Balanced coverage of concepts, theory, methods, examples, and applications
  • Chapters can be read mostly independently, while cross-references highlight connections
The handbook is targeted at a wide audience, including graduate students, applied researchers, and experts in graphical models.
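
The defining idea of an undirected Gaussian graphical model, that missing edges correspond to zero partial correlations (zero entries of the precision matrix), can be checked in a few lines of R on simulated data, as below. The example and its informal eyeballing of near-zero entries are purely illustrative.

```r
# In a Gaussian graphical model, missing edges correspond to zeros in the
# precision (inverse covariance) matrix, i.e. zero partial correlations.
set.seed(8)
n <- 2000
x1 <- rnorm(n)
x2 <- 0.8 * x1 + rnorm(n)
x3 <- 0.8 * x2 + rnorm(n)        # chain structure x1 - x2 - x3: no x1 - x3 edge
X  <- cbind(x1, x2, x3)

precision <- solve(cov(X))
partial_cor <- -cov2cor(precision)
diag(partial_cor) <- 1
round(partial_cor, 2)            # the (x1, x3) entry should be near 0
```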

Parallel Computing for Data Science - With Examples in R, C++ and CUDA (Paperback)
Norman Matloff
R1,487 Discovery Miles 14 870 Ships in 12 - 17 working days

Parallel Computing for Data Science: With Examples in R, C++ and CUDA is one of the first parallel computing books to concentrate exclusively on parallel data structures, algorithms, software tools, and applications in data science. It includes examples not only from the classic "n observations, p variables" matrix format but also from time series, network graph models, and numerous other structures common in data science. The examples illustrate the range of issues encountered in parallel programming. With the main focus on computation, the book shows how to compute on three types of platforms: multicore systems, clusters, and graphics processing units (GPUs). It also discusses software packages that span more than one type of hardware and can be used from more than one type of programming language. Readers will find that the foundation established in this book will generalize well to other languages, such as Python and Julia.
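
A minimal multicore example with base R's parallel package conveys the flavor of platform-portable parallelism, shown below. It is a generic sketch (the task and core count are arbitrary choices, not taken from the book), which goes much further into data structures, clusters, and GPUs.

```r
# Embarrassingly parallel example with base R's parallel package.
library(parallel)

slow_task <- function(i) {          # stand-in for an expensive per-chunk computation
  mean(replicate(200, mean(rnorm(1e4))))
}

cl <- makeCluster(2)                # portable socket cluster (works on all platforms)
res_parallel <- parLapply(cl, 1:8, slow_task)
stopCluster(cl)

res_serial <- lapply(1:8, slow_task)
length(res_parallel)                # same result structure as the serial version
```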
