Statistical Models in Toxicology presents an up-to-date and comprehensive account of statistical theory topics that occur in toxicology. The attention given by statisticians to the problem of health risk estimation for environmental and occupational exposures in recent decades has created excitement and optimism among both statisticians and toxicologists. The development of modern statistical techniques with solid mathematical foundations in the twentieth century, and the advent of modern computers in the latter part of that century, gave way to many statistical models and methods that describe toxicological processes and attempt to solve the associated problems. Not only are the models mathematically elegant and sophisticated, they are also widely used by industry and government regulatory agencies. Features:
* Focuses on the statistical models in environmental toxicology that facilitate the assessment of risk, mainly in humans. The properties and shortfalls of each model are discussed, and its impact on the process of risk assessment is examined.
* Discusses models that assess the risk of mixtures of chemicals.
* Presents statistical models developed for risk estimation in different areas of environmental toxicology, including cancer and carcinogenic substances.
* Includes models for developmental and reproductive toxicity risk assessment, risk assessment for continuous outcomes, and developmental neurotoxicity.
* Contains numerous examples and exercises.
Statistical Models in Toxicology introduces a wide variety of statistical models that are currently used for dose-response modeling and risk analysis. These models are often developed from the design and regulatory guidelines of toxicological experiments. The book is suitable for practitioners, and it can also be used as a textbook for advanced undergraduate or graduate students of mathematics and statistics.
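Dose-response modeling of this kind can be made concrete with a small sketch. The two-parameter logistic model and the benchmark-dose (BMD) calculation below are generic illustrations of the approach, not the book's own models; the coefficients are hypothetical.

```python
import numpy as np

# Hypothetical fitted two-parameter logistic dose-response model:
#   P(response | dose d) = 1 / (1 + exp(-(b0 + b1 * d)))
b0, b1 = -2.0, 0.5   # illustrative coefficients, not from the book

def risk(d):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * d)))

def extra_risk(d):
    # Extra risk over the background rate P(0), the scale commonly
    # used in regulatory benchmark-dose definitions
    p0 = risk(0.0)
    return (risk(d) - p0) / (1.0 - p0)

# Benchmark dose for a 10% benchmark response, found by inverting the logistic
bmr = 0.10
p0 = risk(0.0)
p_target = p0 + bmr * (1.0 - p0)
bmd = (np.log(p_target / (1.0 - p_target)) - b0) / b1
```

Plugging `bmd` back into `extra_risk` recovers the 10% benchmark response, which is a useful sanity check on any such inversion.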
Multiple Imputation of Missing Data in Practice: Basic Theory and Analysis Strategies provides a comprehensive introduction to the multiple imputation approach to the missing data problems often encountered in data analysis. Over the past 40 years or so, multiple imputation has undergone rapid development in both theory and applications. It is nowadays the most versatile, popular, and effective missing-data strategy, used by researchers and practitioners across different fields, and there is a strong need in the research and practical community to better understand it. Accessible to a broad audience, this book explains the statistical concepts of missing data problems and the associated terminology. It focuses on how to address missing data problems using multiple imputation, and describes the basic theory behind it along with many commonly used models and methods. These ideas are illustrated with examples from a wide variety of missing data problems. Real data from studies with different designs and features (e.g., cross-sectional data, longitudinal data, complex surveys, survival data, and studies subject to measurement error) are used to demonstrate the methods. So that readers not only know how to use the methods but also understand why multiple imputation works and how to choose among the methods, simulation studies are used to assess their performance. Example datasets and sample programming code are either included in the book or available from a GitHub site (https://github.com/he-zhang-hsu/multiple_imputation_book).
Key Features:
* Provides an overview of statistical concepts that are useful for better understanding missing data problems and multiple imputation analysis.
* Provides a detailed discussion of multiple imputation models and methods targeted at different types of missing data problems (e.g., univariate and multivariate missing data, missing data in survival analysis, longitudinal data, and complex surveys).
* Explores measurement error problems with multiple imputation.
* Discusses analysis strategies for multiple imputation diagnostics.
* Discusses data production issues when the goal of multiple imputation is to release datasets for public use, as done by organizations that process and manage large-scale surveys with nonresponse problems.
* For some examples, illustrative datasets and sample programming code from popular statistical packages (e.g., SAS, R, WinBUGS) are included in the book; for others, they are available from a GitHub site (https://github.com/he-zhang-hsu/multiple_imputation_book).
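The combining step of multiple imputation (Rubin's rules) is simple enough to sketch directly. The estimates below are made-up numbers for illustration, not output from the book's examples.

```python
import numpy as np

# Rubin's rules: combine point estimates and variances from m imputations.
estimates = np.array([2.1, 1.9, 2.3, 2.0, 2.2])      # illustrative estimates
variances = np.array([0.10, 0.12, 0.09, 0.11, 0.10]) # within-imputation variances
m = len(estimates)

q_bar = estimates.mean()          # pooled point estimate
u_bar = variances.mean()          # average within-imputation variance
b = estimates.var(ddof=1)         # between-imputation variance
t = u_bar + (1 + 1 / m) * b       # total variance of the pooled estimate
```

The between-imputation term `b` is what distinguishes this from naive averaging: it propagates the uncertainty due to the missing data into the final standard error.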
Potential readers include those who wish to build the mathematical foundations needed to handle high-frequency data in mathematical finance, and those who wish to learn the theoretical background of Cox's regression model in survival analysis. Features:
* Intuitive interpretations and concrete usages of fundamental theorems in martingale theory.
* Relatively new theorems in asymptotic statistics, presented in a completely self-contained way.
* A highlight of the monograph is Chapters 8-10, dealing with Z-estimators and related topics.
Nanohertz Gravitational Wave Astronomy explores the exciting hunt for low-frequency gravitational waves using the extraordinary timing precision of pulsars. The book takes the reader on a tour across the expansive gravitational-wave landscape, from LIGO detections to the search for polarization patterns in the Cosmic Microwave Background, then homes in on the band of nanohertz frequencies that Pulsar Timing Arrays (PTAs) are sensitive to. Within this band may lie many pairs of the most massive black holes in the entire Universe, all radiating in chorus to produce a background of gravitational waves. The book shows how such extra-Galactic gravitational waves can alter the arrival times of radio pulses emanating from monitored Galactic pulsars, and how we can use the pattern of correlated timing deviations from many pulsars to tease out the elusive signal. It takes a pragmatic approach to data analysis, explaining how it is performed in practice within classical and Bayesian statistics, as well as the numerous strategies one can use to optimize numerical Bayesian searches in PTA analyses. It closes with a complete discussion of the data model for nanohertz gravitational wave searches, and an overview of the past achievements, present efforts, and future prospects for PTAs. The book is accessible to upper-division undergraduate and graduate students of astronomy, and also serves as a useful desk reference for experts in the field. Key features:
* Contains a complete derivation of the pulsar timing response to gravitational waves, and the overlap reduction function for PTAs.
* Presents a comprehensive overview of source astrophysics, and the dynamical influences that shape the gravitational wave signals that PTAs are sensitive to.
* Serves as a detailed primer on gravitational-wave data analysis and numerical Bayesian techniques for PTAs.
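The correlated timing deviations mentioned above follow the Hellings-Downs curve, which is compact enough to sketch. The Earth-term-only form below is the standard textbook result, though normalization conventions vary across the literature.

```python
import math

# Hellings-Downs overlap reduction function (Earth term only):
# the expected correlation of timing residuals for two pulsars
# separated by angle theta on the sky, for an isotropic background.
def hellings_downs(theta):
    x = (1.0 - math.cos(theta)) / 2.0
    if x == 0.0:
        return 0.5                    # x * log(x) -> 0 as x -> 0
    return 1.5 * x * math.log(x) - x / 4.0 + 0.5
```

In this normalization the curve starts at 0.5 for co-located pulsars, dips negative for intermediate separations, and returns to 0.25 for antipodal pulsars; detecting this pattern across many pulsar pairs is the PTA signature of a gravitational-wave background.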
Due to recent theoretical findings and advances in statistical computing, techniques and applications in the area of missing data analysis have developed rapidly. Statistical Methods for Handling Incomplete Data covers the most up-to-date statistical theories and computational methods for analyzing incomplete data. Features:
* Uses the mean score equation as a building block for developing the theory of missing data analysis.
* Provides comprehensive coverage of computational techniques for missing data analysis.
* Presents a rigorous treatment of imputation techniques, including multiple imputation and fractional imputation.
* Explores the most recent advances in the propensity score method and estimation techniques for nonignorable missing data.
* Describes a survey sampling application.
* Updated with a new chapter on Data Integration.
* Now includes a chapter on Advanced Topics, including kernel ridge regression imputation and neural network model imputation.
The book is primarily aimed at researchers and graduate students in statistics, and can be used as a reference by applied researchers with a good quantitative background. It includes many real and simulated data examples to help readers understand the methodologies.
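The propensity score idea in this description can be illustrated with a small simulation: under missingness at random, a complete-case mean is biased, while an inverse-probability-weighted (IPW) mean using the response probabilities is not. The setup below is a generic sketch with known response probabilities, not the book's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated outcome and a covariate that drives missingness (MAR)
x = rng.normal(size=n)
y = 2.0 + x + rng.normal(size=n)       # true population mean of y is 2

# Response probability depends on x only (missing at random)
pi = 1.0 / (1.0 + np.exp(-(0.5 + x)))
delta = rng.random(n) < pi             # True = observed

# Naive complete-case mean is biased upward; IPW corrects it
naive = y[delta].mean()
ipw = np.sum(delta * y / pi) / n
```

The complete-case mean over-represents large-`x` (hence large-`y`) units; weighting each observed outcome by `1 / pi` restores the full-population target.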
1) Focuses on the concepts and implementation strategies of various deep learning algorithms through properly curated examples.
2) The subject area will remain relevant for the next ten years or so, as deep learning theory, algorithms, and their applications will not be outdated easily; hence there will be continued demand for such a book.
3) In comparison to other titles, this book rigorously covers the mathematical and conceptual details of the relevant topics.
This fourth edition contains several additions. The main ones concern three closely related topics: Brownian motion, functional limit distributions, and random walks. Besides the power and ingenuity of their methods and the depth and beauty of their results, their importance is fast growing in Analysis as well as in theoretical and applied Probability. These additions increased the book to an unwieldy size and it had to be split into two volumes. About half of the first volume is devoted to an elementary introduction, then to mathematical foundations and basic probability concepts and tools. The second half is devoted to a detailed study of Independence, which played and continues to play a central role both by itself and as a catalyst. The main additions consist of a section on convergence of probabilities on metric spaces and a chapter whose first section, on domains of attraction, completes the study of the Central Limit Problem, while the second is devoted to random walks. About a third of the second volume is devoted to conditioning and properties of sequences of various types of dependence. The other two thirds are devoted to random functions; the last Part, on Elements of Random Analysis, is more sophisticated. The main addition consists of a chapter on Brownian motion and limit distributions.
Cyberspace is changing the face of crime. For criminals it has become a place for rich collaboration and learning, not confined to any one country; a place where new kinds of crime can be carried out; and a vehicle for committing conventional crimes with unprecedented range, scale, and speed. Law enforcement faces a challenge in keeping up with, and dealing with, this new environment. The news is not all bad: collecting and analyzing data about criminals and their activities can provide new levels of insight into what they are doing and how they are doing it. However, using data analytics requires a change of process and new skills that (so far) many law enforcement organizations have had difficulty acquiring. Cyberspace, Data Analytics, and Policing surveys the changes that cyberspace has brought to criminality and to policing, with enough technical content to expose the issues and suggest ways in which law enforcement organizations can adapt. Key Features:
* Provides a non-technical but robust overview of how cyberspace enables new kinds of crime and changes existing crimes.
* Describes how criminals exploit the ability to communicate globally to learn, form groups, and acquire cybertools.
* Describes how law enforcement can use the ability to collect data and apply analytics to better protect society and to discover and prosecute criminals.
* Provides examples from open-source data of how hot-spot and intelligence-led policing can benefit law enforcement.
* Describes how law enforcement can exploit the ability to communicate globally to collaborate in dealing with trans-national crime.
* Coherent treatment of a variety of approaches to multiple comparisons.
* Broad coverage of topics, with contributions by internationally leading experts.
* Detailed treatment of applications in medicine and life sciences.
* Suitable for researchers, lecturers/students, and practitioners.
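As a concrete instance of the kind of multiple-comparison procedure such a volume covers, here is a minimal sketch of the Holm step-down method for controlling the family-wise error rate; the p-values are illustrative, not taken from the book.

```python
# Holm step-down procedure: test ordered p-values against successively
# less strict thresholds alpha/m, alpha/(m-1), ..., stopping at the
# first failure.
def holm(pvalues, alpha=0.05):
    """Return a list of booleans: True where the hypothesis is rejected."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                     # step-down: stop at the first failure
    return reject

decisions = holm([0.001, 0.04, 0.03, 0.005])
```

Holm is uniformly more powerful than the plain Bonferroni correction while still controlling the family-wise error rate, which is why it is often the default single-step alternative.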
Confidence Intervals for Discrete Data in Clinical Research is designed as a toolbox for biomedical researchers. Analysis of discrete data is one of the most used yet vexing areas in clinical research. The array of methodologies available in the literature to address the inferential questions for binomial and multinomial data can be a double-edged sword. On the one hand, these methods open a rich avenue of exploration of data; on the other, the wide-ranging and competing methodologies potentially lead to conflicting inferences, adding to researchers' confusion and frustration and also leading to reporting bias. This book addresses the problems that many practitioners experience in choosing and implementing fit-for-purpose data analysis methods to answer critical inferential questions for binomial and count data. The book is an outgrowth of the authors' collective experience in biomedical research and provides an excellent overview of inferential questions of interest for binomial proportions and rates based on count data, and reviews various solutions to these problems available in the literature. Each chapter discusses the strengths and weaknesses of the methods and suggests practical recommendations. The book's primary focus is on applications in clinical research, and the goal is to provide direct benefit to the users involved in the biomedical field.
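One of the standard binomial interval methods in this competing-methodologies landscape is the Wilson score interval, sketched below; the choice of method and the numbers are illustrative, not a recommendation from the book.

```python
import math

# Wilson score interval for a binomial proportion: invert the score
# test rather than the Wald test, which behaves much better for small
# n or extreme proportions.
def wilson_interval(x, n, z=1.96):
    p_hat = x / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_interval(50, 100)
```

Unlike the Wald interval, the Wilson interval never extends outside [0, 1] and does not collapse to zero width when x = 0 or x = n.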
Mathematics instructors are always looking for ways to engage students in meaningful and authentic tasks that utilize mathematics. At the same time, it is crucial for a democratic society to have a citizenry who can critically discriminate between "fake" and reliable news reports involving numeracy and apply numerical literacy to local and global issues. This book contains examples of topics linking math and social justice and addresses both goals. There is a broad range of mathematics used, including statistical methods, modeling, calculus, and basic algebra. The range of social issues is also diverse, including racial injustice, mass incarceration, income inequality, and environmental justice. There are lesson plans appropriate in many contexts: service-learning courses, quantitative literacy/reasoning courses, introductory courses, and classes for math majors. What makes this book unique and timely is that most previous curricula linking math and social justice have been written from a humanist perspective; this book is written by mathematicians, for mathematics students. Admittedly, it can be intimidating for instructors trained in quantitative methods to venture into the arena of social dilemmas. This volume provides encouragement, support, and a treasure trove of ideas to get you started. The chapters in this book were originally published as a special issue of the journal PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies.
Introduction to Robust Estimating and Hypothesis Testing, Fifth Edition is a useful 'how-to' on the application of robust methods utilizing easy-to-use software. This trusted resource provides an overview of modern robust methods, including improved techniques for dealing with outliers, skewed distributions, curvature, and heteroscedasticity that can provide substantial gains in power. Coverage includes techniques for comparing groups and measuring effect size, current methods for comparing quantiles, and expanded regression methods for both parametric and nonparametric techniques. The practical importance of these varied methods is illustrated using data from real-world studies. Over 1700 R functions are included to support comprehension and practice.
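A flavor of the robust location estimators such books rely on, the trimmed mean, can be given in a few lines; this sketch is a generic illustration, not one of the book's R functions.

```python
import numpy as np

# 20% trimmed mean: sort the data and discard the most extreme 20%
# from each tail before averaging, limiting the influence of outliers.
def trimmed_mean(data, proportion=0.2):
    x = np.sort(np.asarray(data, dtype=float))
    g = int(proportion * len(x))          # number trimmed from each tail
    return x[g:len(x) - g].mean()

values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]   # one gross outlier
```

On these values the ordinary mean is 14.5, dragged upward by the single outlier, while the 20% trimmed mean stays at 5.5, close to the bulk of the data.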
Large biological data, which are often noisy and high-dimensional, have become increasingly prevalent in biology and medicine. There is a real need for good training in statistics, from data exploration through to analysis and interpretation. This book provides an overview of statistical and dimension reduction methods for high-throughput biological data, with a specific focus on data integration. It starts with some biological background and the key concepts underlying the multivariate methods, then covers an array of methods implemented using the mixOmics package in R. Features:
* Provides a broad and accessible overview of methods for multi-omics data integration.
* Covers a wide range of multivariate methods, each designed to answer specific biological questions.
* Includes comprehensive visualisation techniques to aid in data interpretation.
* Includes many worked examples and case studies using real data.
* Includes reproducible R code for each multivariate method, using the mixOmics package.
The book is suitable for researchers from a wide range of scientific disciplines wishing to apply these methods to obtain new and deeper insights into biological mechanisms and biomedical problems. The suite of tools introduced will enable students and scientists to work at the interface between, and provide critical collaborative expertise to, biologists, bioinformaticians, statisticians and clinicians.
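The dimension-reduction step underlying many such multivariate methods is principal component analysis, sketched below via the SVD in plain numpy; mixOmics itself is an R package and is not used here, and the data are random for illustration.

```python
import numpy as np

# PCA via the singular value decomposition of the column-centred matrix:
# the right singular vectors are the component loadings, and the squared
# singular values give the variance explained per component.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))           # 50 samples, 10 variables
X = X - X.mean(axis=0)                  # column-centre

U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                          # sample coordinates on the PCs
explained = s**2 / np.sum(s**2)         # proportion of variance per PC
```

Keeping only the first few columns of `scores` gives the low-dimensional view used for the visualisation and integration steps that follow.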
Features:
* Covers all types of PDEs, namely elliptic (Laplace's, Helmholtz, modified Helmholtz, biharmonic, Stokes), parabolic (heat, convection-reaction-diffusion), and hyperbolic (wave).
* An excellent reference for post-graduates and researchers in mathematics, engineering, and any other scientific disciplines that deal with inverse problems.
* Contains both theory and numerical algorithms for solving all types of inverse and ill-posed problems.
With an emphasis on social science applications, Event History Analysis with R, Second Edition, presents an introduction to survival and event history analysis using real-life examples. Since publication of the first edition, focus in the field has gradually shifted towards the analysis of large and complex datasets. This has led to new ways of tabulating and analysing tabulated data with the same precision and power as an analysis of the full data set. Tabulation also makes it possible to share sensitive data with others without violating integrity. The new edition extends the content of the first, both improving already given methods and introducing new ones. There are two new chapters, Explanatory Variables and Regression, and Register-Based Survival Data Models. The book has been restructured to improve the flow, and there are significant updates to the computing in the supporting R package. Features
* Introduction to survival and event history analysis and how to solve problems with incomplete data using Cox regression.
* Parametric proportional hazards models, including the Weibull, Exponential, Extreme Value, and Gompertz distributions.
* Parametric accelerated failure time models with the Lognormal, Loglogistic, Gompertz, Exponential, Extreme Value, and Weibull distributions.
* Proportional hazards models for occurrence/exposure data, useful with tabular and register-based data, often with a huge number of observed events.
* Special treatments of external communal covariates, selections from the Lexis diagram, and creating period as well as cohort statistics.
* "Weird bootstrap" sampling suitable for Cox regression with small to medium-sized data sets.
* Supported by an R package (https://CRAN.R-project.org/package=eha), including code and data for most examples in the book.
* A dedicated home page for the book at http://ehar.se/r/ehar2
This substantial update to a popular book remains an excellent resource for researchers and practitioners of applied event history and survival analysis. It can be used as a text for a graduate course or for self-study.
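The non-parametric starting point for this kind of event-history analysis is the Kaplan-Meier estimator, sketched below in plain Python; the eha package itself is R, and the data here are illustrative.

```python
# Kaplan-Meier survival estimator: at each event time, multiply the
# running survival probability by the fraction of the risk set that
# survives; censored observations leave the risk set without an event.
def kaplan_meier(times, events):
    """times: observed times; events: 1 = event, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, s = [], 1.0
    for t, d in data:
        if d == 1:
            s *= (n_at_risk - 1) / n_at_risk
        surv.append((t, s))
        n_at_risk -= 1     # events and censorings both leave the risk set
    return surv

curve = kaplan_meier([2, 5, 3, 8], [1, 1, 0, 1])
```

The censored observation at time 3 does not drop the curve, but it does shrink the risk set, which is why the later event at time 5 produces a larger step than it would under complete follow-up.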
Thoroughly updated throughout, this second edition continues to focus on practical methods of statistical application for engineers, as well as for scientists and those in business. It remains a what-I-wish-I-had-known-when-starting-my-career compilation of techniques. In contrast to the mathematical and abstract orientation of many statistics texts, which reflects the values of researchers, this book focuses on application to concrete examples and the interpretation of outcomes. In support of sound application, it also presents the fundamental concepts, provides supporting derivations, and includes frequent do-and-don't notes. Key Features:
* Contains details of the computation for the examples.
* Includes new examples and exercises.
* Includes expanded topics supporting data analysis.
The book is for upper-level undergraduate or graduate students in engineering, the hard sciences, or business programs. The intent is that the text will continue to be useful in professional life, and appropriate as a self-learning tool after graduation, whether in graduate school or in professional practice.
* Covers deep learning fundamentals.
* Focuses on applications.
* Covers human emotion analysis with deep learning.
* Explains how to use web-based techniques for deep learning applications.
* Includes coverage of autonomous vehicles and deep learning.
Public-private partnerships (PPPs or 3Ps) allow the public sector to seek alternative funding and expertise from the private sector during procurement processes. Such partnerships, if executed with due diligence, often benefit the public immensely. Unfortunately, public-private partnerships can be vulnerable to corruption. This book looks at what measures we can put in place to check corruption during procurement, and what good governance strategies the public sector can adopt to improve the performance of 3Ps. The book applies mathematical models to analyze 3Ps. It uses game theory to study the interaction and dynamics between the stakeholders and suggests strategies to reduce corruption risks in the various stages of a 3P. The authors explain, through game-theory-based simulation, how governments can adopt an evaluation process at the start of each procurement to weed out undesirable private partners, and why the government should take a more proactive approach. Using a methodological framework rooted in mathematical models to illustrate how we can combat institutional corruption, this book is a helpful reference for anyone interested in public policymaking and public infrastructure management.
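The flavor of such game-theoretic analysis can be sketched with a hypothetical 2x2 inspection game between a government and a contractor; the payoffs below are illustrative only and are not taken from the book's models.

```python
# Enumerate pure-strategy Nash equilibria of a hypothetical 2x2
# government-vs-contractor inspection game (illustrative payoffs).
# Strategies: government in {inspect, trust}; contractor in {honest, corrupt}.
payoffs = {
    # (gov, contractor): (gov payoff, contractor payoff)
    ("inspect", "honest"):  (-1, 2),   # inspection cost, honest profit
    ("inspect", "corrupt"): (1, -3),   # corruption caught and fined
    ("trust",   "honest"):  (2, 2),    # best joint outcome
    ("trust",   "corrupt"): (-4, 4),   # undetected corruption
}
gov_moves = ["inspect", "trust"]
con_moves = ["honest", "corrupt"]

def is_nash(g, c):
    # Neither player can gain by unilaterally deviating
    pg, pc = payoffs[(g, c)]
    best_g = all(payoffs[(g2, c)][0] <= pg for g2 in gov_moves)
    best_c = all(payoffs[(g, c2)][1] <= pc for c2 in con_moves)
    return best_g and best_c

equilibria = [(g, c) for g in gov_moves for c in con_moves if is_nash(g, c)]
```

This toy game has no pure-strategy equilibrium: each pure profile gives someone an incentive to deviate. That is the classic motivation for randomized (mixed-strategy) audit policies of the kind game-theoretic procurement analysis recommends.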
This book presents the fundamental concepts of optimization problems and their real-world applications in various fields. The core concepts, formulations, and solution procedures of various real-world optimization problems are provided in an easy-to-read manner. Its unique feature is that it presents unified knowledge of the modelling of real-world decision-making problems and provides the solution procedure using the appropriate optimization technique. The book will help students, researchers, and faculty members understand the need for optimization techniques for obtaining optimal solutions to decision-making problems, and it provides sound knowledge of modelling real-world problems. It is a valuable compendium of several optimization techniques for solving real-world application problems using the optimization software LINGO. Written in simple language, with a detailed explanation of the core concepts, the book is useful for academicians, practitioners, students, and researchers in the field of OR, and will enable readers to understand the formulation of real-world problems and the solution procedures obtained using the appropriate techniques.
Measurement error arises ubiquitously in applications and has been of long-standing concern in a variety of fields, including medical research, epidemiological studies, economics, environmental studies, and survey research. While several research monographs are available to summarize methods and strategies of handling different measurement error problems, research in this area continues to attract extensive attention. The Handbook of Measurement Error Models provides overviews of various topics on measurement error problems. It collects carefully edited chapters concerning issues of measurement error and evolving statistical methods, with a good balance of methodology and applications. It is prepared for readers who wish to start research and gain insights into challenges, methods, and applications related to error-prone data. It also serves as a reference text on statistical methods and applications pertinent to measurement error models, for researchers and data analysts alike. Features:
* Provides an account of past developments and modern advances concerning measurement error problems.
* Highlights the challenges induced by error-contaminated data.
* Introduces off-the-shelf methods for mitigating the deleterious impacts of measurement error.
* Describes state-of-the-art strategies for conducting in-depth research.
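A classic phenomenon that measurement error methods address, attenuation of a regression slope under classical error, can be shown in a short simulation together with the textbook reliability-ratio correction; the setup is illustrative and not the handbook's notation.

```python
import numpy as np

# Classical measurement error w = x + u attenuates the regression slope
# of y on x by the reliability ratio lambda = var(x) / var(w).
rng = np.random.default_rng(2)
n = 200_000
beta = 1.5

x = rng.normal(size=n)                    # true covariate, variance 1
w = x + rng.normal(size=n)                # error-prone measurement, variance 2
y = beta * x + rng.normal(size=n)

slope_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
reliability = np.var(x, ddof=1) / np.var(w, ddof=1)   # about 0.5 here
slope_corrected = slope_naive / reliability           # regression-calibration-style fix
```

With a reliability ratio of one half, the naive slope lands near 0.75 rather than the true 1.5; dividing by the reliability ratio (here known from the simulation, in practice estimated from replicate or validation data) undoes the attenuation.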
Focusing on the importance of the application of statistical techniques, this book covers the design of experiments and stochastic modeling in textile engineering. Textile Engineering: Statistical Techniques, Design of Experiments and Stochastic Modeling focuses on the analysis and interpretation of textile data for improving the quality of textile processes and products using various statistical techniques. Features:
* Explores probability, random variables, probability distributions, estimation, significance tests, ANOVA, acceptance sampling, control charts, regression and correlation, design of experiments, and stochastic modeling pertaining to textiles.
* Presents step-by-step mathematical derivations.
* Includes MATLAB® codes for solving various numerical problems.
* Provides case studies, practical examples, and homework problems in each chapter.
This book is aimed at graduate students, researchers, and professionals in textile engineering, textile clothing, textile management, and industrial engineering. It is equally useful for learners and practitioners in other scientific and technological domains.
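One of the simplest of the listed tools, the Shewhart x-bar control chart, reduces to a pair of three-sigma limits for subgroup means; the process parameters below are illustrative, not from the book.

```python
import math

# Shewhart x-bar chart limits: the mean of a subgroup of size n has
# standard deviation sigma / sqrt(n), so the control limits sit three
# of those standard deviations either side of the process mean.
def xbar_limits(process_mean, process_sd, subgroup_size):
    margin = 3 * process_sd / math.sqrt(subgroup_size)
    return process_mean - margin, process_mean + margin

lcl, ucl = xbar_limits(process_mean=20.0, process_sd=2.0, subgroup_size=4)
```

A subgroup mean falling outside (lcl, ucl) signals that the process, for example a yarn-strength line, may have drifted out of statistical control.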
* Provides an overview and background of cost-effectiveness analysis and how it is used.
* Discusses cost-effectiveness in relation to systems engineering.
* Links cost-effectiveness with military issues and problems.
* Explores the use of cost-effectiveness as it relates to systems architecting and the re-engineering of office systems.
* Compares cost-effectiveness analysis to everyday purchasing decisions, from small home devices such as phones to large ones such as automobiles.
* Extensive code examples in R, Stata, and Python.
* Chapters on topics overlooked in econometrics classes: heterogeneous treatment effects, simulation and power analysis, new cutting-edge methods, and uncomfortable ignored assumptions.
* An easy-to-read conversational tone.
* Up-to-date coverage of methods with fast-moving literatures, like difference-in-differences.
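The canonical 2x2 difference-in-differences estimator behind that fast-moving literature fits in a few lines; the group means below are made up for illustration.

```python
# 2x2 difference-in-differences: the treated group's pre-to-post change,
# minus the control group's change over the same period, estimates the
# treatment effect under the parallel-trends assumption.
means = {
    ("treated", "pre"):  10.0,
    ("treated", "post"): 15.0,
    ("control", "pre"):  8.0,
    ("control", "post"): 11.0,
}
did = (means[("treated", "post")] - means[("treated", "pre")]) \
    - (means[("control", "post")] - means[("control", "pre")])
```

Here the treated group rose by 5 and the control group by 3, so the estimated effect is 2; everything beyond this arithmetic, staggered adoption, heterogeneous effects, inference, is what the modern literature wrestles with.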
Data science is an emerging field, and its innovations need to be explored for the success of Society 5.0. This book not only focuses on the practical applications of data science to achieve computational excellence, but also digs deep into the issues and implications of intelligent systems. It highlights innovations in data science that can optimize the performance of smart applications, and focuses on the methodologies, frameworks, design issues, tools, architectures, and technologies necessary to develop and understand data science and its emerging applications in the present era. Data Science and Innovations for Intelligent Systems: Computational Excellence and Society 5.0 is useful for the research community, start-up entrepreneurs, academicians, data-centered industries, and professionals who are interested in exploring innovations in the varied applications and areas of data science.