Mathematical models translate real-life problems into mathematical concepts and language. These models are governed by differential equations whose solutions make real-life problems easier to understand and can be applied across engineering and science disciplines. This book presents numerical methods for solving various mathematical models, offers real-life applications, includes research problems on numerical treatment, and shows how to develop numerical methods for solving problems. It also covers theory and applications in engineering and science. Engineers, mathematicians, scientists, and researchers working on real-life mathematical problems will find this book useful.
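For a flavor of the kind of numerical treatment described here, below is a minimal sketch of the explicit Euler scheme applied to a logistic-growth equation. The model, parameter values, and step count are illustrative choices, not taken from the book.

```python
# Minimal sketch: explicit Euler scheme for a logistic-growth model
# dP/dt = r * P * (1 - P/K). Model and parameters are illustrative.

def euler(f, y0, t0, t1, n_steps):
    """Integrate dy/dt = f(t, y) from t0 to t1 with n_steps Euler steps."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    trajectory = [(t, y)]
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
        trajectory.append((t, y))
    return trajectory

r, K = 0.5, 100.0  # growth rate and carrying capacity (made up)
logistic = lambda t, P: r * P * (1 - P / K)
path = euler(logistic, y0=5.0, t0=0.0, t1=20.0, n_steps=200)
print(path[-1])  # population after 20 time units
```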
Most applications generate large datasets: social networking and social influence programs, smart city applications, smart house environments, Cloud applications, public websites, scientific experiments and simulations, data warehouses, monitoring platforms, and e-government services. Data grows rapidly, since applications produce continuously increasing volumes of both unstructured and structured data. Large-scale interconnected systems aim to aggregate and efficiently exploit the power of widely distributed resources. In this context, major solutions for scalability, mobility, reliability, fault tolerance, and security are required to achieve high performance and to create a smart environment. For data processing, transfer, and storage, the consequence is a need to re-evaluate approaches and solutions to better meet user needs. A variety of solutions exist for specific applications and platforms, so a thorough and systematic analysis of existing solutions for data science, data analytics, and the methods and algorithms used in Big Data processing and storage environments is essential when designing and implementing a smart environment. Fundamental issues pertaining to smart environments (smart cities, ambient assisted living, smart houses, greenhouses, cyber-physical systems, etc.) are reviewed. Most current efforts still do not adequately address the heterogeneity of different distributed systems, the interoperability between them, or their resilience. This book primarily encompasses practical approaches that promote research in all aspects of data processing and data analytics across different types of systems: Cluster Computing, Grid Computing, Peer-to-Peer, and Cloud/Edge/Fog Computing, all of which involve elements of heterogeneity and a large variety of tools and software to manage them. The main role of resource management techniques in this domain is to create suitable frameworks for developing and deploying applications in smart environments while maintaining high performance. The book focuses on topics covering algorithms, architectures, management models, high performance computing techniques, and large-scale distributed systems.
This book examines Multi-Criteria Decision Modelling (MCDM) methodologies and facilitates diverse approaches to strategic decision-making in a variety of practical applications. It also provides a pragmatic foundation for solving real-life problems in different scenarios of emerging global markets. Multi-Criteria Decision Modelling: Applicational Techniques and Case Studies depicts the use of sensitivity analysis and modelling and includes case studies to understand and illustrate challenging concepts. It also offers step-by-step, comprehensive methodologies for applying MCDM to a variety of situations. The book discusses ways for companies to use these methods to their advantage in order to achieve sustainability. Furthermore, it presents an overview of the major streams of thought and provides a holistic view of the latest research and development trends in modelling and optimization. FEATURES Offers a stepwise, comprehensive methodology for the application of MCDM to a variety of situations Presents an overview of the major streams of thought present in the MCDM technique Provides a holistic view of the latest research and development trends in the emerging markets in terms of modelling and optimization using MCDM for different industrial sectors Illuminates a practical foundation in order to provide a guide to addressing the problems of emerging markets Shows how companies can use these methods to their advantage in order to achieve sustainability This book is a guide for those performing decision analysis for academic purposes as well as for researchers aspiring to expand their knowledge of MCDM problem solving.
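To make the MCDM idea concrete, here is a minimal sketch of TOPSIS, one widely used MCDM method (the book covers many more). The alternatives, criteria, and weights below are hypothetical.

```python
import numpy as np

# Minimal TOPSIS sketch: rank alternatives by closeness to the ideal
# solution. Decision matrix, weights, and criteria are made up.

X = np.array([[250., 16., 12.],    # alternative A: cost, quality, delivery
              [200., 20., 8.],     # alternative B
              [300., 11., 16.]])   # alternative C
weights = np.array([0.4, 0.4, 0.2])
benefit = np.array([False, True, True])  # cost is to be minimized

V = weights * X / np.linalg.norm(X, axis=0)   # normalize and weight
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_best  = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti, axis=1)
closeness = d_worst / (d_best + d_worst)      # higher is better
print(closeness.argsort()[::-1])              # ranking of alternatives
```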
Dynamic Treatment Regimes: Statistical Methods for Precision Medicine provides a comprehensive introduction to statistical methodology for the evaluation and discovery of dynamic treatment regimes from data. Researchers and graduate students in statistics, data science, and related quantitative disciplines with a background in probability and statistical inference and popular statistical modeling techniques will be prepared for further study of this rapidly evolving field. A dynamic treatment regime is a set of sequential decision rules, each corresponding to a key decision point in a disease or disorder process, where each rule takes as input patient information and returns the treatment option he or she should receive. Thus, a treatment regime formalizes how a clinician synthesizes patient information and selects treatments in practice. Treatment regimes are of obvious relevance to precision medicine, which involves tailoring treatment selection to patient characteristics in an evidence-based way. Of critical importance to precision medicine is estimation of an optimal treatment regime, one that, if used to select treatments for the patient population, would lead to the most beneficial outcome on average. Key methods for estimation of an optimal treatment regime from data are motivated and described in detail. A dedicated companion website presents full accounts of application of the methods using a comprehensive R package developed by the authors. The authors' website www.dtr-book.com includes updates, corrections, new papers, and links to useful websites.
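For a concrete, highly simplified illustration of estimating an optimal regime, here is a sketch of regression-based Q-learning at a single decision point on simulated data. The book's methods and its companion R package go far beyond this, and nothing here reproduces the authors' code.

```python
import numpy as np

# Sketch of regression-based Q-learning for one decision point.
# Simulated data: one covariate, randomized binary treatment.

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                    # patient covariate
a = rng.integers(0, 2, size=n)            # treatment, randomized 0/1
y = 1 + 0.5*x + a*(1.0 - 2.0*x) + rng.normal(size=n)  # larger y = better

# Q-model: E[Y | X, A] = b0 + b1*X + A*(b2 + b3*X)
design = np.column_stack([np.ones(n), x, a, a*x])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

# Estimated optimal rule: treat (A=1) when the treatment contrast is positive
def optimal_rule(x_new, b=beta):
    return (b[2] + b[3]*x_new > 0).astype(int)

print(optimal_rule(np.array([-1.0, 0.0, 1.0])))  # rule for three patients
```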
Features: extensive code examples in R, Stata, and Python; chapters on topics overlooked in econometrics classes, such as heterogeneous treatment effects, simulation and power analysis, new cutting-edge methods, and uncomfortable, ignored assumptions; an easy-to-read conversational tone; and up-to-date coverage of methods with fast-moving literatures, like difference-in-differences.
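As a small illustration of the simulation-and-power-analysis topic mentioned above, here is a sketch of simulation-based power estimation for a two-sample t-test. The effect size, sample size, and significance level are illustrative.

```python
import numpy as np
from scipy import stats

# Sketch: estimate power by simulating many trials and counting rejections.
# Effect size, n per arm, alpha, and simulation count are illustrative.

rng = np.random.default_rng(1)
n_per_arm, effect, alpha, n_sims = 50, 0.5, 0.05, 2000
rejections = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(effect, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    rejections += p < alpha
print(f"estimated power: {rejections / n_sims:.2f}")
```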
Little known to many, R works just as well with JavaScript; this book delves into the various ways the two languages can work together. The ultimate aim of this work is to put readers at ease with inviting JavaScript into their data science workflow. In that respect, the book does not teach JavaScript as such; rather, it shows how a little JavaScript can greatly support and enhance R code. The focus is therefore on integrating external JavaScript libraries, and no prior knowledge of JavaScript is required. Key Features: Easy to pick up. An entryway to learning JavaScript for R. Covers topics not covered anywhere else. Easy to follow along.
Primarily aimed at researchers and postgraduates, but may be of interest to some professionals working in related fields, such as the insurance industry. Suitable as supplementary reading for a standard course in applied probability. Requires minimal prerequisites in mathematical analysis and probability theory.
Data Analytics and Visualization in Quality Analysis using Tableau goes beyond existing quality statistical analysis. It helps quality practitioners perform effective quality control and analysis using Tableau, a user-friendly data analytics and visualization software package. It begins with a basic introduction to quality analysis with Tableau, including the factors that differentiate it from other platforms. This is followed by a description of the features and functions of quality analysis tools, with step-by-step instructions on how to use Tableau. Quality analysis through Tableau is then explained in five case studies based on open-source data. Lastly, the book systematically describes the implementation of quality analysis through Tableau in an actual workplace via a dashboard example. Features: Describes a step-by-step method in Tableau to effectively apply data visualization techniques in quality analysis Focuses on a visualization approach for practical quality analysis Provides comprehensive coverage of quality analysis topics using state-of-the-art concepts and applications Illustrates pragmatic implementation methodology and instructions applicable to real-world and business cases Includes examples of ready-to-use templates of customizable Tableau dashboards This book is aimed at professionals, graduate students, and senior undergraduate students in industrial systems and quality engineering, process engineering, systems engineering, quality control, quality assurance, and quality analysis.
Geometric Data Analysis designates the approach of Multivariate Statistics that conceptualizes the set of observations as a Euclidean cloud of points. Combinatorial Inference in Geometric Data Analysis gives an overview of multidimensional statistical inference methods applicable to clouds of points that make no assumptions about the data-generating process or distributions, and that are not based on random modelling but on permutation procedures recast in a combinatorial framework. It focuses particularly on the comparison of a group of observations to a reference population (combinatorial test) or to a reference value of a location parameter (geometric test), and on problems of homogeneity, that is, the comparison of several groups, for two basic designs. These methods involve the use of combinatorial procedures to build a reference set in which we place the data. The chosen test statistics lead to original extensions, such as the geometric interpretation of the observed level and the construction of a compatibility region. Features: Defines precisely the object under study in the context of multidimensional procedures, that is, clouds of points Presents combinatorial tests and related computations with R and Coheris SPAD software Includes four original case studies to illustrate application of the tests Includes the necessary mathematical background to ensure it is self-contained This book is suitable for researchers and students of multivariate statistics, as well as applied researchers of various scientific disciplines. It could be used for a specialized course taught at either master's or PhD level.
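As a tiny taste of the combinatorial (permutation) approach, here is a sketch of a two-group permutation test on the distance between the mean points of two clouds, using simulated data. The book works with R and Coheris SPAD, so this Python fragment is purely illustrative.

```python
import numpy as np

# Permutation test sketch: compare two clouds of points in R^2 via the
# distance between their mean points. Data are simulated.

rng = np.random.default_rng(2)
g1 = rng.normal(0.0, 1.0, size=(20, 2))   # cloud of 20 points
g2 = rng.normal(0.4, 1.0, size=(25, 2))   # cloud of 25 points

def stat(a, b):
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))

observed = stat(g1, g2)
pooled = np.vstack([g1, g2])
n1, n_perm, count = len(g1), 5000, 0
for _ in range(n_perm):
    perm = rng.permutation(len(pooled))   # random relabeling of points
    if stat(pooled[perm[:n1]], pooled[perm[n1:]]) >= observed:
        count += 1
print("observed level (p-value):", (count + 1) / (n_perm + 1))
```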
Bayesian Statistical Methods provides data scientists with the foundational and computational tools needed to carry out a Bayesian analysis. This book focuses on Bayesian methods applied routinely in practice including multiple linear regression, mixed effects models and generalized linear models (GLM). The authors include many examples with complete R code and comparisons with analogous frequentist procedures. In addition to the basic concepts of Bayesian inferential methods, the book covers many general topics: Advice on selecting prior distributions Computational methods including Markov chain Monte Carlo (MCMC) Model-comparison and goodness-of-fit measures, including sensitivity to priors Frequentist properties of Bayesian methods Case studies covering advanced topics illustrate the flexibility of the Bayesian approach: Semiparametric regression Handling of missing data using predictive distributions Priors for high-dimensional regression models Computational techniques for large datasets Spatial data analysis The advanced topics are presented with sufficient conceptual depth that the reader will be able to carry out such analysis and argue the relative merits of Bayesian and classical methods. A repository of R code, motivating data sets, and complete data analyses are available on the book's website. Brian J. Reich, Associate Professor of Statistics at North Carolina State University, is currently the editor-in-chief of the Journal of Agricultural, Biological, and Environmental Statistics and was awarded the LeRoy & Elva Martin Teaching Award. Sujit K. Ghosh, Professor of Statistics at North Carolina State University, has over 22 years of research and teaching experience in conducting Bayesian analyses, received the Cavell Brownie mentoring award, and served as the Deputy Director at the Statistical and Applied Mathematical Sciences Institute.
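To illustrate the MCMC machinery at its simplest, here is a sketch of a random-walk Metropolis sampler for the posterior of a normal mean with known variance. The prior, proposal scale, and data are illustrative assumptions, and the book's own examples are in R.

```python
import numpy as np

# Random-walk Metropolis sketch: posterior of a normal mean, known
# variance 1, N(0, 10^2) prior. Data simulated; tuning is illustrative.

rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.0, size=30)

def log_post(mu):
    return -0.5 * np.sum((y - mu) ** 2) - 0.5 * (mu / 10.0) ** 2

mu, draws = 0.0, []
for _ in range(10000):
    prop = mu + rng.normal(0.0, 0.5)              # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                  # accept
    draws.append(mu)
post = np.array(draws[2000:])                      # drop burn-in
print(post.mean(), np.quantile(post, [0.025, 0.975]))
```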
The second edition of Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on new developments and on computational aspects. There are many numerical examples and notes on the R environment, and the updated chapter on the multivariate model contains additional material on visualization of multivariate data in R. A new chapter on robust procedures in measurement error models concentrates mainly on rank procedures, which are less sensitive to such errors than other procedures. This book will be an invaluable resource for researchers and postgraduate students in statistics and mathematics. Features * Provides a systematic, practical treatment of robust statistical methods * Offers a rigorous treatment of the whole range of robust methods, including sequential versions of estimators and their moment convergence, and compares their asymptotic and finite-sample behavior * The extended account of multivariate models includes the admissibility, shrinkage effects, and unbiasedness of two-sample tests * Illustrates the low sensitivity of rank procedures in the measurement error model * Emphasizes computational aspects, supplies many examples and illustrations, and provides the authors' own R procedures on the book's website
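For a flavor of what a robust procedure does, here is a minimal sketch of a Huber M-estimator of location computed by iteratively reweighted least squares. The tuning constant 1.345 is the conventional choice for 95% efficiency under normality; the data are made up, and since the book's procedures are in R, this Python fragment is illustrative only.

```python
import numpy as np

# Huber M-estimator of location via iteratively reweighted least squares.
# Scale is fixed at the MAD estimate; data are illustrative.

def huber_location(y, c=1.345, tol=1e-8, max_iter=100):
    mu = np.median(y)                                   # robust start
    scale = np.median(np.abs(y - mu)) / 0.6745          # MAD scale
    for _ in range(max_iter):
        r = (y - mu) / scale
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))  # Huber weights
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

y = np.array([2.1, 1.9, 2.4, 2.0, 2.2, 9.5])  # one gross outlier
print(huber_location(y), y.mean())             # robust vs. non-robust
```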
Data Stewardship for Open Science: Implementing FAIR Principles has been written with the intention of making scientists, funders, and innovators in all disciplines and at all stages of their professional activities broadly aware of the need, complexity, and challenges associated with open science, modern science communication, and data stewardship. The FAIR principles are used as a guide throughout the text, and this book should leave experimentalists consciously incompetent about data stewardship and motivated to respect data stewards as representatives of a new profession, while possibly motivating others to consider a career in the field. The ebook, available at no additional cost when you buy the paperback, will be updated every six months on average (provided that significant updates are needed or available). Readers will have the opportunity to contribute material towards these updates, and to develop their own data management plans, via the free Data Stewardship Wizard.
This book studies R. Buckminster Fuller's World Game and similar world games, past and present. Proposed by Fuller in 1964 and first played in colleges and universities across North America at a time of growing ecological crisis, the World Game attempted to turn data analysis, systems modelling, scenario building, computer technology, and information design to more egalitarian ends to meet human needs. It challenged players to redistribute finite planetary resources more equitably, to 'make the world work'. Criticised and lauded in equal measure, the World Game has evolved through several formats and continues today in correspondence with debates on planetary stewardship, gamification, data management, and the democratic deficit. This book looks again at how the World Game has been played, focusing on its architecture, design, and gameplay. With hindsight, the World Game might appear naive, utopian, or technocratic, but we share its problems, if not necessarily its solutions. Such a study will be of interest to scholars working in art history, design history, game studies, media studies, architecture, and the environmental humanities.
A Journey into Open Science and Research Transparency in Psychology introduces the open science movement from psychology through a narrative that integrates song lyrics, national parks, and concerns about diversity, social justice, and sustainability. Along the way, readers receive practical guidance on how to plan and share their research, matching the ideals of scientific transparency. This book considers all the fundamental topics related to the open science movement, including: (a) causes of and responses to the Replication Crisis, (b) crowdsourcing and meta-science research, (c) preregistration, (d) statistical approaches, (e) questionable research practices, (f) research and publication ethics, (g) connections to career topics, (h) finding open science resources, (i) how open science initiatives promote diverse, just, and sustainable outcomes, and (j) the path moving forward. Each topic is introduced using terminology and language aimed at intermediate-level college students who have completed research methods courses. But the book invites all readers to reconsider their research approach and join the Scientific Revolution 2.0. Each chapter describes the associated content and includes exercises intended to help readers plan, conduct, and share their research. This short book is intended as a supplemental text for research methods courses or just a fun and informative exploration of the fundamental topics associated with the Replication Crisis in psychology and the resulting movement to increase scientific transparency in methods.
Analysis of Variance, Design, and Regression: Linear Modeling for Unbalanced Data, Second Edition presents linear structures for modeling data with an emphasis on how to incorporate specific ideas (hypotheses) about the structure of the data into a linear model for the data. The book carefully analyzes small data sets by using tools that are easily scaled to big data. The tools also apply to small relevant data sets that are extracted from big data. New to the Second Edition Reorganized to focus on unbalanced data Reworked balanced analyses using methods for unbalanced data Introductions to nonparametric and lasso regression Introductions to general additive and generalized additive models Examination of homologous factors Unbalanced split plot analyses Extensions to generalized linear models R, Minitab®, and SAS code on the author's website The text can be used in a variety of courses, including a yearlong graduate course on regression and ANOVA or a data analysis course for upper-division statistics students and graduate students from other fields. It places a strong emphasis on interpreting the range of computer output encountered when dealing with unbalanced data.
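As a minimal illustration of analyzing unbalanced data, here is a sketch of a two-factor fit with Type II sums of squares using statsmodels. The simulated data frame is deliberately unbalanced, and the choice of Type II tests is one convention among several; the book's own code is in R, Minitab, and SAS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Two-factor analysis of an unbalanced layout with Type II sums of
# squares. The data below are simulated, with unequal cell sizes.

rng = np.random.default_rng(4)
a = rng.choice(["a1", "a2"], size=40, p=[0.7, 0.3])   # unbalanced factor
b = rng.choice(["b1", "b2", "b3"], size=40)
y = 1.0 + (a == "a2") * 0.8 + rng.normal(size=40)
df = pd.DataFrame({"y": y, "a": a, "b": b})

fit = smf.ols("y ~ C(a) * C(b)", data=df).fit()
print(anova_lm(fit, typ=2))   # tests suited to unbalanced data
```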
A First Step toward a Unified Theory of Richly Parameterized Linear Models Using mixed linear models to analyze data often leads to results that are mysterious, inconvenient, or wrong. Further compounding the problem, statisticians lack a cohesive resource to acquire a systematic, theory-based understanding of models with random effects. Richly Parameterized Linear Models: Additive, Time Series, and Spatial Models Using Random Effects takes a first step in developing a full theory of richly parameterized models, which would allow statisticians to better understand their analysis results. The author examines what is known and unknown about mixed linear models and identifies research opportunities. The first two parts of the book cover an existing syntax for unifying models with random effects. The text explains how richly parameterized models can be expressed as mixed linear models and analyzed using conventional and Bayesian methods. In the last two parts, the author discusses oddities that can arise when analyzing data using these models. He presents ways to detect problems and, when possible, shows how to mitigate or avoid them. The book adapts ideas from linear model theory and then goes beyond that theory by examining the information in the data about the mixed linear model's covariance matrices. Each chapter ends with two sets of exercises. Conventional problems encourage readers to practice with the algebraic methods and open questions motivate readers to research further. Supporting materials, including datasets for most of the examples analyzed, are available on the author's website.
A First Course in Ergodic Theory provides readers with an introductory course in Ergodic Theory. This textbook has been developed from the authors' own notes on the subject, which they have been teaching since the 1990s. Over the years they have added topics, theorems, examples, and explanations from various sources. The result is a book that is easy to teach from and easy to learn from, designed to require only minimal prerequisites. Features Suitable for readers with only a basic knowledge of measure theory, some topology, and a very basic knowledge of functional analysis Perfect as the primary textbook for a course in Ergodic Theory Examples are described and studied in detail when new properties are presented.
Are you buying a car, a smartphone, or a dishwasher? We bet long-term, trouble-free operation (i.e., high reliability) is among the top three things you look for. Reliability problems can lead to everything from minor inconveniences to human disasters. Ensuring high reliability in designing and building manufactured products is principally an engineering challenge, but statistics plays a key role. Achieving Product Reliability explains in a non-technical manner how statistics is used in modern product reliability assurance. Features: Describes applications of statistics in reliability assurance in design, development, validation, manufacturing, and field tracking. Uses real-life examples to illustrate key statistical concepts such as the Weibull and lognormal distributions, hazard rate, and censored data. Demonstrates the use of graphical tools in such areas as accelerated testing, degradation data modeling, and repairable systems data analysis. Presents opportunities for profitably applying statistics in the era of Big Data and the Industrial Internet of Things (IIoT), utilizing, for example, the instantaneous transmission of large quantities of field data. Whether you are an intellectually curious citizen, student, manager, budding reliability professional, or academician seeking practical applications, Achieving Product Reliability is a great starting point for a big-picture view of statistics in reliability assurance. The authors are world-renowned experts on this topic with extensive experience as company-wide statistical resources for a global conglomerate, consultants to business and government, and researchers of statistical methods for reliability applications.
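To show how the Weibull distribution and censored data fit together in practice, here is a minimal sketch of maximum-likelihood fitting of a Weibull model to right-censored lifetimes. The test times and censoring pattern are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Weibull MLE with right censoring: observed units contribute the density,
# censored units contribute the survival function. Data are illustrative.

t = np.array([105., 230., 310., 412., 500., 500., 500.])  # hours on test
event = np.array([1, 1, 1, 1, 0, 0, 0], dtype=bool)       # False = censored

def neg_loglik(params):
    log_k, log_lam = params                 # optimize on the log scale
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = t / lam
    ll_obs = np.log(k / lam) + (k - 1) * np.log(z) - z ** k
    ll_cens = -z ** k                       # log survival for censored units
    return -(ll_obs[event].sum() + ll_cens[~event].sum())

res = minimize(neg_loglik, x0=[0.0, np.log(t.mean())])
k_hat, lam_hat = np.exp(res.x)
print(f"shape {k_hat:.2f}, scale {lam_hat:.0f} hours")
```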
Mendelian Randomization: Methods for Causal Inference Using Genetic Variants provides thorough coverage of the methods and practical elements of Mendelian randomization analysis. It brings together diverse aspects of Mendelian randomization from the fields of epidemiology, statistics, genetics, and bioinformatics. Through multiple examples, the first part of the book introduces the reader to the concept of Mendelian randomization, showing how to perform simple Mendelian randomization investigations and interpret the results. The second part of the book addresses specific methodological issues relevant to the practice of Mendelian randomization, including robust methods, weak instruments, multivariable methods, and power calculations. The authors present the theoretical aspects of these issues in an easy-to-understand way by using non-technical language. The last part of the book examines the potential for Mendelian randomization in the future, exploring both methodological and applied developments. Features Offers first-hand, in-depth guidance on Mendelian randomization from leaders in the field Makes the diverse aspects of Mendelian randomization understandable to newcomers Illustrates technical details using data from applied analyses Discusses possible future directions for research involving Mendelian randomization Software code is provided in the relevant chapters and is also available at the supplementary website This book gives epidemiologists, statisticians, geneticists, and bioinformaticians the foundation to understand how to use genetic variants as instrumental variables in observational data. New in Second Edition: The second edition of the book has been substantially re-written to reduce the amount of technical content and emphasize practical consequences of theoretical issues. Extensive material on the use of two-sample Mendelian randomization and publicly available summarized data has been added. The book now includes several real-world examples that show how Mendelian randomization can be used to address questions of disease aetiology, target validation, and drug development.
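Here is a minimal sketch of the inverse-variance weighted (IVW) estimator for two-sample Mendelian randomization with summarized data, one of the basic methods in this area. The per-variant association estimates below are made up for illustration, and the book's own software code should be consulted for real analyses.

```python
import numpy as np

# IVW estimator from summarized two-sample MR data: combine per-variant
# ratio estimates, weighting by the precision of the outcome associations.
# All numbers below are invented.

bx = np.array([0.12, 0.08, 0.15, 0.10])       # variant-exposure effects
by = np.array([0.030, 0.018, 0.041, 0.022])   # variant-outcome effects
se = np.array([0.010, 0.012, 0.011, 0.009])   # SEs of outcome effects

w = bx ** 2 / se ** 2
beta_ivw = np.sum(bx * by / se ** 2) / np.sum(w)  # causal effect estimate
se_ivw = np.sqrt(1.0 / np.sum(w))
print(f"IVW estimate {beta_ivw:.3f} (SE {se_ivw:.3f})")
```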
Signal Detection for Medical Scientists: Likelihood Ratio Test Based Methodology presents data mining techniques with a focus on likelihood ratio test (LRT) based methods for signal detection. It emphasizes the computational aspects of the LRT methodology and is pertinent for first-time researchers and graduate students venturing into this interesting field. The book is written as a reference for professionals in the pharmaceutical industry, manufacturers of medical devices, and regulatory agencies. It deals with signal detection in drug/device evaluation, which is important in the post-market evaluation of medical products and in pre-market signal detection during clinical trials for monitoring procedures. It should also appeal to academic researchers and faculty members in mathematics, statistics, biostatistics, data science, pharmacology, engineering, epidemiology, and public health. The book is therefore well suited for both research and teaching. Key Features: Includes a balanced discussion of the art of data structure, issues in signal detection, statistical methods and analytics, and implementation of the methods. Provides a comprehensive summary of the LRT methods for signal detection, including the basic theory and extensions for varying datasets that may be large post-market data or pre-market clinical trial data. Contains details of the scientific background, statistical methods, and associated algorithms, so that a reader can quickly master the material and apply the methods to one's own problems.
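Below is a sketch of a likelihood ratio test for disproportionate reporting of one adverse event for one drug versus all other drugs, a deliberately simplified version of the LRT signal-detection idea; the book's methods also handle the multiplicity of scanning many drug-event pairs. The report counts are made up.

```python
import numpy as np
from scipy import stats

# Simplified LRT for one drug-event pair: compare event rates on the drug
# of interest vs. all other drugs under a binomial model. Counts invented.

n_event_drug, n_total_drug = 48, 1200       # reports for the drug
n_event_other, n_total_other = 900, 60000   # reports for all other drugs

def binom_loglik(k, n, p):
    return k * np.log(p) + (n - k) * np.log(1 - p)

p1 = n_event_drug / n_total_drug
p0 = n_event_other / n_total_other
p_null = (n_event_drug + n_event_other) / (n_total_drug + n_total_other)

lrt = 2 * (binom_loglik(n_event_drug, n_total_drug, p1)
           + binom_loglik(n_event_other, n_total_other, p0)
           - binom_loglik(n_event_drug, n_total_drug, p_null)
           - binom_loglik(n_event_other, n_total_other, p_null))
print("LRT statistic:", lrt, "p ~", 1 - stats.chi2.cdf(lrt, df=1))
```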
Time Series: A First Course with Bootstrap Starter provides an introductory course on time series analysis that satisfies the triptych of (i) mathematical completeness, (ii) computational illustration and implementation, and (iii) conciseness and accessibility to upper-level undergraduate and M.S. students. Basic theoretical results are presented in a mathematically convincing way, and the methods of data analysis are developed through examples and exercises worked in R. A student with a basic course in mathematical statistics will learn both how to analyze time series and how to interpret the results. The book provides the foundation of time series methods, including linear filters and a geometric approach to prediction. The important paradigm of ARMA models is studied in depth, as well as frequency domain methods. Entropy and other information-theoretic notions are introduced, with applications to time series modeling. The second half of the book focuses on statistical inference, the fitting of time series models, and computational facets of forecasting. Many time series of interest are nonlinear, in which case classical inference methods can fail, but bootstrap methods may come to the rescue. Distinctive features of the book are the emphasis on geometric notions and the frequency domain, the discussion of entropy maximization, and a thorough treatment of recent computer-intensive methods for time series, such as subsampling and the bootstrap. There are more than 600 exercises, half of which involve R coding and/or data analysis. Supplements include a website with 12 key data sets and all R code for the book's examples, as well as solutions to exercises.
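As a small taste of the computer-intensive methods mentioned above, here is a sketch of the moving block bootstrap for the standard error of a time series mean. The AR(1) series and block length are illustrative, and since the book works in R, this Python fragment is only a sketch.

```python
import numpy as np

# Moving block bootstrap for the SE of a time series mean: resample
# overlapping blocks to preserve short-range dependence. Illustrative setup.

rng = np.random.default_rng(5)
n, phi = 200, 0.6
x = np.zeros(n)
for t in range(1, n):                       # simulate an AR(1) series
    x[t] = phi * x[t - 1] + rng.normal()

block_len = 10
n_blocks = n // block_len
starts_all = np.arange(n - block_len + 1)   # all overlapping block starts
boot_means = []
for _ in range(2000):
    starts = rng.choice(starts_all, size=n_blocks)
    resampled = np.concatenate([x[s:s + block_len] for s in starts])
    boot_means.append(resampled.mean())
print("block bootstrap SE of the mean:", np.std(boot_means))
```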
How can major corporations and governments more quickly and accurately detect and address cyberattacks on their networks? How can local authorities improve early detection and prevention of epidemics? How can researchers improve the identification and classification of space objects in difficult (e.g., dim) settings? These questions, among others in dozens of fields, can be addressed using statistical methods of sequential hypothesis testing and changepoint detection. This book considers sequential changepoint detection for very general non-i.i.d. stochastic models, that is, when the observed data is dependent and non-identically distributed. Previous work has primarily focused on changepoint detection with simple hypotheses and single-stream data. This book extends the asymptotic theory of change detection to the case of composite hypotheses as well as for multi-stream data when the number of affected streams is unknown. These extensions are more relevant for practical applications, including in modern, complex information systems and networks. These extensions are illustrated using Markov, hidden Markov, state-space, regression, and autoregression models, and several applications, including near-Earth space informatics and cybersecurity are discussed. This book is aimed at graduate students and researchers in statistics and applied probability who are familiar with complete convergence, Markov random walks, renewal and nonlinear renewal theories, Markov renewal theory, and uniform ergodicity of Markov processes. Key features: Design and optimality properties of sequential hypothesis testing and change detection algorithms (in Bayesian, minimax, pointwise, and other settings) Consideration of very general non-i.i.d. stochastic models that include Markov, hidden Markov, state-space linear and non-linear models, regression, and autoregression models Multiple decision-making problems, including quickest change detection-identification Real-world applications to object detection and tracking, near-Earth space informatics, computer network surveillance and security, and other topics
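For orientation, here is a minimal sketch of the classical CUSUM recursion for detecting a mean shift in an i.i.d. Gaussian stream; this is far simpler than the non-i.i.d., composite-hypothesis, multi-stream settings the book analyzes, and the post-change mean and threshold are illustrative assumptions.

```python
import numpy as np

# One-sided CUSUM for a shift in mean from 0 to mu1 in N(., 1) data.
# Simulated stream with a true change at observation 300.

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.0, 1, 100)])

mu1, threshold = 1.0, 8.0              # assumed post-change mean, alarm level
llr = mu1 * (x - mu1 / 2)              # per-observation log-likelihood ratio
w, alarm = 0.0, None
for t, inc in enumerate(llr):
    w = max(0.0, w + inc)              # CUSUM recursion
    if w > threshold:
        alarm = t
        break
print("alarm raised at observation:", alarm, "(true change at 300)")
```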
Statistical methods that are commonly used in the review and approval process of regulatory submissions are usually referred to as statistics in regulatory science, or regulatory statistics. In a broader sense, statistics in regulatory science can be defined as valid statistics employed in the review and approval process of regulatory submissions of pharmaceutical products. In addition, statistics in regulatory science is involved in the development of regulatory policy and guidance and in research related to critical regulatory clinical initiatives. This book is devoted to the discussion of statistics in regulatory science for pharmaceutical development. It covers practical issues that are commonly encountered in the regulatory science of pharmaceutical research and development, including topics related to research activities, review of regulatory submissions, recent critical clinical initiatives, and policy/guidance development in regulatory science. Devoted entirely to discussing statistics in regulatory science for pharmaceutical development. Reviews critical issues (e.g., endpoint/margin selection and complex innovative designs such as adaptive trial design) in the pharmaceutical development and regulatory approval process. Clarifies controversial statistical issues (e.g., hypothesis testing versus the confidence interval approach, missing data/estimands, multiplicity, and Bayesian design and approach) in the review/approval of regulatory submissions. Proposes innovative thinking regarding study designs and statistical methods (e.g., n-of-1 trial design, adaptive trial design, and a probability monitoring procedure for sample size) for rare disease drug development. Provides insight regarding current regulatory clinical initiatives (e.g., precision/personalized medicine, biomarker-driven targeted clinical trials, model-informed drug development, big data analytics, and real-world data/evidence). This book provides key statistical concepts, innovative designs, and analysis methods that are useful in regulatory science. Also included are some practical, challenging, and controversial issues that are commonly seen in the review and approval process of regulatory submissions. About the author: Shein-Chung Chow, Ph.D. is currently a Professor at Duke University School of Medicine, Durham, NC. He was previously the Associate Director at the Office of Biostatistics, Center for Drug Evaluation and Research, United States Food and Drug Administration (FDA). Dr. Chow has also held various positions in the pharmaceutical industry, such as Vice President at Millennium, Cambridge, MA, Executive Director at Covance, Princeton, NJ, and Director and Department Head at Bristol-Myers Squibb, Plainsboro, NJ. He was elected Fellow of the American Statistical Association and an elected member of the ISI (International Statistical Institute). Dr. Chow is Editor-in-Chief of the Journal of Biopharmaceutical Statistics and the Biostatistics Book Series, Chapman and Hall/CRC Press, Taylor & Francis, New York. Dr. Chow is the author or co-author of over 300 methodology papers and 30 books.
You may like...
Introductory Statistics Achieve access… by Stephen Kokoska (mixed media product, R2,433)
Time Series Analysis - With Applications… by Jonathan D. Cryer, Kung-Sik Chan (hardcover, R2,849)
The Practice of Statistics for Business… by David S Moore, George P. McCabe, … (mixed media product, R2,433)