This monograph provides an in-depth mathematical treatment of modern multiple test procedures controlling the false discovery rate (FDR) and related error measures, particularly addressing applications to fields such as genetics, proteomics, neuroscience and general biology. The book also includes a detailed description of how to implement these methods in practice. Moreover, new developments focusing on non-standard assumptions are included, especially multiple tests for discrete data. The book primarily addresses researchers and practitioners but will also be beneficial for graduate students.
This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and funded by the Science Foundation Ireland under its Mathematics Initiative.
Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems, and research into them has culminated in complete analyses for many of the models used in practice, including linear, generalized linear, mixed, and nonlinear models, in both univariate and multivariate settings. The contributors to this volume include many of the distinguished researchers in this area; many of these scholars have collaborated with Joseph McKean to develop the underlying theory for these methods, obtain small-sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, ranging from robust nonparametric rank-based procedures to Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial statistics. With the development of R packages in these areas, computation of these procedures is easily shared with readers and implemented. This book developed from the International Conference on Robust Rank-Based and Nonparametric Methods, held at Western Michigan University in April 2015.
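The Wilcoxon-type building block mentioned above can be sketched in a few lines. This is a minimal, generic illustration of the two-sample rank-sum statistic (not code from the book or its R packages); the sample data are hypothetical.

```python
def rank_sum(x, y):
    """Wilcoxon rank-sum statistic W: the sum of the ranks of sample x
    within the pooled, sorted data; tied values receive midranks."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        # extend j over the run of values tied with pooled[i]
        while j + 1 < len(pooled) and pooled[j + 1] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j) / 2 + 1  # 1-based midrank of the run
        i = j + 1
    return sum(ranks[v] for v in x)

# If every value in x is smaller than every value in y,
# x receives the lowest ranks:
w = rank_sum([1.2, 1.5, 1.9], [2.4, 2.8, 3.1])  # ranks 1 + 2 + 3 = 6
```

Large or small values of W relative to its null distribution indicate a location shift between the two samples, which is the basis of the two-sample tests these methods generalize.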
Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems, as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and of its application to realistic system modeling. While many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples are provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies are introduced to demonstrate the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergraduate and graduate students as well as researchers and practitioners. It provides a powerful tool for all those involved in system analysis for reliability, maintenance and risk evaluations.
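The core idea can be sketched very simply: sample random component states many times and count how often the system as a whole works. The 2-out-of-3 structure and the component reliabilities below are illustrative assumptions of ours, not an example from the book.

```python
import random

def system_works(states):
    """2-out-of-3 majority system: works if at least two components work."""
    return sum(states) >= 2

def mc_reliability(p, n_samples=100_000, seed=42):
    """Monte Carlo estimate of system reliability.
    p is the list of per-component success probabilities."""
    rng = random.Random(seed)
    works = 0
    for _ in range(n_samples):
        states = [rng.random() < pi for pi in p]  # sample component states
        works += system_works(states)
    return works / n_samples

est = mc_reliability([0.9, 0.9, 0.9])
# exact reliability of this system: 3 * 0.9**2 * 0.1 + 0.9**3 = 0.972
```

The strength of the approach is that `system_works` can encode arbitrarily complex logic (dependencies, multistate components, repair policies) without changing the sampling loop, which is exactly the flexibility the blurb refers to.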
In this, his most famous work, Pierre-Simon, Marquis de Laplace lays out a system for reasoning based on probability. The single most famous piece introduced in this work is the rule of succession, which calculates the probability that a trial will be a success based on the number of times it has succeeded in the past. Students of mathematics will find A Philosophical Essay on Probabilities an essential read for understanding this complex field of study and applying its truths to their lives. French mathematician PIERRE-SIMON, MARQUIS DE LAPLACE (1749-1827) was essential in the formation of mathematical physics. He spent much of his life working on mathematical astronomy and even suggested the existence of black holes. Laplace is also known for his work on probability.
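The rule of succession itself is simple to state: after s successes in n trials, Laplace's estimate of the probability that the next trial succeeds is (s + 1)/(n + 2). A minimal sketch:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: after `successes` successes in
    `trials` trials, the probability the next trial succeeds is
    (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# With no data at all, the rule gives 1/2 -- total ignorance.
# Laplace's own sunrise example: after n consecutive sunrises,
# the probability of a sunrise tomorrow is (n + 1) / (n + 2).
p_next = rule_of_succession(100, 100)  # 101/102
```

Note that the estimate never reaches 0 or 1, however long the run of identical outcomes; this is the hallmark of the uniform prior underlying the rule.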
Biological and other natural processes have always been a source of inspiration for computer science and information technology. Many emerging problem-solving techniques integrate advanced evolution and cooperation strategies, encompassing a range of spatio-temporal scales for visionary conceptualization of evolutionary computation. The previous editions of NICSO were held in Granada, Spain (2006), Acireale, Italy (2007), Tenerife, Spain (2008), and again in Granada in 2010. NICSO has evolved into one of the most interesting and high-profile workshops in nature-inspired computing. NICSO 2011 offered an inspiring environment for debating state-of-the-art ideas and techniques in nature-inspired cooperative strategies, as well as a comprehensive picture of recent applications of these ideas and techniques. The topics covered by this volume include Swarm Intelligence (such as Ant and Bee Colony Optimization), Genetic Algorithms, Multiagent Systems, Coevolution and Cooperation strategies, Adversarial Models, Synergic Building Blocks, Complex Networks, Social Impact Models, Evolutionary Design, Self Organized Criticality, Evolving Systems, Cellular Automata, Hybrid Algorithms, and Membrane Computing (P-Systems).
The purpose of this textbook is to bring together, in a self-contained introductory form, the scattered material in the field of stochastic processes and statistical physics. It offers the reader the opportunity to become acquainted with stochastic, kinetic and nonequilibrium processes. Although the research techniques in these areas have become standard procedures, they are not usually taught in the normal courses on statistical physics. For students of physics in their last year and graduate students who wish to gain an invaluable introduction to the above subjects, this book is a necessary tool.
This book introduces basic computing skills designed for industry professionals without a strong computer science background. Written in an easily accessible manner, and accompanied by a user-friendly website, it serves as a self-study guide to survey data science and data engineering for those who aspire to start a computing career, or expand on their current roles, in areas such as applied statistics, big data, machine learning, data mining, and informatics. The authors draw from their combined experience working at software and social network companies, on big data products at several major online retailers, as well as their experience building big data systems for an AI startup. Spanning from the basic inner workings of a computer to advanced data manipulation techniques, this book opens doors for readers to quickly explore and enhance their computing knowledge. Computing with Data comprises a wide range of computational topics essential for data scientists, analysts, and engineers, providing them with the necessary tools to be successful in any role that involves computing with data. The introduction is self-contained, and chapters progress from basic hardware concepts to operating systems, programming languages, graphing and processing data, testing and programming tools, big data frameworks, and cloud computing. The book is fashioned with several audiences in mind. Readers without a strong educational background in CS--or those who need a refresher--will find the chapters on hardware, operating systems, and programming languages particularly useful. Readers with a strong educational background in CS, but without significant industry background, will find the following chapters especially beneficial: learning R, testing, programming, visualizing and processing data in Python and R, system design for big data, data stores, and software craftsmanship.
This book provides readers with a greater understanding of a variety of statistical techniques, along with the procedures for using the popular statistical software package SPSS. It strengthens the intuitive understanding of the material, thereby increasing the ability to successfully analyze data in the future. The book provides more control in the analysis of data so that readers can apply the techniques to a broader spectrum of research problems. It focuses on providing readers with the knowledge and skills needed to carry out research in management, the humanities, and the social and behavioural sciences using SPSS.
Very little has been published on the optimization of pharmaceutical portfolios. Moreover, most of the published literature comes from the commercial side, where the probability of technical success (PoS) is treated as fixed, and not as a consequence of development strategy or design. This book places a strong focus on the impact of study design on PoS and, ultimately, on the value of the portfolio. Design options discussed in different chapters include dose-selection strategies, adaptive design and enrichment. Development strategies discussed include indication sequencing, the optimal number of programs and optimal decision criteria. The book includes chapters written by authors with very broad backgrounds, including financial, clinical, statistical, decision sciences, commercial and regulatory. Many of the authors have long held executive positions and have been involved with decision making at a product or portfolio level. As such, the book is expected to attract a very broad audience, including decision makers in pharmaceutical R&D, commercial and financial departments. The intended audience also includes portfolio planners and managers, statisticians, decision scientists and clinicians. Early chapters describe approaches to portfolio optimization from big Pharma and venture capital standpoints, with a stronger focus on finances and processes. Later chapters present selected statistical and decision analysis methods for optimizing drug development programs and portfolios. Some methodological chapters are technical; however, with a few exceptions, they require only a relatively basic knowledge of statistics.
In this book, an integrated introduction to statistical inference is provided from a frequentist likelihood-based viewpoint. Classical results are presented together with recent developments, largely built upon ideas due to R.A. Fisher. After a unified review of background material (statistical methods, likelihood, data reductions, first-order asymptotics) and of inference in the presence of nuisance parameters (including pseudo-likelihoods), a self-contained introduction is given to exponential families, exponential dispersion models, generalized linear models, and group families. Finally, basic results of higher-order asymptotics are introduced (index notation, asymptotic expansions for statistics and distributions, and major applications to likelihood inference). The emphasis is more on general concepts and methods than on regularity conditions. Many examples are given for specific statistical models. Each chapter is supplemented with exercises, problems and bibliographic notes. This volume can serve as a textbook in intermediate-level undergraduate courses.
Inference in finite population sampling is a development essential to the field of sampling. In addition to covering the majority of well-known sampling plans and procedures, this study covers the important topics of the superpopulation approach, randomized response, non-response and resampling techniques. The authors also provide extensive sets of problems ranging in difficulty, making this book beneficial to students.
This research monograph gives a detailed account of a theory which is mainly concerned with certain classes of degenerate differential operators, Markov semigroups and approximation processes. These mathematical objects are generated by arbitrary Markov operators acting on spaces of continuous functions defined on compact convex sets; the study of the interrelations between them constitutes one of the distinguishing features of the book. Among other things, this theory provides useful tools for studying large classes of initial-boundary value evolution problems, the main aim being to obtain a constructive approximation to the associated positive C0-semigroups by means of iterates of suitable positive approximating operators. As a consequence, a qualitative analysis of the solutions to the evolution problems can be efficiently developed. The book is mainly addressed to research mathematicians interested in modern approximation theory by positive linear operators and/or in the theory of positive C0-semigroups of operators and evolution equations. It could also serve as a textbook for a graduate level course.
Biostatistics is the branch of statistics that deals with data relating to living organisms. This manual is a comprehensive guide to biostatistics for medical students. Beginning with an overview of bioethics in clinical research, an introduction to statistics, and a discussion of research methodology, the following sections cover different statistical tests, data interpretation, probability, and other statistical concepts such as demographics and life tables. The final section explains report writing and applying for research grants, and a chapter on 'measurement and error analysis' focuses on research papers and clinical trials.

Key points:
- Comprehensive guide to biostatistics for medical students
- Covers research methodology, statistical tests, data interpretation, probability and more
- Includes other statistical concepts such as demographics and life tables
- Explains report writing and grant applications in depth
This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations. The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity, population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models. The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.
For the first two editions of the book Probability (GTM 95), each chapter included a comprehensive and diverse set of relevant exercises. While the work on the third edition was still in progress, it was decided that it would be more appropriate to publish a separate book comprising all of the exercises from the previous editions, in addition to many new exercises. Most of the material in this book consists of exercises created by Shiryaev, collected and compiled over the course of many years while working on many interesting topics. Many of the exercises resulted from discussions that took place during special seminars for graduate and undergraduate students. Many of the exercises included in the book contain helpful hints and other relevant information. Lastly, the author has included an appendix at the end of the book that contains a summary of the main results, notation and terminology from probability theory that are used throughout the present book. This appendix also contains additional material from combinatorics, potential theory and Markov chains, which is not covered in the book but is nevertheless needed for many of the exercises included here.
This book provides a generalised approach to fractal dimension theory from the standpoint of asymmetric topology by employing the concept of a fractal structure. The fractal dimension is the main invariant of a fractal set, and provides useful information regarding the irregularities it presents when examined at a suitable level of detail. New theoretical models for calculating the fractal dimension of any subset with respect to a fractal structure are posed to generalise both the Hausdorff and box-counting dimensions. Some specific results for self-similar sets are also proved. Unlike classical fractal dimensions, these new models can be used in empirical applications of fractal dimension, including non-Euclidean contexts. In addition, the book applies these fractal dimensions to explore long memory in financial markets. In particular, novel results linking both fractal dimension and the Hurst exponent are provided. As such, the book provides a number of algorithms for properly calculating the self-similarity exponent of a wide range of processes, including (fractional) Brownian motion and Lévy stable processes. The algorithms also make it possible to analyse long memory in real stocks and international indexes. This book is addressed to those researchers interested in fractal geometry, self-similarity patterns, and computational applications involving fractal dimension and Hurst exponent.
Epidemiologic Studies in Cancer Prevention and Screening is the first comprehensive overview of the evidence base for both cancer prevention and screening. This book is directed to the many professionals in government, academia, public health and health care who need up-to-date information on the potential for reducing the impact of cancer, including physicians, nurses, epidemiologists, and research scientists. The main aim of the book is to provide a realistic appraisal of the evidence for both cancer prevention and cancer screening. In addition, the book provides an accounting of the extent to which programs based on available knowledge have impacted populations. It does this through:

1. Presentation of a rigorous and realistic evaluation of the evidence for population-based interventions in prevention of and screening for cancer, with particular relevance to those believed to be applicable now, or on the cusp of application
2. Evaluation of the relative contributions of prevention and screening
3. Discussion of how, within the health systems with which the authors are familiar, prevention and screening for cancer can be enhanced.

An overview of the evidence base for cancer prevention and screening, as provided in Epidemiologic Studies in Cancer Prevention and Screening, is critically important given current debates within the scientific community. Of the five components of cancer control (prevention, early detection including screening, treatment, rehabilitation and palliative care), prevention is regarded as the most important. Yet the knowledge available to prevent many cancers is incomplete, and even when we know the main causal factors for a cancer, we often lack the understanding needed to put this knowledge into effect. Further, given the long natural history of most cancers, it could take many years to make an appreciable impact upon the incidence of the cancer.
Because of these facts, many have come to believe that screening has the most potential for reduction of the burden of cancer. Yet, through trying to apply the knowledge gained on screening for cancer, the scientific community has recognized that screening can have major disadvantages and achieve little at substantial cost. This reduces the resources that are potentially available both for prevention and for treatment.
Reliability and Safety of Complex Technical Systems and Processes offers a comprehensive approach to the analysis, identification, evaluation, prediction and optimization of the operation, reliability and safety of complex technical systems. Its main emphasis is on multistate systems with ageing components, changes to their structure, and changes to their components' reliability and safety parameters during the operation processes. The book presents integrated models for the reliability, availability and safety of complex non-repairable and repairable multistate technical systems, with reference to their operation processes and their practical applications to real industrial systems. The authors consider variables in different operation states, reliability and safety structures, and the reliability and safety parameters of components, as well as suggesting a cost analysis for complex technical systems. Researchers and industry practitioners will find information on a wide range of complex technical systems in this book. It may prove an easy-to-use guide to reliability and safety evaluations of real complex technical systems, both during their operation and at the design stage.
Strategy and Statistics in Clinical Trials deals with the research processes and the role of statistics in these processes. The book offers real-life case studies and provides a practical, how-to guide to biomedical R&D. It describes the statistical building blocks and concepts of clinical trials and promotes effective cooperation between statisticians and the other important parties involved. The discussion is organized around 15 chapters. After providing an overview of clinical development and statistics, the book explores the questions that arise when planning clinical trials, along with the attributes of medical products. It then explains how to set research objectives and goes on to consider statistical thinking, estimation, testing procedures, statistical significance, and explanation and prediction. The rest of the book focuses on exploratory and confirmatory clinical trials; hypothesis testing and multiplicity; elements of clinical trial design; choosing trial endpoints; and determination of sample size. This book is for all individuals engaged in clinical research who are interested in a better understanding of statistics, including professional clinical researchers, professors, physicians, and laboratory researchers. It will also be of interest to corporate and government laboratories, clinical research nurses, members of the allied health professions, and post-doctoral and graduate students.
This book presents extensive information on structural health monitoring for suspension bridges. During the past two decades, there have been significant advances in the sensing technologies employed in long-span bridge health monitoring. However, interpretation of the massive monitoring data is still lagging behind. This book establishes a series of measurement interpretation frameworks that focus on bridge site environmental conditions, and global and local responses of suspension bridges. Using the proposed frameworks, it subsequently offers new insights into the structural behaviors of long-span suspension bridges. As a valuable resource for researchers, scientists and engineers in the field of bridge structural health monitoring, it provides essential information, methods, and practical algorithms that can facilitate in-service bridge performance assessments.
What are the current trends in housing? Is my planned project commercially viable? What should my marketing and advertisement strategies be? These are just some of the questions real estate agents, landlords and developers ask researchers to answer. But to find the answers, researchers are faced with a wide variety of methods that measure housing preferences and choices. To select and evaluate a valid research method, one needs a well-structured overview of the methods used in housing preference and housing choice research. This comprehensive introduction to the field offers just such an overview. It discusses and compares numerous methods, detailing the potential limitations of each one, and it reaches beyond methodology, illustrating how thoughtful consideration of methods and techniques in research can help researchers and other professionals deliver products and services that are more in line with residents' needs.
This book treats the notion of morphisms in spatial analysis, paralleling these concepts in spatial statistics (Part I) and spatial econometrics (Part II). The principal concept is morphism (e.g., isomorphisms, homomorphisms, and allomorphisms), which is defined as a structure preserving the functional linkage between mathematical properties or operations in spatial statistics and spatial econometrics, among other disciplines. The purpose of this book is to present selected conceptions in both domains that are structurally the same, even though their labelling and the notation for their elements may differ. As the approaches presented here are applied to empirical materials in geography and economics, the book will also be of interest to scholars of regional science, quantitative geography and the geospatial sciences. It is a follow-up to the book "Non-standard Spatial Statistics and Spatial Econometrics" by the same authors, which was published by Springer in 2011.
International migration is becoming an increasingly important element of contemporary demographic dynamics and yet, due to its high volatility, it remains the most unpredictable element of population change. In Europe, population forecasting is especially difficult because good-quality data on migration are lacking. There is a clear need for reliable methods of predicting migration since population forecasts are indispensable for rational decision making in many areas, including labour markets, social security or spatial planning and organisation. In addressing these issues, this book adopts a Bayesian statistical perspective, which allows for a formal incorporation of expert judgement, while describing uncertainty in a coherent and explicit manner. No prior knowledge of Bayesian statistics is assumed. The outcomes are discussed from the point of view of forecast users (decision makers), with the aim to show the relevance and usefulness of the presented methods in practical applications. |
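The Bayesian logic described above, an expert prior updated by data with uncertainty carried through explicitly, can be sketched with a conjugate Poisson-Gamma model. The model choice and all numbers below are illustrative assumptions of ours, not taken from the book.

```python
def posterior_gamma(prior_shape, prior_rate, counts):
    """Gamma(a, b) prior on a Poisson rate, updated with observed counts:
    the posterior is Gamma(a + sum(counts), b + len(counts))."""
    return prior_shape + sum(counts), prior_rate + len(counts)

# Hypothetical expert judgement on annual migration (in thousands):
# prior mean a/b = 50, with wide uncertainty (prior variance a/b**2 = 500).
a0, b0 = 5.0, 0.1

# Three hypothetical years of observed migration counts (thousands).
a, b = posterior_gamma(a0, b0, counts=[48, 55, 61])

posterior_mean = a / b       # point forecast of the annual rate
posterior_var = a / b ** 2   # uncertainty, stated explicitly
```

The posterior mean is a compromise between the expert prior and the data, and the posterior variance shrinks as observations accumulate, which is exactly the coherent, explicit treatment of forecast uncertainty the blurb emphasises.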
You may like...
Recent Advances in Laser Ablation ICP-MS…
Laure Dussubieux, Mark Golitko, …
Hardcover
R5,200
Discovery Miles 52 000
The Roman Historical Tradition - Regal…
James H. Richardson, Federico Santangelo
Hardcover
R4,276
Discovery Miles 42 760
Advanced Topics in Bisimulation and…
Davide Sangiorgi, Jan Rutten
Hardcover
R3,404
Discovery Miles 34 040
Analytic Combinatorics for Multiple…
Roy Streit, Robert Blair Angle, …
Hardcover
R3,626
Discovery Miles 36 260
Algebra and Geometry with Python
Sergei Kurgalin, Sergei Borzunov
Hardcover
R2,701
Discovery Miles 27 010