This book covers topics in Information Geometry, a field which deals with the differential-geometric study of the manifold of probability density functions. It is a field that is increasingly attracting the interest of researchers from many different areas of science, including mathematics, statistics, geometry, computer science, signal processing, physics and neuroscience. It is the authors' hope that the present book will be a valuable reference for researchers and graduate students in one of the aforementioned fields. This textbook is a unified presentation of differential geometry and probability theory, and constitutes a text for a course directed at graduate or advanced undergraduate students interested in applications of differential geometry in probability and statistics. The book contains over 100 proposed exercises meant to help students deepen their understanding, and it is accompanied by software that provides numerical computations of several information-geometric objects. The reader will come to understand a flourishing field of mathematics in which very few books have been written so far.
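The central object of the field is the Fisher information metric, which turns a parametric family of densities into a Riemannian manifold. For orientation (standard notation, not necessarily the book's), it is defined by

```latex
g_{ij}(\theta)
  = \mathbb{E}_{\theta}\!\left[
      \frac{\partial \log p(x;\theta)}{\partial \theta_i}\,
      \frac{\partial \log p(x;\theta)}{\partial \theta_j}
    \right],
```

so that distances between nearby distributions measure how statistically distinguishable they are from data.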
Machine learning is concerned with the analysis of large data sets and multiple variables. However, it is often also more sensitive than traditional statistical methods for the analysis of small data sets. The first volume reviewed subjects like optimal scaling, neural networks, factor analysis, partial least squares, discriminant analysis, canonical analysis, and fuzzy modeling. This second volume covers various clustering models, support vector machines, Bayesian networks, discrete wavelet analysis, genetic programming, association rule learning, anomaly detection, correspondence analysis, and other subjects. Both the theoretical bases and the step-by-step analyses are described for the benefit of non-mathematical readers. Each chapter can be studied without the need to consult other chapters. Traditional statistical tests sometimes serve as precursors to machine learning methods, and they are also sometimes used as contrast tests. For those wishing to obtain more knowledge of them, we recommend additionally studying (1) Statistics Applied to Clinical Studies 5th Edition 2012, (2) SPSS for Starters Part One and Two 2012, and (3) Statistical Analysis of Clinical Data on a Pocket Calculator Part One and Two 2012, written by the same authors and published by Springer, New York.
This book provides a comprehensive overview of the theory and praxis of Big Data Analytics and how these are used to extract cognition-related information from social media and literary texts. It presents analytics that transcends the borders of discipline-specific academic research and focuses on knowledge extraction, prediction, and decision-making in the context of individual, social, and national development. The content is divided into three main sections: the first of which discusses various approaches associated with Big Data Analytics, while the second addresses the security and privacy of big data in social media, and the last focuses on the literary text as the literary data in Big Data Analytics. Sharing valuable insights into the etiology behind human cognition and its reflection in social media and literary texts, the book benefits all those interested in analytics that can be applied to literature, history, philosophy, linguistics, literary theory, media & communication studies and computational/digital humanities.
New Perspectives in Partial Least Squares and Related Methods shares original, peer-reviewed research from presentations at the 2012 partial least squares methods meeting (PLS 2012), the 7th meeting in the series of PLS conferences and the first to take place in the USA. PLS is an abbreviation for Partial Least Squares, sometimes also expanded as projection to latent structures. It is an approach for modeling relations between data matrices of different types of variables measured on the same set of objects. The twenty-two papers in this volume, which include three invited contributions from the keynote speakers, provide a comprehensive overview of the current state of the most advanced research related to PLS and related methods. Prominent scientists from around the world took part in PLS 2012, and their contributions covered the multiple dimensions of partial least squares-based methods. These theoretical developments ranged from partial least squares regression and correlation and component-based path modeling to regularized regression and subspace visualization. Following the tradition of the six previous PLS meetings, the contributions also included a large variety of PLS approaches, such as PLS metamodels, variable selection, sparse PLS regression, distance-based PLS, significance vs. reliability, and non-linear PLS. Finally, these contributions applied PLS methods to data ranging from traditional econometric/economic data to genomics data, brain images, information systems, epidemiology, and chemical spectroscopy. Such a broad and comprehensive volume will also encourage new uses of PLS models in work by researchers and students in many fields.
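As a minimal illustration of the core method (my sketch using scikit-learn, not code from the volume), PLS regression projects both data blocks onto a small number of latent structures and regresses through them; the data below are synthetic:

```python
# A minimal sketch of partial least squares (PLS) regression with
# scikit-learn; illustrative only, not code from this volume.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                     # predictor block
Y = X[:, :2] @ np.array([[1.0], [0.5]]) + 0.1 * rng.normal(size=(100, 1))

pls = PLSRegression(n_components=2)                # two latent structures
pls.fit(X, Y)
print(f"R^2 = {pls.score(X, Y):.3f}")              # fit of the PLS model
```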
This book provides a groundbreaking introduction to likelihood inference for correlated survival data via the hierarchical (or h-) likelihood, which is used to obtain the (marginal) likelihood and to address the computational difficulties in inferences and extensions. The approach presented in the book overcomes shortcomings of traditional likelihood-based methods for clustered survival data, such as intractable integration. The text includes technical material such as derivations and proofs in each chapter, while real-world data examples together with the recently developed R package "frailtyHL" (available on CRAN) provide readers with useful hands-on tools. Reviewing new developments since the introduction of the h-likelihood to survival analysis (methods for interval estimation of the individual frailty and for variable selection of the fixed effects in the general class of frailty models) and guiding future directions, the book is of interest to researchers in the medical and genetics fields, graduate students, and PhD (bio)statisticians.
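For orientation (standard h-likelihood notation, which may differ in detail from the book's), a shared frailty model and its hierarchical likelihood take the form

```latex
\lambda_{ij}(t) = \lambda_0(t)\, u_i \exp(x_{ij}^{\top}\beta),
\qquad
h(\beta, v) = \log f(y \mid v; \beta) + \log f(v),
\quad v_i = \log u_i ,
```

where the frailty $u_i$ is a cluster-level random effect; working with $h$ directly avoids the intractable integral that defines the marginal likelihood.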
This book presents the breadth and diversity of empirical and practical work done on statistics education around the world. A wide range of methods is used to respond to the research questions that form its base: both case studies of single students or teachers aimed at understanding reasoning processes and large-scale experimental studies attempting to generalize trends in the teaching and learning of statistics are employed. Various epistemological stances are described and utilized. The teaching and learning of statistics is presented in multiple contexts, including designed settings for young children, students in formal schooling, tertiary-level students, vocational schools, and teacher professional development. Diversity is also evident in the choices of what to teach (curriculum), when to teach (learning trajectory), how to teach (pedagogy), how to demonstrate evidence of learning (assessment), and what challenges teachers and students face when they solve statistical problems (reasoning and thinking).
This monograph provides an in-depth mathematical treatment of modern multiple test procedures controlling the false discovery rate (FDR) and related error measures, particularly addressing applications in fields such as genetics, proteomics, neuroscience and general biology. The book also includes a detailed description of how to implement these methods in practice. Moreover, new developments focusing on non-standard assumptions are included, especially multiple tests for discrete data. The book primarily addresses researchers and practitioners but will also be beneficial for graduate students.
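As a point of reference, the canonical FDR-controlling method is the Benjamini-Hochberg step-up procedure; a minimal sketch (my illustration, not code from the monograph) is:

```python
# A minimal sketch of the Benjamini-Hochberg step-up procedure, the
# classical FDR-controlling multiple test; illustrative, not from the book.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m      # k*q/m for sorted p-values
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])          # largest k with p_(k) <= k*q/m
        reject[order[: k + 1]] = True             # reject the k smallest p-values
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3]))
```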
This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and funded by the Science Foundation Ireland under its Mathematics Initiative.
The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small-sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, ranging from robust nonparametric rank-based procedures to Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial statistics. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice, including linear, generalized linear, mixed, and nonlinear models, in both univariate and multivariate settings. With the development of R packages in these areas, computation of these procedures is easily implemented and shared with readers. This book developed from the International Conference on Robust Rank-Based and Nonparametric Methods, held at Western Michigan University in April 2015.
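As a baseline example of the Wilcoxon-type procedures these methods generalize (my illustration, not conference code), the classical two-sample rank-sum test is available in SciPy:

```python
# A small illustration of a classical rank-based procedure: the
# two-sample Wilcoxon rank-sum test via SciPy. Not code from the volume.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, size=30)
y = rng.normal(loc=0.8, size=30)     # location-shifted sample

stat, pvalue = ranksums(x, y)        # tests equality of locations using ranks
print(f"statistic={stat:.3f}, p-value={pvalue:.4f}")
```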
Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems, as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and of its application to realistic system modeling. While many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples are provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies are introduced to demonstrate the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergraduate and graduate students as well as researchers and practitioners. It provides a powerful tool for all those involved in system analysis for reliability, maintenance and risk evaluations.
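A minimal sketch of the idea (hypothetical failure rates and a toy series-parallel structure; the book treats far richer system models) estimates mission reliability by sampling component lifetimes:

```python
# Monte Carlo estimate of system reliability: a series arrangement of two
# parallel pairs, with exponential component lifetimes (assumed rates).
import numpy as np

rng = np.random.default_rng(42)
n, mission_time = 100_000, 1.0
rates = np.array([1.0, 1.2, 0.8, 1.5])           # hypothetical failure rates

t = rng.exponential(1.0 / rates, size=(n, 4))    # sampled component lifetimes
pair1 = np.maximum(t[:, 0], t[:, 1])             # a parallel pair survives while either unit does
pair2 = np.maximum(t[:, 2], t[:, 3])
system = np.minimum(pair1, pair2)                # series: fails with the first pair to fail

reliability = np.mean(system > mission_time)     # P(system survives the mission)
std_err = np.sqrt(reliability * (1 - reliability) / n)
print(f"R(t=1) ~ {reliability:.4f} +/- {std_err:.4f}")
```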
In this, his most famous work, Pierre-Simon, Marquis de Laplace lays out a system for reasoning based on probability. The single most famous piece introduced in this work is the rule of succession, which calculates the probability that a trial will be a success based on the number of times it has succeeded in the past. Students of mathematics will find A Philosophical Essay on Probabilities an essential read for understanding this complex field of study and applying its truths to their lives. French mathematician PIERRE-SIMON, MARQUIS DE LAPLACE (1749-1827) was essential in the formation of mathematical physics. He spent much of his life working on mathematical astronomy and even suggested the existence of black holes. Laplace is also known for his work on probability.
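In modern notation, the rule of succession says that after observing $s$ successes in $n$ independent trials, under a uniform prior on the unknown success probability,

```latex
P(\text{success on trial } n+1 \mid s \text{ successes in } n \text{ trials})
  = \frac{s+1}{n+2},
```

which is the mean of the Beta$(s+1,\,n-s+1)$ posterior; with $s = n$ (an unbroken run of successes, as in Laplace's sunrise example) this gives $(n+1)/(n+2)$.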
Biological and other natural processes have always been a source of inspiration for computer science and information technology. Many emerging problem-solving techniques integrate advanced evolution and cooperation strategies, encompassing a range of spatio-temporal scales for visionary conceptualization of evolutionary computation. The previous editions of NICSO were held in Granada, Spain (2006), Acireale, Italy (2007), Tenerife, Spain (2008), and again in Granada in 2010. NICSO has evolved into one of the most interesting and high-profile workshops in nature-inspired computing. NICSO 2011 offered an inspiring environment for debating state-of-the-art ideas and techniques in nature-inspired cooperative strategies, and a comprehensive picture of recent applications of these ideas and techniques. The topics covered by this volume include Swarm Intelligence (such as Ant and Bee Colony Optimization), Genetic Algorithms, Multiagent Systems, Coevolution and Cooperation strategies, Adversarial Models, Synergic Building Blocks, Complex Networks, Social Impact Models, Evolutionary Design, Self-Organized Criticality, Evolving Systems, Cellular Automata, Hybrid Algorithms, and Membrane Computing (P-Systems).
The purpose of this textbook is to bring together, in a self-contained introductory form, the scattered material in the field of stochastic processes and statistical physics. It offers the opportunity of becoming acquainted with stochastic, kinetic and nonequilibrium processes. Although the research techniques in these areas have become standard procedures, they are not usually taught in the normal courses on statistical physics. For final-year physics students and graduate students who wish to gain an introduction to the above subjects, this book is an invaluable tool.
Inference in finite-population sampling is a new development that is essential for the field of sampling. In addition to covering the majority of well-known sampling plans and procedures, this study covers the important topics of the superpopulation approach, randomized response, non-response and resampling techniques. The authors also provide extensive sets of problems ranging in difficulty, making this book beneficial to students.
This book provides readers with a greater understanding of a variety of statistical techniques, along with the procedures for using the most popular statistical software package, SPSS. It strengthens the reader's intuitive understanding of the material, thereby increasing the ability to successfully analyze data in the future. The book gives readers more control in the analysis of data so that they can apply the techniques to a broader spectrum of research problems. Its focus is on providing the knowledge and skills needed to carry out research in management, humanities, and the social and behavioural sciences using SPSS.
Very little has been published on the optimization of pharmaceutical portfolios. Moreover, most of the published literature comes from the commercial side, where the probability of technical success (PoS) is treated as fixed rather than as a consequence of development strategy or design. This book places a strong focus on the impact of study design on PoS and, ultimately, on the value of the portfolio. Design options discussed in different chapters include dose-selection strategies, adaptive design and enrichment. Development strategies discussed include indication sequencing, the optimal number of programs and optimal decision criteria. The book includes chapters written by authors with very broad backgrounds spanning finance, clinical development, statistics, decision sciences, commercial operations and regulation. Many authors have long held executive positions and have been involved with decision making at a product or at a portfolio level. As such, this book is expected to attract a very broad audience, including decision makers in pharmaceutical R&D, commercial and financial departments. The intended audience also includes portfolio planners and managers, statisticians, decision scientists and clinicians. Early chapters describe approaches to portfolio optimization from the big Pharma and venture capital standpoints, with a stronger focus on finances and processes. Later chapters present selected statistical and decision analysis methods for optimizing drug development programs and portfolios. Some methodological chapters are technical; however, with a few exceptions they require only a relatively basic knowledge of statistics.
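The coupling between design and value that the book examines can be summarized, in a deliberately generic form (my sketch, not the book's model), as an expected net present value per program,

```latex
\mathbb{E}[\mathrm{NPV}(d)]
  = \mathrm{PoS}(d)\cdot V_{\mathrm{launch}} - C_{\mathrm{dev}}(d),
```

where the design $d$ (dose selection, adaptation, enrichment) enters both the probability of technical success and the development cost, so that portfolio optimization trades the two off across programs.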
In this book, an integrated introduction to statistical inference is provided from a frequentist, likelihood-based viewpoint. Classical results are presented together with recent developments, largely built upon ideas due to R.A. Fisher. After a unified review of background material (statistical methods, likelihood, data reductions, first-order asymptotics) and of inference in the presence of nuisance parameters (including pseudo-likelihoods), a self-contained introduction is given to exponential families, exponential dispersion models, generalized linear models, and group families. Finally, basic results of higher-order asymptotics are introduced (index notation, asymptotic expansions for statistics and distributions, and major applications to likelihood inference). The emphasis is more on general concepts and methods than on regularity conditions. Many examples are given for specific statistical models. Each chapter is supplemented with exercises, problems and bibliographic notes. This volume can serve as a textbook in intermediate-level undergraduate courses.
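For orientation (standard notation, which may differ from the book's), an exponential family in canonical form is

```latex
f(y;\theta) = h(y)\exp\{\theta^{\top} t(y) - A(\theta)\},
\qquad
\mathbb{E}_{\theta}[t(Y)] = \nabla A(\theta),
\quad
\operatorname{Var}_{\theta}[t(Y)] = \nabla^{2} A(\theta),
```

identities from which the score, the Fisher information and first-order likelihood asymptotics follow directly.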
This research monograph gives a detailed account of a theory which is mainly concerned with certain classes of degenerate differential operators, Markov semigroups and approximation processes. These mathematical objects are generated by arbitrary Markov operators acting on spaces of continuous functions defined on compact convex sets; the study of the interrelations between them constitutes one of the distinguishing features of the book. Among other things, this theory provides useful tools for studying large classes of initial-boundary value evolution problems, the main aim being to obtain a constructive approximation to the associated positive C0-semigroups by means of iterates of suitable positive approximating operators. As a consequence, a qualitative analysis of the solutions to the evolution problems can be efficiently developed. The book is mainly addressed to research mathematicians interested in modern approximation theory by positive linear operators and/or in the theory of positive C0-semigroups of operators and evolution equations. It could also serve as a textbook for a graduate level course.
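Schematically, the constructive approximation in question is of Trotter type: under suitable consistency and stability assumptions on a sequence of positive operators $(L_n)$ (my paraphrase of the standard framework, not the book's precise statement),

```latex
T(t)f = \lim_{n\to\infty} L_n^{\,k(n)} f
\qquad \text{whenever } \frac{k(n)}{n} \to t,
```

so the positive $C_0$-semigroup solving the evolution problem is recovered from iterates of the approximating operators.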
Biostatistics is the branch of statistics that deals with data relating to living organisms. This manual is a comprehensive guide to biostatistics for medical students. Beginning with an overview of bioethics in clinical research, an introduction to statistics, and a discussion of research methodology, the following sections cover different statistical tests, data interpretation, probability, and other statistical concepts such as demographics and life tables. The final section explains report writing and applying for research grants, and a chapter on 'measurement and error analysis' focuses on research papers and clinical trials. Key points:
- Comprehensive guide to biostatistics for medical students
- Covers research methodology, statistical tests, data interpretation, probability and more
- Includes other statistical concepts such as demographics and life tables
- Explains report writing and grant application in depth
For the first two editions of the book Probability (GTM 95), each chapter included a comprehensive and diverse set of relevant exercises. While the work on the third edition was still in progress, it was decided that it would be more appropriate to publish a separate book comprising all of the exercises from the previous editions, in addition to many new exercises. Most of the material in this book consists of exercises created by Shiryaev, collected and compiled over the course of many years while working on many interesting topics. Many of the exercises resulted from discussions that took place during special seminars for graduate and undergraduate students. Many of the exercises included in the book contain helpful hints and other relevant information. Lastly, the author has included an appendix at the end of the book that contains a summary of the main results, notation and terminology from probability theory that are used throughout the present book. This appendix also contains additional material from combinatorics, potential theory and Markov chains, which is not covered in the book but is nevertheless needed for many of the exercises included here.
This book provides a generalised approach to fractal dimension theory from the standpoint of asymmetric topology by employing the concept of a fractal structure. The fractal dimension is the main invariant of a fractal set, and provides useful information regarding the irregularities the set presents when examined at a suitable level of detail. New theoretical models for calculating the fractal dimension of any subset with respect to a fractal structure are posed to generalise both the Hausdorff and box-counting dimensions. Some specific results for self-similar sets are also proved. Unlike classical fractal dimensions, these new models can be used in empirical applications of fractal dimension, including non-Euclidean contexts. In addition, the book applies these fractal dimensions to explore long memory in financial markets. In particular, novel results linking the fractal dimension and the Hurst exponent are provided. The book also provides a number of algorithms for properly calculating the self-similarity exponent of a wide range of processes, including (fractional) Brownian motion and Lévy stable processes. The algorithms also make it possible to analyse long memory in real stocks and international indexes. This book is addressed to researchers interested in fractal geometry, self-similarity patterns, and computational applications involving the fractal dimension and Hurst exponent.
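One standard way to estimate the self-similarity (Hurst) exponent of a path, sketched here for orientation (an aggregated-variance estimator, not necessarily among the book's algorithms), exploits the fact that the variance of increments at lag $m$ scales like $m^{2H}$ for fractional Brownian motion:

```python
# Rough sketch: estimate the Hurst exponent H from the scaling of
# increment variances. For ordinary Brownian motion the true H is 0.5.
import numpy as np

rng = np.random.default_rng(7)
X = np.cumsum(rng.normal(size=100_000))          # Brownian path, H = 0.5

lags = np.unique(np.logspace(0.5, 3, 20).astype(int))
v = [np.var(X[m:] - X[:-m]) for m in lags]       # increment variance per lag

slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
print(f"estimated H ~ {slope / 2:.3f}")          # log-log slope equals 2H
```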
This book introduces basic computing skills designed for industry professionals without a strong computer science background. Written in an easily accessible manner, and accompanied by a user-friendly website, it serves as a self-study guide to survey data science and data engineering for those who aspire to start a computing career, or expand on their current roles, in areas such as applied statistics, big data, machine learning, data mining, and informatics. The authors draw from their combined experience working at software and social network companies, on big data products at several major online retailers, as well as their experience building big data systems for an AI startup. Spanning from the basic inner workings of a computer to advanced data manipulation techniques, this book opens doors for readers to quickly explore and enhance their computing knowledge. Computing with Data comprises a wide range of computational topics essential for data scientists, analysts, and engineers, providing them with the necessary tools to be successful in any role that involves computing with data. The introduction is self-contained, and chapters progress from basic hardware concepts to operating systems, programming languages, graphing and processing data, testing and programming tools, big data frameworks, and cloud computing. The book is fashioned with several audiences in mind. Readers without a strong educational background in CS, or those who need a refresher, will find the chapters on hardware, operating systems, and programming languages particularly useful. Readers with a strong educational background in CS, but without significant industry background, will find the following chapters especially beneficial: learning R, testing, programming, visualizing and processing data in Python and R, system design for big data, data stores, and software craftsmanship.
This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations. The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity, population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models. The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.
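As a minimal, purely illustrative instance of such a discrete-time, state-based model (hypothetical rates, not the book's data), an age-structured Leslie matrix projects abundance across the annual cycle:

```python
# A minimal sketch of a discrete-time, age-structured population model
# (a Leslie matrix projection over the annual cycle). All rates are
# hypothetical; the book fits such models to imperfect survey data.
import numpy as np

L = np.array([
    [0.0, 1.2, 1.8],    # fecundity of age classes 1-3
    [0.6, 0.0, 0.0],    # survival from age class 1 to 2
    [0.0, 0.8, 0.0],    # survival from age class 2 to 3
])
n = np.array([100.0, 50.0, 20.0])   # initial abundance per age class

for year in range(5):
    n = L @ n                        # project one annual cycle forward
    print(year + 1, np.round(n, 1))
```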
Epidemiologic Studies in Cancer Prevention and Screening is the first comprehensive overview of the evidence base for both cancer prevention and screening. This book is directed to the many professionals in government, academia, public health and health care who need up-to-date information on the potential for reducing the impact of cancer, including physicians, nurses, epidemiologists, and research scientists. The main aim of the book is to provide a realistic appraisal of the evidence for both cancer prevention and cancer screening. In addition, the book provides an accounting of the extent to which programs based on available knowledge have impacted populations. It does this through:
1. Presentation of a rigorous and realistic evaluation of the evidence for population-based interventions in the prevention of and screening for cancer, with particular relevance to those believed to be applicable now or on the cusp of application
2. Evaluation of the relative contributions of prevention and screening
3. Discussion of how, within the health systems with which the authors are familiar, prevention and screening for cancer can be enhanced
An overview of the evidence base for cancer prevention and screening, as presented in Epidemiologic Studies in Cancer Prevention and Screening, is critically important given current debates within the scientific community. Of the five components of cancer control (prevention, early detection including screening, treatment, rehabilitation and palliative care), prevention is regarded as the most important. Yet the knowledge available to prevent many cancers is incomplete, and even when we know the main causal factors for a cancer, we often lack the understanding to put this knowledge into effect. Further, given the long natural history of most cancers, it could take many years to make an appreciable impact on the incidence of a cancer. Because of this, many have come to believe that screening has the greatest potential for reducing the burden of cancer. Yet, in trying to apply the knowledge gained on screening for cancer, the scientific community has recognized that screening can have major disadvantages and achieve little at substantial cost. This reduces the resources that are potentially available both for prevention and for treatment.