For courses in Multivariate Statistics, Marketing Research, Intermediate Business Statistics, Statistics in Education, and graduate-level courses in Experimental Design and Statistics. Appropriate for experimental scientists in a variety of disciplines, this market-leading text offers a readable introduction to the statistical analysis of multivariate observations. Its primary goal is to impart the knowledge necessary to make proper interpretations and select appropriate techniques for analyzing multivariate data. Ideal for a junior/senior or graduate level course that explores the statistical methods for describing and analyzing multivariate data, the text assumes two or more statistics courses as a prerequisite.
This book reports on the latest advances in the analysis of non-stationary signals, with special emphasis on cyclostationary systems. It includes cutting-edge contributions presented at the 7th Workshop on "Cyclostationary Systems and Their Applications," which was held in Grodek nad Dunajcem, Poland, in February 2014. The book covers both the theoretical properties of cyclostationary models and processes, including estimation problems for systems exhibiting cyclostationary properties, and several applications of cyclostationary systems, including case studies on gears and bearings, and methods for implementing cyclostationary processes for damage assessment in condition-based maintenance operations. It addresses the needs of students, researchers and professionals in the broad fields of engineering, mathematics and physics, with a special focus on those studying or working with nonstationary and/or cyclostationary processes.
During the last decades, there has been an explosion in computation and information technology. This development comes with an expansion of complex observational studies and clinical trials in a variety of fields such as medicine, biology, epidemiology, sociology, and economics among many others, which involve collection of large amounts of data on subjects or organisms over time. The goal of such studies can be formulated as estimation of a finite dimensional parameter of the population distribution corresponding to the observed time-dependent process. Such estimation problems arise in survival analysis, causal inference and regression analysis. This book provides a fundamental statistical framework for the analysis of complex longitudinal data. It provides the first comprehensive description of optimal estimation techniques based on time-dependent data structures subject to informative censoring and treatment assignment in so called semiparametric models. Semiparametric models are particularly attractive since they allow the presence of large unmodeled nuisance parameters. These techniques include estimation of regression parameters in the familiar (multivariate) generalized linear regression and multiplicative intensity models. They go beyond standard statistical approaches by incorporating all the observed data to allow for informative censoring, to obtain maximal efficiency, and by developing estimators of causal effects. It can be used to teach masters and Ph.D. students in biostatistics and statistics and is suitable for researchers in statistics with a strong interest in the analysis of complex longitudinal data.
Single Subject Designs in Biomedicine draws upon the rich history of single case research within the educational and behavioral research settings and extends the application to the field of biomedicine. Biomedical illustrations are used to demonstrate the processes of designing, implementing, and evaluating a single subject design. Strengths and limitations of various methodologies are presented, along with specific clinical areas of application in which these applications would be appropriate. Statistical and visual techniques for data analysis are also discussed. The breadth and depth of information provided is suitable for medical students in research oriented courses, primary care practitioners and medical specialists seeking to apply methods of evidence practice to improve patient care, and medical researchers who are expanding their methodological expertise to include single subject designs. Increasing awareness of the utility in the single subject design could enhance treatment approach and evaluation both in biomedical research and medical care settings.
This is the third edition of this text on logistic regression methods, originally published in 1994, with its second edition published in 2002. As in the first two editions, each chapter contains a presentation of its topic in "lecture-book" format together with objectives, an outline, key formulae, practice exercises, and a test. The "lecture book" has a sequence of illustrations, formulae, or summary statements in the left column of each page and a script (i.e., text) in the right column. This format allows you to read the script in conjunction with the illustrations and formulae that highlight the main points, formulae, or examples being presented. This third edition has expanded the second edition by adding three new chapters and a modified computer appendix. We have also expanded our overview of modeling strategy guidelines in Chap. 6 to consider causal diagrams. The three new chapters are as follows: Chapter 8: Additional Modeling Strategy Issues; Chapter 9: Assessing Goodness of Fit for Logistic Regression; Chapter 10: Assessing Discriminatory Performance of a Binary Logistic Model: ROC Curves. In adding these three chapters, we have moved Chaps. 8 through 13 from the second edition to follow the new chapters, so that these previous chapters have been renumbered as Chaps. 11-16 in this third edition.
The contributions in this book focus on a variety of topics related to discrepancy theory, comprising Fourier techniques to analyze discrepancy, low discrepancy point sets for quasi-Monte Carlo integration, probabilistic discrepancy bounds, dispersion of point sets, pair correlation of sequences, integer points in convex bodies, discrepancy with respect to geometric shapes other than rectangular boxes, and also open problems in discrepancy theory.
This edited collection discusses the emerging topics in statistical modeling for biomedical research. Leading experts in the frontiers of biostatistics and biomedical research discuss the statistical procedures, useful methods, and their novel applications in biostatistics research. Interdisciplinary in scope, the volume as a whole reflects the latest advances in statistical modeling in biomedical research, identifies impactful new directions, and seeks to drive the field forward. It also fosters the interaction of scholars in the arena, offering great opportunities to stimulate further collaborations. This book will appeal to industry data scientists and statisticians, researchers, and graduate students in biostatistics and biomedical science. It covers topics in: Next generation sequence data analysis Deep learning, precision medicine, and their applications Large scale data analysis and its applications Biomedical research and modeling Survival analysis with complex data structure and its applications.
Suitable for anyone who enjoys logic puzzles, this book could also be used as a companion text for a course on mathematical proof, since the puzzles feature the same issues of problem-solving and proof-writing. It will appeal to anyone who enjoys logical puzzles, anyone interested in legal reasoning, and anyone who loves the game of baseball.
This book covers the computational aspects of psychometric methods involved in developing measurement instruments and analyzing measurement data in social sciences. It covers the main topics of psychometrics such as validity, reliability, item analysis, item response theory models, and computerized adaptive testing. The computational aspects comprise the statistical theory and models, comparison of estimation methods and algorithms, as well as an implementation with practical data examples in R and also in an interactive ShinyItemAnalysis application. Key Features: Statistical models and estimation methods involved in psychometric research Includes reproducible R code and examples with real datasets Interactive implementation in ShinyItemAnalysis application The book is targeted toward a wide range of researchers in the field of educational, psychological, and health-related measurements. It is also intended for those developing measurement instruments and for those collecting and analyzing data from behavioral measurements, who are searching for a deeper understanding of underlying models and further development of their analytical skills.
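As a toy illustration of the classical-test-theory reliability the blurb mentions, Cronbach's alpha can be computed directly from an item-response matrix. The data and numbers below are hypothetical, and the book's own worked examples use R and the ShinyItemAnalysis application rather than this sketch:

```python
import numpy as np

# Hypothetical item-response matrix (rows = persons, columns = items);
# the values are invented purely for illustration.
X = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
], dtype=float)

k = X.shape[1]                          # number of items
item_vars = X.var(axis=0, ddof=1)       # per-item sample variances
total_var = X.sum(axis=1).var(ddof=1)   # variance of the total scores
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Higher alpha indicates that the items covary strongly relative to the total-score variance, i.e. the scale is internally consistent.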
A "health disparity" refers to a higher burden of illness, injury, disability, or mortality experienced by one group relative to another. These disparities may be due to many factors including age, income, race, etc. This book will focus on their estimation, ranging from classical approaches including the quantification of a disparity, to more formal modelling, to modern approaches involving more flexible computational approaches. Features: * Presents an overview of methods and applications of health disparity estimation * First book to synthesize research in this field in a unified statistical framework * Covers classical approaches, and builds to more modern computational techniques * Includes many worked examples and case studies using real data * Discusses available software for estimation The book is designed primarily for researchers and graduate students in biostatistics, data science, and computer science. It will also be useful to many quantitative modelers in genetics, biology, sociology, and epidemiology.
The concept of ridges has appeared numerous times in the image processing literature. Sometimes the term is used in an intuitive sense. Other times a concrete definition is provided. In almost all cases the concept is used for very specific applications. When analyzing images or data sets, it is very natural for a scientist to measure critical behavior by considering maxima or minima of the data. These critical points are relatively easy to compute. Numerical packages always provide support for root finding or optimization, whether it be through bisection, Newton's method, the conjugate gradient method, or other standard methods. It has not been natural for scientists to consider critical behavior in a higher-order sense. The concept of a ridge as a manifold of critical points is a natural extension of the concept of a local maximum as an isolated critical point. However, almost no attention has been given to formalizing the concept. There is a need for a formal development. There is a need for understanding the computational issues that arise in the implementations. The purpose of this book is to address both needs by providing a formal mathematical foundation and a computational framework for ridges. The intended audience for this book includes anyone interested in exploring the usefulness of ridges in data analysis.
The theory of Markov Decision Processes - also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming - studies sequential optimization of discrete-time stochastic systems. Fundamentally, this is a methodology that examines and analyzes a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines a stochastic process and values of objective functions associated with this process. The objective is to select a "good" control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, and (ii) they have an impact on the future, by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. Markov Decision Processes (MDPs) model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation. MDPs are attractive to many researchers because they are important both from the practical and the intellectual points of view. MDPs provide tools for the solution of important real-life problems. In particular, many business and engineering applications use MDP models. Analysis of various problems arising in MDPs leads to a large variety of interesting mathematical and computational problems. Accordingly, the Handbook of Markov Decision Processes is split into three parts: Part I deals with models with finite state and action spaces, Part II with infinite state problems, and Part III with specific applications. Individual chapters are written by leading experts on the subject.
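The trade-off the blurb describes between immediate profit and future impact is exactly what value iteration resolves. The following is a minimal sketch on a made-up two-state, two-action MDP; all transition probabilities, rewards, and the discount factor are illustrative assumptions, not taken from the handbook:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP.
# P[a, s, s'] = probability of moving from state s to s' under action a.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.7, 0.3]],   # action 1
])
# R[a, s] = immediate reward for taking action a in state s.
R = np.array([
    [1.0, 0.0],
    [2.0, -1.0],
])
gamma = 0.9  # discount factor weighting future impact against immediate reward

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * (P @ V)       # Q[a, s]: reward now plus discounted future value
    V_new = Q.max(axis=0)         # act greedily with respect to Q
    if np.abs(V_new - V).max() < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)         # a "good" stationary policy, one action per state
```

At convergence V satisfies the Bellman optimality equation, so the greedy policy balances immediate reward against its effect on future dynamics.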
Statistical methods are becoming more important in all biological fields of study. Biometry deals with the application of mathematical techniques to the quantitative study of varying characteristics of organisms, populations, species, etc. This book uses examples based on genuine data carefully chosen by the author for their special biological significance. The chapters cover a broad spectrum of topics and bridge the gap between introductory biological statistics and advanced approaches such as multivariate techniques and nonlinear models. A set of statistical tables most frequently used in biometry completes the book.
The main theme of this monograph is "comparative statistical inference." While the topics covered have been carefully selected (they are, for example, restricted to problems of statistical estimation), my aim is to provide ideas and examples which will assist a statistician, or a statistical practitioner, in comparing the performance one can expect from using either Bayesian or classical (aka, frequentist) solutions in estimation problems. Before investing the hours it will take to read this monograph, one might well want to know what sets it apart from other treatises on comparative inference. The two books that are closest to the present work are the well-known tomes by Barnett (1999) and Cox (2006). These books do indeed consider the conceptual and methodological differences between Bayesian and frequentist methods. What is largely absent from them, however, are answers to the question: "which approach should one use in a given problem?" It is this latter issue that this monograph is intended to investigate. There are many books on Bayesian inference, including, for example, the widely used texts by Carlin and Louis (2008) and Gelman, Carlin, Stern and Rubin (2004). These books differ from the present work in that they begin with the premise that a Bayesian treatment is called for and then provide guidance on how a Bayesian analysis should be executed. Similarly, there are many books written from a classical perspective.
This book describes the properties of stochastic point process models and develops the applied mathematics of stochastic point processes. It is useful to students and research workers in probability and statistics and also to research workers wishing to apply stochastic point processes.
Nevanlinna-Pick interpolation for time-varying input-output maps: The discrete case.- 0. Introduction.- 1. Preliminaries.- 2. J-unitary operators on ℓ2.- 3. Time-varying Nevanlinna-Pick interpolation.- 4. Solution of the time-varying tangential Nevanlinna-Pick interpolation problem.- 5. An illustrative example.- References.- Nevanlinna-Pick interpolation for time-varying input-output maps: The continuous time case.- 0. Introduction.- 1. Generalized point evaluation.- 2. Bounded input-output maps.- 3. Residue calculus and diagonal expansion.- 4. J-unitary and J-inner operators.- 5. Time-varying Nevanlinna-Pick interpolation.- 6. An example.- References.- Dichotomy of systems and invertibility of linear ordinary differential operators.- 1. Introduction.- 2. Preliminaries.- 3. Invertibility of differential operators on the real line.- 4. Relations between operators on the full line and half line.- 5. Fredholm properties of differential operators on a half line.- 6. Fredholm properties of differential operators on a full line.- 7. Exponentially dichotomous operators.- 8. References.- Inertia theorems for block weighted shifts and applications.- 1. Introduction.- 2. One sided block weighted shifts.- 3. Dichotomies for left systems and two sided systems.- 4. Two sided block weighted shifts.- 5. Asymptotic inertia.- 6. References.- Interpolation for upper triangular operators.- 1. Introduction.- 2. Preliminaries.- 3. Colligations & characteristic functions.- 4. Towards interpolation.- 5. Explicit formulas for ?.- 6. Admissibility and more on general interpolation.- 7. Nevanlinna-Pick interpolation.- 8. Carathéodory-Fejér interpolation.- 9. Mixed interpolation problems.- 10. Examples.- 11. Block Toeplitz & some implications.- 12. Varying coordinate spaces.- 13. References.- Minimality and realization of discrete time-varying systems.- 1. Preliminaries.- 2. Observability and reachability.- 3. Minimality for time-varying systems.- 4. Proofs of the minimality theorems.- 5. Realizations of infinite lower triangular matrices.- 6. The class of systems with constant state space dimension.- 7. Minimality and realization for periodical systems.- References.
Covering CUSUMs from an application-oriented viewpoint, while also providing the essential theoretical underpinning, this is an accessible guide for anyone with a basic statistical training. The text is aimed at quality practitioners, teachers and students of quality methodologies, and people interested in analysis of time-ordered data. Further support is available from a Web site containing CUSUM software and data sets.
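To give a flavor of the technique, a one-sided tabular CUSUM accumulates deviations above a target mean and signals when the cumulative sum crosses a decision limit. The parameter names (`mu0`, `k`, `h`), the data, and the chosen values below are invented for illustration; they are not taken from the book or its web site:

```python
# Minimal sketch of an upper one-sided tabular CUSUM for detecting an
# upward shift in the process mean (illustrative parameters only).
def upper_cusum(xs, mu0, k, h):
    """Return the CUSUM statistics and the index of the first signal (or None)."""
    s = 0.0
    stats, signal = [], None
    for i, x in enumerate(xs):
        s = max(0.0, s + (x - mu0 - k))   # accumulate deviations above mu0 + k
        stats.append(s)
        if signal is None and s > h:
            signal = i                     # first out-of-control observation
    return stats, signal

# Time-ordered data whose mean shifts upward partway through.
data = [0.1, -0.2, 0.0, 0.3, 1.2, 1.0, 1.4, 0.9]
stats, first = upper_cusum(data, mu0=0.0, k=0.5, h=2.0)
```

The reference value `k` (often half the shift one wants to detect) makes the statistic drift down during in-control periods, so only a sustained shift accumulates to the limit `h`.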
High dimensional probability, in the sense that encompasses the topics represented in this volume, began about thirty years ago with research in two related areas: limit theorems for sums of independent Banach space valued random vectors and general Gaussian processes. An important feature in these past research studies has been the fact that they highlighted the essential probabilistic nature of the problems considered. In part, this was because, by working on a general Banach space, one had to discard the extra, and often extraneous, structure imposed by random variables taking values in a Euclidean space, or by processes being indexed by sets in R or R^d. Doing this led to striking advances, particularly in Gaussian process theory. It also led to the creation or introduction of powerful new tools, such as randomization, decoupling, moment and exponential inequalities, chaining, isoperimetry and concentration of measure, which apply to areas well beyond those for which they were created. The general theory of empirical processes, with its vast applications in statistics, the study of local times of Markov processes, certain problems in harmonic analysis, and the general theory of stochastic processes are just several of the broad areas in which Gaussian process techniques and techniques from probability in Banach spaces have made a substantial impact. Parallel to this work on probability in Banach spaces, classical probability and empirical process theory were enriched by the development of powerful results in strong approximations.
This book is concerned with important problems of robust (stable) statistical pattern recognition when hypothetical model assumptions about experimental data are violated (disturbed). Pattern recognition theory is the field of applied mathematics in which principles and methods are constructed for classification and identification of objects, phenomena, processes, situations, and signals, i.e., of objects that can be specified by a finite set of features, or properties characterizing the objects (Mathematical Encyclopedia (1984)). Two stages in development of the mathematical theory of pattern recognition may be observed. At the first stage, until the middle of the 1970s, pattern recognition theory was replenished mainly from adjacent mathematical disciplines: mathematical statistics, functional analysis, discrete mathematics, and information theory. This development stage is characterized by successful solution of pattern recognition problems of different physical nature, but of the simplest form in the sense of the mathematical models used. One of the main approaches to solve pattern recognition problems is the statistical approach, which uses stochastic models of feature variables. Under the statistical approach, the first stage of pattern recognition theory development is characterized by the assumption that the probability data model is known exactly or is estimated from a representative sample of large size with negligible estimation errors (Das Gupta, 1973, 1977), (Rey, 1978), (Vasiljev, 1983).
Harmonic analysis and probability have long enjoyed a mutually beneficial relationship that has been rich and fruitful. This monograph, aimed at researchers and students in these fields, explores several aspects of this relationship. The primary focus of the text is the nontangential maximal function and the area function of a harmonic function and their probabilistic analogues in martingale theory. The text first gives the requisite background material from harmonic analysis and discusses known results concerning the nontangential maximal function and area function, as well as the central and essential role these have played in the development of the field.The book next discusses further refinements of traditional results: among these are sharp good-lambda inequalities and laws of the iterated logarithm involving nontangential maximal functions and area functions. Many applications of these results are given. Throughout, the constant interplay between probability and harmonic analysis is emphasized and explained. The text contains some new and many recent results combined in a coherent presentation.
This volume has been created in honor of the seventieth birthday of Ted Harris, which was celebrated on January 11th, 1989. The papers represent the wide range of subfields of probability theory in which Ted has made profound and fundamental contributions. This breadth in Ted's research complicates the task of putting together in his honor a book with a unified theme. One common thread noted was the spatial, or geometric, aspect of the phenomena Ted investigated. This volume has been organized around that theme, with papers covering four major subject areas of Ted's research: branching processes, percolation, interacting particle systems, and stochastic flows. These four topics do not exhaust his research interests; his major work on Markov chains is commemorated in the standard terminology "Harris chain" and "Harris recurrent". The editors would like to take this opportunity to thank the speakers at the symposium and the contributors to this volume. Their enthusiastic support is a tribute to Ted Harris. We would like to express our appreciation to Annette Mosley for her efforts in typing the manuscripts and to Arthur Ogawa for typesetting the volume. Finally, we gratefully acknowledge the National Science Foundation and the University of Southern California for their financial support.
During the second half of the 20th century, Murray Rosenblatt was one of the most celebrated and leading figures in probability and statistics. Among his many contributions, Rosenblatt conducted seminal work on density estimation, central limit theorems under strong mixing conditions, spectral domain methodology, long memory processes and Markov processes. He has published over 130 papers and 5 books, many as relevant today as when they first appeared decades ago. Murray Rosenblatt was one of the founding members of the Department of Mathematics at the University of California at San Diego (UCSD) and served as advisor to over twenty PhD students. He maintains a close association with UCSD in his role as Professor Emeritus. This volume is a celebration of Murray Rosenblatt's stellar research career that spans over six decades, and includes some of his most interesting and influential papers. Several leading experts provide commentary and reflections on various directions of Murray's research portfolio.
This book describes some recent trends in growth curve model (GCM) research on different subject areas, both theoretical and applied. This includes tools and possibilities for further work through new techniques and modification of existing ones. A growth curve is an empirical model of the evolution of a quantity over time. Growth curves in longitudinal studies are used in disciplines including biology, statistics, population studies, economics, biological sciences, sociology, nano-biotechnology, and fluid mechanics. The volume includes original studies, theoretical findings and case studies from a wide range of applied work. This volume builds on presentations from a GCM workshop held at the Indian Statistical Institute, Giridih, January 18-19, 2014. This book follows the volume Advances in Growth Curve Models, published by Springer in 2013. The results have meaningful application in health care, prediction of crop yield, child nutrition, poverty measurements, estimation of growth rate, and other research areas.