The place in survival analysis now occupied by proportional hazards models and their generalizations is so large that it is no longer conceivable to offer a course on the subject without devoting at least half of the content to this topic alone. This book focuses on the theory and applications of a very broad class of models - proportional hazards and non-proportional hazards models, the former being viewed as a special case of the latter - which underlie modern survival analysis. Researchers and students alike will find that this text differs from most recent works in that it is mostly concerned with methodological issues rather than the analysis itself.
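For readers new to the terminology, the proportional hazards model mentioned above can be written compactly (standard notation, not an excerpt from the book): covariates act multiplicatively on a baseline hazard, and the non-proportional case lets the coefficients vary with time.

```latex
% Proportional hazards: covariates scale a common baseline hazard h_0(t),
% so the hazard ratio between two covariate values is constant in t.
h(t \mid x) = h_0(t)\,\exp(\beta^\top x)
% Non-proportional generalization: time-varying coefficients \beta(t),
% of which the proportional model is the special case \beta(t) \equiv \beta.
h(t \mid x) = h_0(t)\,\exp(\beta(t)^\top x)
```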
In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, including theory and applications reflecting the current state of the art. Each chapter is written by experts on the respective topics, including: Sets of desirable gambles; Coherent lower (conditional) previsions; Special cases and links to literature; Decision making; Graphical models; Classification; Reliability and risk assessment; Statistical inference; Structural judgments; Aspects of implementation (including elicitation and computation); Models in finance; Game-theoretic probability; Stochastic processes (including Markov chains); Engineering applications. Essential reading for researchers in academia, research institutes and other organizations, as well as practitioners engaged in areas such as risk analysis and engineering.
Linear regression is an important area of statistics, both theoretical and applied. A large number of estimation methods have been proposed and developed for linear regression. Each has its own competitive edge, but none is good for all purposes. This manuscript focuses on the construction of an adaptive combination of two estimation methods. The purpose of such adaptive methods is to help users make an objective choice and to combine the desirable properties of two estimators.
Apart from the underlying theme that all the contributions to this volume pertain to models set in an infinite dimensional space, they differ on many counts. Some were written in the early seventies while others are reports of ongoing research done especially with this volume in mind. Some are surveys of material that can, at least at this point in time, be deemed to have attained a satisfactory solution of the problem, while others represent initial forays into an original and novel formulation. Some furnish alternative proofs of known, and by now, classical results, while others can be seen as groping towards and exploring formulations that have not yet reached a definitive form. The subject matter also has a wide leeway, ranging from solution concepts for economies to those for games and also including representation of preferences and discussion of purely mathematical problems, all within the rubric of choice variables belonging to an infinite dimensional space, interpreted as a commodity space or as a strategy space. Thus, this is a collective enterprise in a fairly wide sense of the term and one with the diversity of which we have interfered as little as possible. Our motivation for bringing all of this work under one set of covers was severalfold.
This book is devoted to Corrado Gini, father of the Italian statistical school. It celebrates the 50th anniversary of his death by bearing witness to the continuing extraordinary scientific relevance of his interdisciplinary interests. The book comprises a selection of the papers presented at the conference of the Italian Statistical Society, Statistics and Demography - the Legacy of Corrado Gini, held in Treviso in September 2015. The work covers many topics linked to Gini's scientific legacy, ranging from the theory of statistical inference to multivariate statistical analysis, demography and sociology. In this volume, readers will find many interesting contributions on entropy measures, permutation procedures for the heterogeneity test, robust estimation of skew-normal parameters, S-weighted estimator, measures of multidimensional performance using Gini's delta, small-sample confidence intervals for Gini's gamma index, Bayesian estimation of the Gini-Simpson index, spatial residential patterns of selected foreign groups, minority segregation processes, dynamic time warping to study cruise tourism, and financial stress spill over. This book will appeal to all statisticians, demographers, economists, and sociologists interested in the field.
This book gives a comprehensive review of results for associated sequences and demimartingales developed so far, with special emphasis on demimartingales and related processes. Probabilistic properties of associated sequences, demimartingales and related processes are discussed in the first six chapters. Applications of some of these results to some problems in nonparametric statistical inference for such processes are investigated in the last three chapters.
The Handbook is a definitive reference source and teaching aid for econometricians. It examines models, estimation theory, data analysis and field applications in econometrics. Comprehensive surveys, written by experts, discuss recent developments at a level suitable for professional use by economists, econometricians, statisticians, and in advanced graduate econometrics courses. For more information on the Handbooks in Economics series, please see our home page on http://www.elsevier.nl/locate/hes
The finite element method is a numerical method widely used in engineering. This reference text is the first to discuss finite element methods for structures with large stochastic variations. Graduate students, lecturers, and researchers in mathematics, engineering, and scientific computation will find this a very useful reference.
VLSI CAD has greatly benefited from the use of reduced ordered Binary Decision Diagrams (BDDs) and the clausal representation as a problem of Boolean Satisfiability (SAT), e.g. in logic synthesis, verification or design-for-testability. In recent practical applications, BDDs are optimized with respect to new objective functions for design space exploration. The latest trends show a growing number of proposals to fuse the concepts of BDD and SAT. This book gives a modern presentation of the established as well as of recent concepts. Latest results in BDD optimization are given, covering different aspects of paths in BDDs and the use of efficient lower bounds during optimization. The presented algorithms include Branch and Bound and the generic A*-algorithm as efficient techniques to explore large search spaces. The A*-algorithm originates from Artificial Intelligence (AI), and the EDA community has been unaware of this concept for a long time. Recently, the A*-algorithm has been introduced as a new paradigm to explore design spaces in VLSI CAD. Besides AI search techniques, the book also discusses the relation to another field of activity bordering on VLSI CAD and BDD optimization: the clausal representation as a SAT problem.
This book provides a comprehensive review of environmental benefit transfer methods, issues and challenges, covering topics relevant to researchers and practitioners. Early chapters provide accessible introductory materials suitable for non-economists. These chapters also detail how benefit transfer is used within the policy process. Later chapters cover more advanced topics suited to valuation researchers, graduate students and those with similar knowledge of economic and statistical theory and methods. This book provides the most complete coverage of environmental benefit transfer methods available in a single location. The book targets a wide audience, including undergraduate and graduate students, practitioners in economics and other disciplines looking for a one-stop handbook covering benefit transfer topics, and those who wish to apply or evaluate benefit transfer methods. It is designed for those both with and without training in economics.
Multiparameter processes extend the existing one-parameter theory of random processes in an elegant way, and have found connections to diverse disciplines such as probability theory, real and functional analysis, group theory, analytic number theory, and group renormalization in mathematical physics, to name a few. This book lays the foundation of aspects of the rapidly developing subject of random fields, and is designed for a second graduate course in probability and beyond. Its intended audience is pure, as well as applied, mathematicians.
In earlier forewords to the books in this series on Discrete Event Dynamic Systems (DEDS), we have dwelt on the pervasive nature of DEDS in our human-made world. From manufacturing plants to computer/communication networks, from traffic systems to command-and-control, modern civilization cannot function without the smooth operation of such systems. Yet mathematical tools for the analysis and synthesis of DEDS are nascent when compared to the well developed machinery of the continuous variable dynamic systems characterized by differential equations. The performance evaluation tool of choice for DEDS is discrete event simulation, both on account of its generality and its explicit incorporation of randomness. As is well known to students of simulation, the heart of the random event simulation is the uniform random number generator. Not so well known to the practitioners are the philosophical and mathematical bases of generating "random" number sequences from deterministic algorithms. This editor can still recall his own painful introduction to the issues during the early 80's when he attempted to do the first perturbation analysis (PA) experiments on a personal computer which, unbeknownst to him, had a random number generator with a period of only 32,768 numbers. It is no exaggeration to say that the development of PA was derailed for some time due to this ignorance of the fundamentals of random number generation.
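The period pitfall in that anecdote is easy to reproduce. The sketch below builds a linear congruential generator with modulus 2^15 and counts how many draws it takes to cycle; the parameters a=77, c=1 are illustrative choices satisfying the Hull-Dobell full-period conditions, not the generator from the editor's machine.

```python
def lcg(seed, a=77, c=1, m=2**15):
    """A linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def period(seed=1, m=2**15):
    """Number of draws before the generator revisits a state."""
    seen = {}
    for i, x in enumerate(lcg(seed, m=m)):
        if x in seen:
            return i - seen[x]
        seen[x] = i

# With a=77, c=1 the Hull-Dobell conditions hold, so the period is the
# full modulus: only 32,768 distinct values, as in the anecdote above.
print(period())
```

Any simulation longer than the period silently reuses the same "random" numbers, which is exactly what derailed the early PA experiments.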
Steady progress in recent years has been made in understanding the special mathematical features of certain exactly solvable models in statistical mechanics and quantum field theory, including the scaling limits of the 2-D Ising (lattice) model, and more generally, a class of 2-D quantum fields known as holonomic fields. New results have made it possible to obtain a detailed nonperturbative analysis of the multi-spin correlations. In particular, the book focuses on deformation analysis of the scaling functions of the Ising model, and will appeal to graduate students, mathematicians, and physicists interested in the mathematics of statistical mechanics and quantum field theory.
Robust statistics is an extension of classical statistics that specifically takes into account the concept that the underlying models used to describe data are only approximate. Its basic philosophy is to produce statistical procedures which are stable when the data do not exactly match the postulated models, as is the case, for example, with outliers. "Robust Methods in Biostatistics" proposes robust alternatives to common methods used in statistics in general and in biostatistics in particular, and illustrates their use on many biomedical datasets. The methods introduced include robust estimation, testing, model selection, model checking and diagnostics. They are developed for the following general classes of models: linear regression; generalized linear models; linear mixed models; marginal longitudinal data models; and the Cox survival analysis model. The methods are introduced both at a theoretical and applied level within the framework of each general class of models, with a particular emphasis put on practical data analysis. This book is of particular use for research students, applied statisticians and practitioners in the health field interested in more stable statistical techniques. An accompanying website provides R code for computing all of the methods described, as well as for analyzing all the datasets used in the book.
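The stability idea is easy to see numerically. The sketch below is a generic illustration of robust estimation, not code from the book or its website: a single gross outlier drags the sample mean, while a Huber-type M-estimator of location barely moves.

```python
import numpy as np

def huber_mean(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimator of location via iteratively reweighted means."""
    mu = np.median(x)
    s = 1.4826 * np.median(np.abs(x - mu))  # robust scale from the MAD
    for _ in range(max_iter):
        r = np.abs(x - mu) / s
        w = np.minimum(1.0, k / np.maximum(r, 1e-12))  # downweight outliers
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 99), [100.0]])  # one gross outlier
print(np.mean(x))     # pulled visibly toward the outlier
print(huber_mean(x))  # stays near the true centre, 0
```

The weights w shrink the influence of observations far from the current estimate, which is the mechanism behind the "stable under approximate models" philosophy described above.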
The book deals with some of the fundamental issues of risk assessment in grid computing environments. It describes the development of a hybrid probabilistic and possibilistic model for assessing the success of a computing task in a grid environment.
Single Subject Designs in Biomedicine draws upon the rich history of single case research within educational and behavioral research settings and extends the application to the field of biomedicine. Biomedical illustrations are used to demonstrate the processes of designing, implementing, and evaluating a single subject design. Strengths and limitations of various methodologies are presented, along with specific clinical areas in which these applications would be appropriate. Statistical and visual techniques for data analysis are also discussed. The breadth and depth of information provided is suitable for medical students in research-oriented courses, primary care practitioners and medical specialists seeking to apply methods of evidence-based practice to improve patient care, and medical researchers who are expanding their methodological expertise to include single subject designs. Increased awareness of the utility of the single subject design could enhance treatment approach and evaluation in both biomedical research and medical care settings.
The contributions in this book focus on a variety of topics related to discrepancy theory, comprising Fourier techniques to analyze discrepancy, low discrepancy point sets for quasi-Monte Carlo integration, probabilistic discrepancy bounds, dispersion of point sets, pair correlation of sequences, integer points in convex bodies, discrepancy with respect to geometric shapes other than rectangular boxes, and also open problems in discrepancy theory.
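As a small, self-contained illustration of the low-discrepancy point sets mentioned above (a generic sketch, not material from the volume), the base-2 van der Corput sequence fills [0, 1] evenly, and an equal-weight average over its points integrates smooth functions accurately:

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the van der Corput sequence in the given base."""
    points = []
    for i in range(1, n + 1):
        x, f, k = 0.0, 1.0 / base, i
        while k > 0:
            x += (k % base) * f  # mirror the digits of i about the radix point
            k //= base
            f /= base
        points.append(x)
    return np.array(points)

# Quasi-Monte Carlo estimate of the integral of x**2 on [0, 1] (true value 1/3).
pts = van_der_corput(1024)
print(abs((pts ** 2).mean() - 1 / 3))  # error below 1e-3 at n = 1024
```

For a plain Monte Carlo sample of the same size the typical error is on the order of 1/sqrt(n), which is what motivates quasi-Monte Carlo integration.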
The concept of ridges has appeared numerous times in the image processing literature. Sometimes the term is used in an intuitive sense. Other times a concrete definition is provided. In almost all cases the concept is used for very specific applications. When analyzing images or data sets, it is very natural for a scientist to measure critical behavior by considering maxima or minima of the data. These critical points are relatively easy to compute. Numerical packages always provide support for root finding or optimization, whether it be through bisection, Newton's method, conjugate gradient method, or other standard methods. It has not been natural for scientists to consider critical behavior in a higher-order sense. The concept of ridge as a manifold of critical points is a natural extension of the concept of local maximum as an isolated critical point. However, almost no attention has been given to formalizing the concept. There is a need for a formal development. There is a need for understanding the computation issues that arise in the implementations. The purpose of this book is to address both needs by providing a formal mathematical foundation and a computational framework for ridges. The intended audience for this book includes anyone interested in exploring the usefulness of ridges in data analysis.
Computational inference is based on an approach to statistical methods that uses modern computational power to simulate distributional properties of estimators and test statistics. This book describes computationally intensive statistical methods in a unified presentation, emphasizing techniques, such as the PDF decomposition, that arise in a wide range of methods.
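A minimal bootstrap, the canonical example of such simulation-based inference, makes the idea concrete (a generic sketch, unrelated to the book's PDF-decomposition material): instead of a closed-form formula, the sampling distribution of an estimator is simulated by resampling.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.exponential(scale=2.0, size=200)  # observed data

# Resample with replacement many times and recompute the statistic each time;
# the spread of the replicates approximates the estimator's sampling variability.
boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(2000)
])

print(np.median(sample))             # point estimate
print(boot_medians.std(ddof=1))      # bootstrap standard error of the median
print(np.percentile(boot_medians, [2.5, 97.5]))  # percentile 95% interval
```

The same pattern applies to test statistics: simulate under resampling (or under a null model) and read off the quantiles that a closed-form theory would otherwise have to supply.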
Noted for its integration of real-world data and case studies, this text offers sound coverage of the theoretical aspects of mathematical statistics. The authors demonstrate how and when to use statistical methods, while reinforcing the calculus that students have mastered in previous courses. Throughout the Fifth Edition, the authors have added and updated examples and case studies, while also refining existing features that show a clear path from theory to practice.
The main theme of this monograph is "comparative statistical inference." While the topics covered have been carefully selected (they are, for example, restricted to problems of statistical estimation), my aim is to provide ideas and examples which will assist a statistician, or a statistical practitioner, in comparing the performance one can expect from using either Bayesian or classical (aka, frequentist) solutions in estimation problems. Before investing the hours it will take to read this monograph, one might well want to know what sets it apart from other treatises on comparative inference. The two books that are closest to the present work are the well-known tomes by Barnett (1999) and Cox (2006). These books do indeed consider the conceptual and methodological differences between Bayesian and frequentist methods. What is largely absent from them, however, are answers to the question: "which approach should one use in a given problem?" It is this latter issue that this monograph is intended to investigate. There are many books on Bayesian inference, including, for example, the widely used texts by Carlin and Louis (2008) and Gelman, Carlin, Stern and Rubin (2004). These books differ from the present work in that they begin with the premise that a Bayesian treatment is called for and then provide guidance on how a Bayesian analysis should be executed. Similarly, there are many books written from a classical perspective.
A timely and applied approach to the newly discovered methods and applications of U-statistics. Built on years of collaborative research and academic experience, Modern Applied U-Statistics successfully presents a thorough introduction to the theory of U-statistics using in-depth examples and applications that address contemporary areas of study including biomedical and psychosocial research. Utilizing a "learn by example" approach, this book provides an accessible, yet in-depth, treatment of U-statistics, and addresses key concepts in asymptotic theory by integrating translational and cross-disciplinary research. The authors begin with an introduction to the essential and theoretical foundations of U-statistics such as the notion of convergence in probability and distribution, basic convergence results, stochastic Os, inference theory, generalized estimating equations, as well as the definition and asymptotic properties of U-statistics. With an emphasis on nonparametric applications when and where applicable, the authors then build upon this established foundation in order to equip readers with the knowledge needed to understand the modern-day extensions of U-statistics that are explored in subsequent chapters. Additional topical coverage includes: longitudinal data modeling with missing data; parametric and distribution-free mixed-effect and structural equation models; a new multi-response based regression framework for non-parametric statistics such as the product moment correlation, Kendall's tau, and Mann-Whitney-Wilcoxon rank tests; and a new class of U-statistic-based estimating equations (UBEE) for dependent responses. Motivating examples, in-depth illustrations of statistical and model-building concepts, and an extensive discussion of longitudinal study designs strengthen the real-world utility and comprehension of this book. An accompanying Web site features SAS(R) and S-Plus(R) program codes, software applications, and additional study data.
Modern Applied U-Statistics accommodates second- and third-year students of biostatistics at the graduate level and also serves as an excellent self-study for practitioners in the fields of bioinformatics and psychosocial research.
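To make the definition concrete, here is the standard textbook example of a U-statistic (a generic illustration, not code from the book's website): with the symmetric kernel h(x, y) = (x - y)^2 / 2, averaging over all unordered pairs reproduces the unbiased sample variance exactly.

```python
from itertools import combinations
import numpy as np

def u_statistic(data, kernel):
    """Average a symmetric kernel over all unordered pairs of observations."""
    pairs = list(combinations(data, 2))
    return sum(kernel(a, b) for a, b in pairs) / len(pairs)

rng = np.random.default_rng(1)
x = rng.normal(size=50)

# Kernel h(a, b) = (a - b)**2 / 2 has expectation equal to the variance,
# so the U-statistic is an unbiased variance estimator.
u = u_statistic(x, lambda a, b: (a - b) ** 2 / 2)
print(u)                  # agrees with np.var(x, ddof=1)
print(np.var(x, ddof=1))
```

Averaging a fixed kernel over all subsets of the sample is the general recipe; the asymptotic theory the book develops concerns exactly such averages.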
During the last decades, there has been an explosion in computation and information technology. This development comes with an expansion of complex observational studies and clinical trials in a variety of fields such as medicine, biology, epidemiology, sociology, and economics among many others, which involve collection of large amounts of data on subjects or organisms over time. The goal of such studies can be formulated as estimation of a finite dimensional parameter of the population distribution corresponding to the observed time-dependent process. Such estimation problems arise in survival analysis, causal inference and regression analysis. This book provides a fundamental statistical framework for the analysis of complex longitudinal data. It provides the first comprehensive description of optimal estimation techniques based on time-dependent data structures subject to informative censoring and treatment assignment in so called semiparametric models. Semiparametric models are particularly attractive since they allow the presence of large unmodeled nuisance parameters. These techniques include estimation of regression parameters in the familiar (multivariate) generalized linear regression and multiplicative intensity models. They go beyond standard statistical approaches by incorporating all the observed data to allow for informative censoring, to obtain maximal efficiency, and by developing estimators of causal effects. It can be used to teach masters and Ph.D. students in biostatistics and statistics and is suitable for researchers in statistics with a strong interest in the analysis of complex longitudinal data.
You may like...
Order Statistics: Applications, Volume… by Narayanaswamy Balakrishnan, C.R. Rao (Hardcover, R3,377 / Discovery Miles 33 770)
Fundamentals of Social Research Methods by Claire Bless, Craig Higson-Smith, … (Paperback)
Pearson Edexcel International A Level… by Joe Skrakowski, Harry Smith (Paperback, R969 / Discovery Miles 9 690)
Statistics For Business And Economics by David Anderson, James Cochran, … (Paperback)
Numbers, Hypotheses & Conclusions - A… by Colin Tredoux, Kevin Durrheim (Paperback)