This thesis presents a new method for following evolving interactions between coupled oscillatory systems of the kind that abound in nature. Examples range from the subcellular level, to ecosystems, through climate dynamics, to the movements of planets and stars. Such systems mutually interact, adjusting their internal clocks, and may correspondingly move between synchronized and non-synchronized states. The thesis describes a way of using Bayesian inference to exploit the presence of random fluctuations, thus analyzing these processes in unprecedented detail. It first develops the basic theory of interacting oscillators whose frequencies are non-constant, and then applies it to the human heart and lungs as an example. Their coupling function can be used to follow with great precision the transitions into and out of synchronization. The method described has the potential to illuminate the ageing process as well as to improve diagnostics in cardiology, anesthesiology and neuroscience, and yields insights into a wide diversity of natural processes.
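The flavour of the approach can be sketched in a few lines of Python. This toy uses plain least squares rather than the Bayesian inference developed in the thesis, and the model form, parameter values and seed are illustrative assumptions: simulate two noisy phase oscillators with a sinusoidal coupling, then recover the coupling strength from the observed phase increments.

```python
# Toy sketch (not the thesis's method): simulate two noisy coupled phase
# oscillators and recover the coupling strength by least squares.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 20000
w1, w2, eps = 1.0, 1.3, 0.2          # natural frequencies and coupling (illustrative)
phi1, phi2 = np.zeros(n), np.zeros(n)
for t in range(n - 1):
    phi1[t+1] = phi1[t] + dt*(w1 + eps*np.sin(phi2[t] - phi1[t])) + 0.05*np.sqrt(dt)*rng.normal()
    phi2[t+1] = phi2[t] + dt*w2 + 0.05*np.sqrt(dt)*rng.normal()

# Regress the phase increments of oscillator 1 on the coupling basis function.
dphi = np.diff(phi1) / dt
X = np.column_stack([np.ones(n - 1), np.sin(phi2[:-1] - phi1[:-1])])
w1_hat, eps_hat = np.linalg.lstsq(X, dphi, rcond=None)[0]
print(f"estimated frequency {w1_hat:.3f}, estimated coupling {eps_hat:.3f}")
```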
The subject of pattern analysis and recognition pervades many aspects of our daily lives, including user authentication in banking, object retrieval from databases in the consumer sector, and the omnipresent surveillance and security measures around sensitive areas. Shape analysis, a fundamental building block in many approaches to these applications, is also used in statistics, biomedical applications (Magnetic Resonance Imaging), and many other related disciplines. With contributions from some of the leading experts and pioneers in the field, this self-contained, unified volume is the first comprehensive treatment of theory, methods, and algorithms available in a single resource. Developments are discussed from a rapidly increasing number of research papers in diverse fields, including the mathematical and physical sciences, engineering, and medicine.
The aim of this book is to summarize probabilistic safety assessment (PSA) of nuclear power plants (NPPs), and to demonstrate that NPPs can be considered a safe method of producing energy, even in light of the Fukushima accident. The book examines Level 1 and Level 2 full power, low power and shutdown probabilistic safety assessment of WWER-440 reactors, and summarizes the author's experience gained during the last 35 years. It provides useful examples taken from PSA training courses delivered by the author and organized by the International Atomic Energy Agency. Such training courses were organized at Argonne National Laboratory (Chicago, IL, USA) and the Abdus Salam International Centre for Theoretical Physics (Trieste, Italy), as well as in Malaysia, Vietnam and Jordan, to support experts from developing countries. The role of PSA for NPPs is to estimate the risks in absolute terms and in comparison with other risks of the technical and natural world. Plant-specific PSAs are being prepared for NPPs and applied to the detection of weaknesses, design improvement and backfitting, incident analysis, accident management, emergency preparedness, prioritization of research and development, and support of regulatory activities. Three levels of PSA (Level 1, Level 2 and Level 3) are performed for the full power, low power and shutdown operating modes of the plant. The nuclear regulatory authorities do not require Level 3 PSA for NPPs in the member countries of the European Union, so only a limited number of NPPs in Europe have a Level 3 PSA available. However, in light of the Fukushima accident, the performance of such analyses is strongly recommended in the future. This book is intended for professionals working in the nuclear industry, and researchers and students interested in nuclear research.
Interacting Particle Systems is a branch of Probability Theory with close connections to Mathematical Physics and Mathematical Biology. In 1985, the author wrote a book (T. Liggett, Interacting Particle Systems, ISBN 3-540-96069) that treated the subject as it was at that time. The present book takes three of the most important models in the area, and traces advances in our understanding of them since 1985. In so doing, many of the most useful techniques in the field are explained and developed, so that they can be applied to other models and in other contexts. Extensive Notes and References sections discuss other work on these and related models. Readers are expected to be familiar with analysis and probability at the graduate level, but it is not assumed that they have mastered the material in the 1985 book. This book is intended for graduate students and researchers in Probability Theory, and in related areas of Mathematics, Biology and Physics.
Applied probability is a broad research area that is of interest to scientists in diverse disciplines in science and technology, including: anthropology, biology, communication theory, economics, epidemiology, finance, geography, linguistics, medicine, meteorology, operations research, psychology, quality control, sociology, and statistics. Recent Advances in Applied Probability is a collection of survey articles that bring together the work of leading researchers in applied probability to present current research advances in this important area. This volume will be of interest to graduate students and researchers whose work is closely connected to probability modelling and its applications. It is suitable for a one-semester graduate-level research seminar in applied probability.
This book covers the existing literature on trend analysis approaches, but more than 60% of the methodologies are developed here; some of them have appeared in the scientific literature, while others are innovative versions, modifications or improvements. The suggested methodologies help to design, develop, manage and deliver scientific applications and training to meet the needs of interested staff in companies, industries and universities, including students. Technical content and expertise are provided from different theoretical and, especially, active roles in the design, development and delivery of science in particular and of economics and business in general. Wherever possible and technically appropriate, priority is given to the inclusion and integration of real-life data, examples and processes within the book content. The time seems right, because the available books focus only on specific sectors (fashion, social, business). This book reviews all the available trend approaches in the present literature on rational and logical bases.
Statistical Tools for Nonlinear Regression, Second Edition, presents methods for analyzing data using parametric nonlinear regression models. The new edition has been expanded to include binomial, multinomial and Poisson nonlinear models. Using examples from experiments in agronomy and biochemistry, it shows how to apply these methods. It concentrates on presenting the methods in an intuitive way rather than developing the theoretical background. The examples are analyzed with the free software nls2, updated to deal with the new models included in the second edition. The nls2 package is implemented in S-PLUS and R. Its main advantage is that it makes the model building, estimation and validation tasks easy to do. More precisely, complex models can be easily described using a symbolic syntax. The regression function as well as the variance function can be defined explicitly as functions of independent variables and of unknown parameters, or they can be defined as the solution to a system of differential equations. Moreover, constraints on the parameters can easily be added to the model. It is thus possible to test nested hypotheses and to compare several data sets. Several additional tools are included in the package for calculating confidence regions for functions of parameters or calibration intervals, using classical methodology or the bootstrap. Some graphical tools are proposed for visualizing the fitted curves, the residuals, the confidence regions, and the numerical estimation procedure.
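The book's examples use nls2 under S-PLUS and R; as a rough Python analogue, a nonlinear least-squares fit might look as follows. The Michaelis-Menten model and the data values below are illustrative assumptions, not taken from the book.

```python
# Minimal nonlinear regression sketch: fit y = Vmax*x/(Km + x) by least squares.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(x, vmax, km):
    return vmax * x / (km + x)

x = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])
y = np.array([0.25, 0.51, 0.71, 0.94, 1.07, 1.15])   # illustrative data only

popt, pcov = curve_fit(michaelis_menten, x, y, p0=[1.0, 1.0])
stderr = np.sqrt(np.diag(pcov))                      # asymptotic standard errors
print("Vmax, Km:", popt, "+/-", stderr)
```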
About 8000 clinical trials are undertaken annually in all areas of medicine, from the treatment of acne to the prevention of cancer. Correct interpretation of the data from such trials depends largely on adequate design and on performing the appropriate statistical analyses. In this book, the statistical aspects of both the design and analysis of trials are described, with particular emphasis on recently developed methods of analysis.
After the first edition of this book was published in early 2005, the world changed dramatically and at a pace never seen before. The changes that occurred in 2008 and 2009 were completely unthinkable two years before. These changes took place not only in the finance sector, the origin of the crisis, but also, as a result, in other economic sectors like the automotive sector. Governments now own substantial parts, if not majorities, in banks or other companies which recorded losses of double-digit billions of USD in 2008. 2008 saw the collapse of leading stand-alone U.S. investment banks. In many countries interest rates fell close to zero. What has happened? While the economy showed strong growth in 2004 to 2006, the subprime or credit crisis changed the picture completely. What started in the U.S. housing market in late 2006 became a full-fledged global financial crisis and has affected financial markets around the world. A decline in U.S. house prices and increasing interest rates caused a higher rate of subprime mortgage delinquencies in the U.S. and, due to the wide distribution of securitized assets, had a negative effect on other markets. As a result, markets realized that risks had been underestimated and volatility increased. This development culminated in the bankruptcy of the investment bank Lehman Brothers in mid-September 2008.
The book is a comprehensive, self-contained introduction to the mathematical modeling and analysis of disease transmission models. It includes (i) an introduction to the main concepts of compartmental models including models with heterogeneous mixing of individuals and models for vector-transmitted diseases, (ii) a detailed analysis of models for important specific diseases, including tuberculosis, HIV/AIDS, influenza, Ebola virus disease, malaria, dengue fever and the Zika virus, (iii) an introduction to more advanced mathematical topics, including age structure, spatial structure, and mobility, and (iv) some challenges and opportunities for the future. There are exercises of varying degrees of difficulty, and projects leading to new research directions. For the benefit of public health professionals whose contact with mathematics may not be recent, there is an appendix covering the necessary mathematical background. There are indications which sections require a strong mathematical background so that the book can be useful for both mathematical modelers and public health professionals.
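As a taste of the compartmental models the book opens with, here is a minimal SIR sketch; the parameter values are illustrative assumptions rather than figures from the text.

```python
# Minimal SIR compartmental model: susceptible -> infected -> recovered.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1          # transmission and recovery rates; R0 = beta/gamma = 3

def sir(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir, (0, 160), [0.999, 0.001, 0.0], max_step=0.5)
print("peak infected fraction:", sol.y[1].max())
```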
Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the author's earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first.
Survival analysis arises in many fields of study including medicine, biology, engineering, public health, epidemiology, and economics. This book provides a comprehensive treatment of Bayesian survival analysis. Several topics are addressed, including parametric models, semiparametric models based on prior processes, proportional and non-proportional hazards models, frailty models, cure rate models, model selection and comparison, joint models for longitudinal and survival data, models with time-varying covariates, missing covariate data, design and monitoring of clinical trials, accelerated failure time models, models for multivariate survival data, and special types of hierarchical survival models. Various censoring schemes are also examined, including right- and interval-censored data. Several additional topics are discussed, including noninformative and informative prior specifications, computing posterior quantities of interest, Bayesian hypothesis testing, variable selection, model selection with nonnested models, model checking techniques using Bayesian diagnostic methods, and Markov chain Monte Carlo (MCMC) algorithms for sampling from the posterior and predictive distributions. The book presents a balance between theory and applications, and for each class of models discussed, detailed examples and analyses from case studies are presented whenever possible. The applications are all essentially from the health sciences, including cancer, AIDS, and the environment. The book is intended as a graduate textbook or a reference book for a one-semester course at the advanced master's or Ph.D. level. It would be most suitable for second- or third-year graduate students in statistics or biostatistics. It would also serve as a useful reference book for applied or theoretical researchers as well as practitioners. Joseph G. Ibrahim is Associate Professor of Biostatistics at the Harvard School of Public Health and Dana-Farber Cancer Institute; Ming-Hui Chen is Associate Professor of Mathematical Science at Worcester Polytechnic Institute; Debajyoti Sinha is Associate Professor of Biostatistics at the Medical University of South Carolina.
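To give a flavour of the MCMC computation such models rely on, here is a minimal Metropolis sampler for the posterior rate of an exponential survival model with right censoring. The data, the Gamma(1, 1) prior and the proposal scale are illustrative assumptions, not an example from the book.

```python
# Toy Bayesian survival sketch: Metropolis sampling of an exponential rate
# with right-censored data and a Gamma(1, 1) prior.
import numpy as np

rng = np.random.default_rng(1)
times = np.array([2.1, 3.4, 0.8, 5.6, 1.9, 4.2])
event = np.array([1, 1, 0, 1, 0, 1])      # 1 = event observed, 0 = censored

def log_post(lam):
    if lam <= 0:
        return -np.inf
    loglik = event.sum() * np.log(lam) - lam * times.sum()  # censored exponential likelihood
    logprior = -lam                                          # Gamma(1, 1) prior, up to a constant
    return loglik + logprior

lam, draws = 1.0, []
for _ in range(20000):
    prop = lam + 0.3 * rng.normal()                          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop
    draws.append(lam)
print("posterior mean rate:", np.mean(draws[5000:]))         # discard burn-in
```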
The first edition of Multivariate Statistical Modelling provided an extension of classical models for regression, time series, and longitudinal data to a much broader class including categorical data and smoothing concepts. Generalized linear models for univariate and multivariate analysis form the central concept, which for the modelling of complex data is widened to much more general modelling approaches. The primary aim of the new edition is to bring the book up to date and to reflect the major new developments of the past years. The authors give a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects, including the biological sciences, economics, and the social sciences. Technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. The appendix serves as a reference or brief tutorial for concepts such as the EM algorithm, numerical integration, and MCMC. The topics covered include: models for multi-categorical responses, model checking, semi- and nonparametric modelling, time series and longitudinal data, random effects models, state-space models, and survival analysis. In the new edition, Bayesian concepts, which are of growing importance in statistics, are treated more extensively. The chapter on nonparametric and semiparametric generalized regression has been completely rewritten, random effects models now cover nonparametric maximum likelihood and fully Bayesian approaches, and state-space and hidden Markov models have been supplemented with an extension to models that can accommodate spatial and spatiotemporal data. The authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, this book is ideally suited for applied statisticians, graduate students of statistics, and students and researchers with a strong interest in statistics and data analysis from econometrics, biometrics and the social sciences.
Group sequential methods answer the needs of clinical trial monitoring committees who must assess the data available at an interim analysis. These interim results may provide grounds for terminating the study, effectively reducing costs, or may benefit the general patient population by allowing early dissemination of its findings. Group sequential methods provide a means to balance the ethical and financial advantages of stopping a study early against the risk of an incorrect conclusion.
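A small simulation makes the underlying problem concrete: repeatedly testing accumulating data at an unadjusted fixed-sample cutoff inflates the overall type I error, which is exactly what group sequential boundaries are designed to control. The number of looks and sample sizes below are arbitrary choices for illustration.

```python
# Why interim looks need adjusted boundaries: testing a true null at every
# interim analysis with the naive cutoff |z| > 1.96 inflates the type I error.
import numpy as np

rng = np.random.default_rng(2)
n_trials, looks, n_per_look = 10000, 5, 50
rejections = 0
for _ in range(n_trials):
    data = rng.normal(0.0, 1.0, looks * n_per_look)   # the null is true: mean 0, sd 1
    for k in range(1, looks + 1):
        m = k * n_per_look
        z = data[:m].mean() * np.sqrt(m)              # z-statistic at the k-th look
        if abs(z) > 1.96:                             # naive fixed-sample cutoff
            rejections += 1
            break
print("overall type I error:", rejections / n_trials)  # roughly 0.14, not 0.05
```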
This book presents a unified theory of rare event simulation and the variance reduction technique known as importance sampling from the point of view of the probabilistic theory of large deviations. It allows us to view a vast assortment of simulation problems from a single, unified perspective.
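The core idea can be illustrated with a standard toy example (not taken from the book): estimating the rare-event probability P(X > 4) for a standard normal by shifting the sampling distribution into the rare region and reweighting, the mean-shift tilt suggested by large deviations.

```python
# Importance sampling for a rare event: P(X > 4), X ~ N(0, 1).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, a = 100000, 4.0

naive = (rng.normal(size=n) > a).mean()          # almost always returns 0

y = rng.normal(loc=a, size=n)                    # proposal shifted into the rare region
weights = norm.pdf(y) / norm.pdf(y, loc=a)       # likelihood ratio target/proposal
is_est = np.mean((y > a) * weights)

print("true:", norm.sf(a), "naive MC:", naive, "importance sampling:", is_est)
```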
Monte Carlo simulation has become one of the most important tools in all fields of science. Simulation methodology relies on a good source of numbers that appear to be random. These "pseudorandom" numbers must pass statistical tests just as random samples would. Methods for producing pseudorandom numbers and transforming those numbers to simulate samples from various distributions are among the most important topics in statistical computing. This book surveys techniques of random number generation and the use of random numbers in Monte Carlo simulation. The book covers basic principles, as well as newer methods such as parallel random number generation, nonlinear congruential generators, quasi Monte Carlo methods, and Markov chain Monte Carlo. The best methods for generating random variates from the standard distributions are presented, but also general techniques useful in more complicated models and in novel settings are described. The emphasis throughout the book is on practical methods that work well in current computing environments. The book includes exercises and can be used as a text or supplementary text for various courses in modern statistics. It could serve as the primary text for a specialized course in statistical computing, or as a supplementary text for a course in computational statistics and other areas of modern statistics that rely on simulation. The book, which covers recent developments in the field, could also serve as a useful reference for practitioners. Although some familiarity with probability and statistics is assumed, the book is accessible to a broad audience. The second edition is approximately 50% longer than the first edition. It includes advances in methods for parallel random number generation, universal methods for generation of nonuniform variates, perfect sampling, and software for random number generation. The material on testing of random number generators has been expanded to include a discussion of newer software for testing, as well as more discussion about the tests themselves. The second edition has more discussion of applications of Monte Carlo methods in various fields, including physics and computational finance. James Gentle is University Professor of Computational Statistics at George Mason University. During a thirteen-year hiatus from academic work before joining George Mason, he was director of research and design at the world's largest independent producer of Fortran and C general-purpose scientific software libraries. These libraries implement several random number generators, and are widely used in Monte Carlo studies. He is a Fellow of the American Statistical Association and a member of the International Statistical Institute. He has held several national offices in the American Statistical Association and has served as an associate editor for journals of the ASA as well as for other journals in statistics and computing. Recent activities include serving as program director of statistics at the National Science Foundation and as research fellow at the Bureau of Labor Statistics.
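As a taste of the book's two core topics, here is a toy linear congruential generator for uniform pseudorandom numbers, followed by the inversion method for transforming them into samples from another distribution. The constants are the classic glibc-style ones; this is a didactic sketch, not one of the book's recommended generators.

```python
# A toy linear congruential generator plus the inversion method.
import math

def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Minimal LCG: x_{n+1} = (a*x_n + c) mod m, scaled to [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=42)
uniforms = [next(gen) for _ in range(5)]
# Inverse CDF of Exp(1): F^{-1}(u) = -log(1 - u).
exponentials = [-math.log(1.0 - u) for u in uniforms]
print(uniforms)
print(exponentials)
```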
The emphasis of this textbook is on industrial applications of Statistical Measurement Theory. It deals with the principal issues of measurement theory, is concise and intelligibly written, and is to a large extent self-contained. Difficult theoretical issues are separated from the mainstream presentation. Each topic starts with an informal introduction followed by an example, the rigorous problem formulation, the solution method, and a detailed numerical solution. Each chapter concludes with a set of exercises of increasing difficulty, mostly with solutions. The book is meant as a text for graduate students and a reference for researchers and industrial experts specializing in measurement and measurement data analysis for quality control, quality engineering and industrial process improvement using statistical methods. Knowledge of calculus and fundamental probability and statistics is required for the understanding of its contents.
Clustering is an important unsupervised classification technique where data points are grouped such that points that are similar in some sense belong to the same cluster. Cluster analysis is a complex problem as a variety of similarity and dissimilarity measures exist in the literature. This is the first book focused on clustering with a particular emphasis on symmetry-based measures of similarity and metaheuristic approaches. The aim is to find a suitable grouping of the input data set so that some criteria are optimized, and using this the authors frame the clustering problem as an optimization one where the objectives to be optimized may represent different characteristics such as compactness, symmetrical compactness, separation between clusters, or connectivity within a cluster. They explain the techniques in detail and outline many detailed applications in data mining, remote sensing and brain imaging, gene expression data analysis, and face detection. The book will be useful to graduate students and researchers in computer science, electrical engineering, system science, and information technology, both as a text and as a reference book. It will also be useful to researchers and practitioners in industry working on pattern recognition, data mining, soft computing, metaheuristics, bioinformatics, remote sensing, and brain imaging.
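The point-symmetry idea at the heart of the book can be sketched simply: a point fits a cluster well if its mirror image through the cluster centre lies close to some other cluster member. The function below is a simplified illustration of that notion, not the authors' exact measure.

```python
# Simplified point-symmetry distance: reflect a point through the cluster
# centre and measure how close the reflection is to the rest of the cluster.
import numpy as np

def point_symmetry_distance(x, centre, cluster_points):
    """Distance from the reflection 2*centre - x to its nearest neighbour."""
    mirror = 2.0 * centre - x
    return np.linalg.norm(cluster_points - mirror, axis=1).min()

pts = np.array([[0.0, 1.0], [0.0, -1.0], [1.0, 0.0], [-1.0, 0.0]])  # symmetric ring
centre = pts.mean(axis=0)
print(point_symmetry_distance(np.array([0.0, 1.0]), centre, pts))   # 0.0: perfectly symmetric
```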
Stochastic Analysis aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sample of the current research in the different branches of the subject. It includes the collected works of the participants at the Stochastic Analysis section of the 7th ISAAC Congress organized at Imperial College London in July 2009.
...programs whose objective is given by the ratio of a convex function to a concave function that is positive over a convex domain. As observed by Sniedovich (Ref. [102, 103]), most of the properties of fractional programs could be found in other programs, given that the objective function could be written as a particular composition of functions. He called this new field C-programming, standing for composite concave programming. In his seminal book on dynamic programming (Ref. [104]), Sniedovich shows how the study of such compositions can help in tackling non-separable dynamic programs that would otherwise defeat solution. Barros and Frenk (Ref. [9]) developed a cutting plane algorithm capable of optimizing C-programs. More recently, this algorithm has been used by Carrizosa and Plastria to solve a global optimization problem in facility location (Ref. [16]). The distinction between global optimization problems (Ref. [54]) and generalized convex problems can sometimes be hard to establish. That is exactly the reason why so much effort has been placed into finding an exhaustive classification of the different weak forms of convexity, establishing a new definition just to satisfy some desirable property in the most general way possible. This book does not aim at all the subtleties of the different generalizations of convexity, but concentrates on the most general of them all, quasiconvex programming. Chapter 5 shows clearly where the real difficulties appear.
Probabilistic and Statistical Methods in Computer Science
This book gives a comprehensive introduction to the modeling of financial derivatives, covering all major asset classes (equities, commodities, interest rates and foreign exchange) and stretching from Black and Scholes' lognormal modeling to current-day research on skew and smile models. The intended reader has a solid mathematical background and is a graduate/final-year undergraduate student specializing in Mathematical Finance, or works at a financial institution such as an investment bank or a hedge fund.
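The Black-Scholes lognormal model that the book takes as its starting point prices a European call in closed form; a minimal implementation might look as follows. The input values are illustrative assumptions.

```python
# Black-Scholes closed-form price of a European call option.
import math
from scipy.stats import norm

def bs_call(spot, strike, t, r, sigma):
    """Black-Scholes call price under lognormal dynamics with flat rate r."""
    d1 = (math.log(spot / strike) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return spot * norm.cdf(d1) - strike * math.exp(-r * t) * norm.cdf(d2)

print(bs_call(spot=100, strike=105, t=1.0, r=0.02, sigma=0.2))
```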
Rapidly growing demand for telecommunication services and information exchange has made communication one of the most dynamic branches of the infrastructure of modern society. The book introduces the basics of classical Markov decision process (MDP) theory; problems of finding optimal call admission control (CAC) policies are investigated, and various problems of improving the characteristics of traditional and multimedia wireless communication networks are considered, together with both classical and new methods of MDP theory that allow optimal access strategies in teletraffic systems to be defined. The book will be useful to specialists in the field of telecommunication systems, and also to students and postgraduate students in the corresponding specialties.
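The MDP machinery behind such access-control problems can be sketched with textbook value iteration on a small random model; the states, actions and numbers below are made-up illustrations, not the book's models.

```python
# Value iteration for a small finite MDP with random transitions and rewards.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(4)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] is a distribution over next states
R = rng.uniform(0, 1, size=(n_states, n_actions))                 # expected immediate rewards

V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * P @ V            # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
    V_new = Q.max(axis=1)            # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = Q.argmax(axis=1)
print("optimal values:", V, "optimal policy:", policy)
```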