Introductory Business Statistics, International Edition (with Bind in Printed Access Card) (Paperback, International Edition)
Ronald Weiers
R1,274 R1,192 Discovery Miles 11 920 Save R82 (6%) Ships in 10 - 15 working days

Highly praised for its exceptional clarity, conversational style and useful examples, Introductory Business Statistics, 7e, International Edition was written specifically for you. This proven, popular text cuts through the jargon to help you understand fundamental statistical concepts and why they are important to you, your world, and your career. The text's outstanding illustrations, friendly language, non-technical terminology, and current, real-world examples will capture your interest and prepare you for success right from the start.

Metrical Theory of Continued Fractions (Hardcover, 2002 ed.)
M. Iosifescu, Cor Kraaikamp
R2,869 Discovery Miles 28 690 Ships in 18 - 22 working days

This monograph is intended to be a complete treatment of the metrical theory of the (regular) continued fraction expansion and related representations of real numbers. We have attempted to give the best possible results known so far, with proofs which are the simplest and most direct. The book has had a long gestation period because we first decided to write it in March 1994. This gave us the possibility of essentially improving the initial versions of many parts of it. Even if the two authors are different in style and approach, every effort has been made to hide the differences. Let Ω denote the set of irrationals in I = [0,1]. Define the (regular) continued fraction transformation T by T(ω) = fractional part of 1/ω, ω ∈ Ω. Write T^n for the nth iterate of T, n ∈ N = {0, 1, ...}, with T^0 = identity map. The positive integers a_n(ω) = a_1(T^(n-1)(ω)), n ∈ N+ = {1, 2, ...}, where a_1(ω) = integer part of 1/ω, ω ∈ Ω, are called the (regular continued fraction) digits of ω. Writing [x_1, ..., x_n] for arbitrary indeterminates x_i, 1 ≤ i ≤ n, we have ω = lim_{n→∞} [a_1(ω), ..., a_n(ω)], ω ∈ Ω, thus explaining the name of T. The above equation will also be written as ω = lim [a_1(ω), a_2(ω), ...], ω ∈ Ω.
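
For readers who want to experiment with the digit map described above, here is a minimal Python sketch (not from the book) that iterates T and collects the digits a_1(ω), ..., a_n(ω); floating-point arithmetic only approximates the expansion, so only the first several digits are reliable:

```python
import math

def cf_digits(w, n):
    """First n regular continued fraction digits of w in (0,1):
    a_1(w) = integer part of 1/w, and a_k(w) = a_1(T^(k-1)(w)),
    where T(w) = fractional part of 1/w."""
    digits = []
    for _ in range(n):
        a = math.floor(1 / w)   # a_1 of the current point
        digits.append(a)
        w = 1 / w - a           # apply T: fractional part of 1/w
    return digits

print(cf_digits((math.sqrt(5) - 1) / 2, 8))  # golden-ratio conjugate: all digits 1
print(cf_digits(math.sqrt(2) - 1, 8))        # sqrt(2) - 1: all digits 2
```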

Basic Principles and Applications of Probability Theory (Hardcover, 2005 ed.)
Y.V. Prokhorov; Translated by B. Seckler; Valeriy Skorokhod
R2,679 Discovery Miles 26 790 Ships in 18 - 22 working days

The book is an introduction to modern probability theory written by one of the foremost experts in this area. Readers will learn about the basic concepts of probability and its applications, preparing them for more advanced and specialized works.

Exercises and Projects for The Little SAS Book, Sixth Edition (Hardcover)
Rebecca A Ottesen
R804 Discovery Miles 8 040 Ships in 10 - 15 working days

Foundations of Info-Metrics - Modeling, Inference, and Imperfect Information (Hardcover)
Amos Golan
R3,308 Discovery Miles 33 080 Ships in 10 - 15 working days

Info-metrics is the science of modeling, reasoning, and drawing inferences under conditions of noisy and insufficient information. It is at the intersection of information theory, statistical inference, and decision-making under uncertainty. It plays an important role in helping make informed decisions even when there is inadequate or incomplete information because it provides a framework to process available information with minimal reliance on assumptions that cannot be validated. In this pioneering book, Amos Golan, a leader in info-metrics, focuses on unifying information processing, modeling and inference within a single constrained optimization framework. Foundations of Info-Metrics provides an overview of modeling and inference, rather than a problem-specific model, and progresses from the simple premise that information is often insufficient to provide a unique answer for decisions we wish to make. Each decision, or solution, is derived from the available input information along with a choice of inferential procedure. The book contains numerous multidisciplinary applications and case studies, which demonstrate the simplicity and generality of the framework in real world settings. Examples include initial diagnosis at an emergency room, optimal dose decisions, election forecasting, network and information aggregation, weather pattern analyses, portfolio allocation, strategy inference for interacting entities, incorporation of prior information, option pricing, and modeling an interacting social system. Graphical representations illustrate how results can be visualized while exercises and problem sets facilitate extensions. This book is designed to be accessible to researchers, graduate students, and practitioners across the disciplines.
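
As a toy illustration of the constrained-optimization viewpoint (an illustration only, not an example from the book), the classic maximum-entropy dice problem chooses the least-informative distribution consistent with a single observed moment:

```python
import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)          # die faces 1..6
mean_obs = 4.5                   # the only available piece of information

def neg_entropy(p):
    return np.sum(p * np.log(p + 1e-12))   # minimize negative Shannon entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},         # probabilities sum to one
    {"type": "eq", "fun": lambda p: p @ faces - mean_obs},  # reproduce the observed mean
]
res = minimize(neg_entropy, x0=np.full(6, 1 / 6), bounds=[(0, 1)] * 6,
               method="SLSQP", constraints=constraints)
print(res.x.round(4))  # least-informative die consistent with the constraint
```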

Analysis of Variance for Sensory Data (Hardcover, New)
P. Lea
R3,856 Discovery Miles 38 560 Ships in 18 - 22 working days

The field of sensory science, the perception science of the food industry, increasingly requires a working knowledge of statistics for the evaluation of data. However, most sensory scientists are not also expert statisticians. This highly readable book presents complex statistical tools such as ANOVA in a way that is easily understood by the practising sensory scientist. In Analysis of Variance for Sensory Data, written jointly by statisticians and food scientists, the reader is taken by the hand and guided through tests such as ANOVA. Using real examples from the food industry, practical implications are stressed rather than the theoretical background. The result of this is that the reader will be able to apply advanced ANOVA techniques to a variety of problems and learn how to interpret the results. The book is intended as a workbook for all students of sensory analysis who would gain from a knowledge of statistical techniques.
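
For orientation, a minimal one-way ANOVA on hypothetical sensory-panel scores (illustrative data, not from the book) can be run in a few lines of Python:

```python
from scipy import stats

# Hypothetical hedonic scores (1-9 scale) for three product formulations.
product_a = [6, 7, 6, 8, 7, 6]
product_b = [5, 5, 6, 4, 5, 6]
product_c = [7, 8, 8, 7, 9, 8]

f_stat, p_value = stats.f_oneway(product_a, product_b, product_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one formulation differs in mean score;
# post hoc comparisons would then identify which pairs differ.
```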

Reliability and Life-Cycle Analysis of Deteriorating Systems (Hardcover, 1st ed. 2016)
Mauricio Sanchez-Silva, Georgia-Ann Klutke
R5,101 Discovery Miles 51 010 Ships in 10 - 15 working days

This book compiles and critically discusses modern engineering system degradation models and their impact on engineering decisions. In particular, the authors focus on modeling the uncertain nature of degradation considering both conceptual discussions and formal mathematical formulations. It also describes the basic concepts and the various modeling aspects of life-cycle analysis (LCA). It highlights the role of degradation in LCA and defines optimum design and operation parameters. Given the relationship between operational decisions and the performance of the system's condition over time, maintenance models are also discussed. The concepts and models presented have applications in a large variety of engineering fields such as Civil, Environmental, Industrial, Electrical and Mechanical engineering. However, special emphasis is given to problems related to large infrastructure systems. The book is intended to be used both as a reference resource for researchers and practitioners and as an academic text for courses related to risk and reliability, infrastructure performance modeling and life-cycle assessment.

Probabilistic Logic Networks - A Comprehensive Framework for Uncertain Inference (Hardcover, 1st Edition. 2nd Printing. 2008)
Ben Goertzel, Matthew Ikle, Izabela Freire Goertzel, Ari Heljakka
R4,053 Discovery Miles 40 530 Ships in 18 - 22 working days

Abstract In this chapter we provide an overview of probabilistic logic networks (PLN), including our motivations for developing PLN and the guiding principles underlying PLN. We discuss foundational choices we made, introduce PLN knowledge representation, and briefly introduce inference rules and truth-values. We also place PLN in context with other approaches to uncertain inference. 1.1 Motivations This book presents Probabilistic Logic Networks (PLN), a systematic and pragmatic framework for computationally carrying out uncertain reasoning - reasoning about uncertain data, and/or reasoning involving uncertain conclusions. We begin with a few comments about why we believe this is such an interesting and important domain of investigation. First of all, we hold to a philosophical perspective in which "reasoning" - properly understood - plays a central role in cognitive activity. We realize that other perspectives exist; in particular, logical reasoning is sometimes construed as a special kind of cognition that humans carry out only occasionally, as a deviation from their usual (intuitive, emotional, pragmatic, sensorimotor, etc.) modes of thought. However, we consider this alternative view to be valid only according to a very limited definition of "logic." Construed properly, we suggest, logical reasoning may be understood as the basic framework underlying all forms of cognition, including those conventionally thought of as illogical and irrational.

Advanced Linear Modeling - Multivariate, Time Series, and Spatial Data; Nonparametric Regression and Response Surface Maximization (Hardcover, 2nd ed. 2001)
Ronald Christensen
R1,632 Discovery Miles 16 320 Ships in 18 - 22 working days

This book introduces several topics related to linear model theory: multivariate linear models, discriminant analysis, principal components, factor analysis, time series in both the frequency and time domains, and spatial data analysis. The second edition adds new material on nonparametric regression, response surface maximization, and longitudinal models. The book provides a unified approach to these disparate subjects and serves as a self-contained companion volume to the author's Plane Answers to Complex Questions: The Theory of Linear Models. Ronald Christensen is Professor of Statistics at the University of New Mexico. He is well known for his work on the theory and application of linear models having linear structure. He is the author of numerous technical articles and several books and he is a Fellow of the American Statistical Association and the Institute of Mathematical Statistics. Also Available: Christensen, Ronald. Plane Answers to Complex Questions: The Theory of Linear Models, Second Edition (1996). New York: Springer-Verlag New York, Inc. Christensen, Ronald. Log-Linear Models and Logistic Regression, Second Edition (1997). New York: Springer-Verlag New York, Inc.

Model-Free Prediction and Regression - A Transformation-Based Approach to Inference (Hardcover, 1st ed. 2015)
Dimitris N. Politis
R3,041 Discovery Miles 30 410 Ships in 10 - 15 working days

The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the `big data' era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful. Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems to inadvertently also achieve the main goal of modeling, i.e., trying to describe how Y depends on X. Hence, as prediction can be treated as a by-product of model-fitting, key estimation problems can be addressed as a by-product of being able to perform prediction. In other words, a practitioner can use Model-Free Prediction ideas in order to additionally obtain point estimates and confidence intervals for relevant parameters leading to an alternative, transformation-based approach to statistical inference.
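
The traditional paradigm described above, (a) fit a model, then (b) predict from it, can be sketched in a few lines; the residual-resampling interval at the end is only a crude nod to the computer-intensive methods mentioned in the text, not the Model-Free Bootstrap itself:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)   # synthetic data

# (a) fit a model to the data at hand: here a straight line by least squares
coef = np.polyfit(x, y, deg=1)                # [slope, intercept]

# (b) use the fitted model to extrapolate/predict future data
x_new = 12.0
y_hat = np.polyval(coef, x_new)

# Crude residual-resampling interval (ignores parameter uncertainty; illustration only).
resid = y - np.polyval(coef, x)
sims = np.polyval(coef, x_new) + rng.choice(resid, size=2000, replace=True)
lo, hi = np.percentile(sims, [2.5, 97.5])
print(f"prediction {y_hat:.2f}, rough 95% interval ({lo:.2f}, {hi:.2f})")
```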

The Practice of Econometric Theory - An Examination of the Characteristics of Econometric Computation (Hardcover, 2009 ed.)
Charles G. Renfro
R4,183 Discovery Miles 41 830 Ships in 18 - 22 working days

Econometric theory, as presented in textbooks and the econometric literature generally, is a somewhat disparate collection of findings. Its essential nature is to be a set of demonstrated results that increase over time, each logically based on a specific set of axioms or assumptions, yet at every moment, rather than a finished work, these inevitably form an incomplete body of knowledge. The practice of econometric theory consists of selecting from, applying, and evaluating this literature, so as to test its applicability and range. The creation, development, and use of computer software has led applied economic research into a new age. This book describes the history of econometric computation from 1950 to the present day, based upon an interactive survey involving the collaboration of the many econometricians who have designed and developed this software. It identifies each of the econometric software packages that are made available to and used by economists and econometricians worldwide.

From Particle Systems to Partial Differential Equations III - Particle Systems and PDEs III, Braga, Portugal, December 2014 (Hardcover, 1st ed. 2016)
Patricia Goncalves, Ana Jacinta Soares
R5,078 R4,757 Discovery Miles 47 570 Save R321 (6%) Ships in 10 - 15 working days

The main focus of this book is on different topics in probability theory, partial differential equations and kinetic theory, presenting some of the latest developments in these fields. It addresses mathematical problems concerning applications in physics, engineering, chemistry and biology that were presented at the Third International Conference on Particle Systems and Partial Differential Equations, held at the University of Minho, Braga, Portugal in December 2014. The purpose of the conference was to bring together prominent researchers working in the fields of particle systems and partial differential equations, providing a venue for them to present their latest findings and discuss their areas of expertise. Further, it was intended to introduce a vast and varied public, including young researchers, to the subject of interacting particle systems, its underlying motivation, and its relation to partial differential equations. This book will appeal to probabilists, analysts and those mathematicians whose work involves topics in mathematical physics, stochastic processes and differential equations in general, as well as those physicists whose work centers on statistical mechanics and kinetic theory.

High Dimensional Probability III (Hardcover, 2003 ed.)
Joergen Hoffmann-Joergensen, Michael B. Marcus, Jon A. Wellner
R2,842 Discovery Miles 28 420 Ships in 18 - 22 working days

The title High Dimensional Probability is an attempt to describe the many tributaries of research on Gaussian processes and probability in Banach spaces that started in the early 1970's. In each of these fields it is necessary to consider large classes of stochastic processes under minimal conditions. There are rewards in research of this sort. One can often gain deep insights, even about familiar processes, by stripping away details that in hindsight turn out to be extraneous. Many of the problems that motivated researchers in the 1970's were solved. But the powerful new tools created for their solution, such as randomization, isoperimetry, concentration of measure, moment and exponential inequalities, chaining, series representations and decoupling turned out to be applicable to other important areas of probability. They led to significant advances in the study of empirical processes and other topics in theoretical statistics and to a new approach to the study of aspects of Levy processes and Markov processes in general. Papers on these topics as well as on the continuing study of Gaussian processes and probability in Banach spaces are included in this volume.

Computational Intelligence, Optimization and Inverse Problems with Applications in Engineering (Hardcover, 1st ed. 2019)
Gustavo Mendes Platt, Xin-She Yang, Antonio Jose Silva Neto
R2,687 Discovery Miles 26 870 Ships in 18 - 22 working days

This book focuses on metaheuristic methods and their applications to real-world problems in Engineering. The first part describes some key metaheuristic methods, such as Bat Algorithms, Particle Swarm Optimization, Differential Evolution, and Particle Collision Algorithms. Improved versions of these methods and strategies for parameter tuning are also presented, both of which are essential for the practical use of these important computational tools. The second part then applies metaheuristics to problems, mainly in Civil, Mechanical, Chemical, Electrical, and Nuclear Engineering. Other methods, such as the Flower Pollination Algorithm, Symbiotic Organisms Search, Cross-Entropy Algorithm, Artificial Bee Colonies, Population-Based Incremental Learning, Cuckoo Search, and Genetic Algorithms, are also presented. The book is rounded out by recently developed strategies, or hybrid improved versions of existing methods, such as the Lightning Optimization Algorithm, Differential Evolution with Particle Collisions, and Ant Colony Optimization with Dispersion - state-of-the-art approaches for the application of computational intelligence to engineering problems. The wide variety of methods and applications, as well as the original results on problems of practical engineering interest, represent the primary differentiation and distinctive quality of this book. Furthermore, it gathers contributions by authors from four countries - some of whom are the original proponents of the methods presented - and 18 research centers around the globe.

Statistical Tools for Measuring Agreement (Hardcover, 2012)
Lawrence Lin, A.S. Hedayat, Wenting Wu
R1,405 Discovery Miles 14 050 Ships in 18 - 22 working days

Agreement assessment techniques are widely used in examining the acceptability of a new or generic process, methodology and/or formulation in areas of lab performance, instrument/assay validation or method comparisons, statistical process control, goodness-of-fit, and individual bioequivalence. Successful applications in these situations require a sound understanding of both the underlying theory and methodological advances in handling real-life problems. This book seeks to effectively blend theory and applications while presenting readers with many practical examples. For instance, in the medical device environment, it is important to know whether a newly established lab can reproduce the instrument/assay results from the established but outdated lab. When there is a disagreement, it is important to differentiate the sources of disagreement. In addition to agreement coefficients, accuracy and precision coefficients are introduced and utilized to characterize these sources. This book will appeal to a broad range of statisticians, researchers, practitioners and students, in areas of biomedical devices, psychology, medical research, and others, in which agreement assessment is needed. Many practical illustrative examples are presented throughout the book in a wide variety of situations for continuous and categorical data.
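
One widely used agreement coefficient, the concordance correlation coefficient, can be computed directly; the sketch below uses hypothetical paired lab readings and is an illustration rather than an excerpt from the book:

```python
import numpy as np

def concordance_cc(x, y):
    """Concordance correlation coefficient: agreement of paired readings
    with the 45-degree line, combining precision and accuracy."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical paired assay readings from an established lab and a new lab.
established = [10.1, 12.3, 9.8, 11.5, 13.0, 10.7]
new_lab     = [10.4, 12.0, 10.1, 11.9, 12.7, 11.0]
print(round(concordance_cc(established, new_lab), 3))  # close to 1 indicates good agreement
```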

Bayesian Inference - Data Evaluation and Decisions (Hardcover, 2nd ed. 2016)
Hanns Ludwig Harney
R4,326 Discovery Miles 43 260 Ships in 10 - 15 working days

This new edition offers a comprehensive introduction to the analysis of data using Bayes rule. It generalizes Gaussian error intervals to situations in which the data follow distributions other than Gaussian. This is particularly useful when the observed parameter is barely above the background or the histogram of multiparametric data contains many empty bins, so that the determination of the validity of a theory cannot be based on the chi-squared criterion. In addition to the solutions of practical problems, this approach provides an epistemic insight: the logic of quantum mechanics is obtained as the logic of unbiased inference from counting data. New sections feature factorizing parameters, commuting parameters, observables in quantum mechanics, the art of fitting with coherent and with incoherent alternatives, and fitting with the multinomial distribution. Additional problems and examples help deepen the knowledge. Requiring no knowledge of quantum mechanics, the book is written at an introductory level, with many examples and exercises, for advanced undergraduate and graduate students in the physical sciences, planning to, or working in, fields such as medical physics, nuclear physics, quantum mechanics, and chaos.
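
As a toy illustration of the theme (not an example from the book), Bayes' rule applied to a small Poisson count with a flat prior yields an interval that differs noticeably from the naive Gaussian one:

```python
import numpy as np
from scipy import stats

n = 3                                          # a small observed count
posterior = stats.gamma(a=n + 1, scale=1.0)    # flat prior on the rate => Gamma(n+1, 1) posterior
lo, hi = posterior.ppf([0.16, 0.84])           # central ~68% credible interval
gauss = (n - np.sqrt(n), n + np.sqrt(n))       # naive "n +/- sqrt(n)" Gaussian interval
print(f"Bayesian: ({lo:.2f}, {hi:.2f})   Gaussian approx: ({gauss[0]:.2f}, {gauss[1]:.2f})")
```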

Analysis of Queueing Networks with Blocking (Hardcover)
Simonetta Balsamo, Vittoria de Nitto Persone, Raif Onvural
R2,797 Discovery Miles 27 970 Ships in 18 - 22 working days

Queueing network models have been widely applied as a powerful tool for modelling, performance evaluation, and prediction of discrete flow systems, such as computer systems, communication networks, production lines, and manufacturing systems. Queueing network models with finite capacity queues and blocking have been introduced and applied as even more realistic models of systems with finite capacity resources and with population constraints. In recent years, research in this field has grown rapidly. Analysis of Queueing Networks with Blocking introduces queueing network models with finite capacity and various types of blocking mechanisms. It gives a comprehensive definition of the analytical model underlying these blocking queueing networks. It surveys exact and approximate analytical solution methods and algorithms and their relevant properties. It also presents various application examples of queueing networks to model computer systems and communication networks. This book is organized in three parts. Part I introduces queueing networks with blocking and various application examples. Part II deals with exact and approximate analysis of queueing networks with blocking and the condition under which the various techniques can be applied. Part III presents a review of various properties of networks with blocking, describing several equivalence properties both between networks with and without blocking and between different blocking types. Approximate solution methods for the buffer allocation problem are presented.
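
A single finite-capacity queue, rather than a full network, already exhibits blocking; the following sketch (an illustration, not one of the book's algorithms) evaluates the stationary blocking probability of an M/M/1/K queue:

```python
def mm1k_blocking(arrival_rate, service_rate, K):
    """Stationary blocking probability of an M/M/1/K queue:
    arrivals that find all K places occupied are lost."""
    rho = arrival_rate / service_rate
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

# Example: arrival rate 0.9, service rate 1.0, capacity of 5 customers
# (including the one in service).
print(round(mm1k_blocking(0.9, 1.0, 5), 4))
```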

Statistical Science in the Courtroom (Hardcover)
Joseph L Gastwirth
R3,415 Discovery Miles 34 150 Ships in 18 - 22 working days

Expert testimony relying on scientific and other specialized evidence has come under increased scrutiny by the legal system. A trilogy of recent U.S. Supreme Court cases has assigned judges the task of assessing the relevance and reliability of proposed expert testimony. In conjunction with the Federal judiciary, the American Association for the Advancement of Science has initiated a project to provide judges indicating a need with their own expert. This concern with the proper interpretation of scientific evidence, especially that of a probabilistic nature, has also occurred in England, Australia and in several European countries. Statistical Science in the Courtroom is a collection of articles written by statisticians and legal scholars who have been concerned with problems arising in the use of statistical evidence. A number of articles describe DNA evidence and the difficulties of properly calculating the probability that a random individual's profile would "match" that of the evidence, as well as the proper way to interpret the result. In addition to the technical issues, several authors tell about their experiences in court. A few have become disenchanted with their involvement and describe the events that led them to devote less time to this application. Other articles describe the role of statistical evidence in cases concerning discrimination against minorities, product liability, environmental regulation, the appropriateness and fairness of sentences, and how being involved in legal statistics has raised interesting statistical problems requiring further research.

Mathematical and Statistical Methods for Genetic Analysis (Hardcover, 2nd ed. 2002. 2nd corr. printing 2003)
Kenneth Lange
R3,315 Discovery Miles 33 150 Ships in 18 - 22 working days

During the past decade, geneticists have cloned scores of Mendelian disease genes and constructed a rough draft of the entire human genome. The unprecedented insights into human disease and evolution offered by mapping, cloning, and sequencing will transform medicine and agriculture. This revolution depends vitally on the contributions of applied mathematicians, statisticians, and computer scientists. Mathematical and Statistical Methods for Genetic Analysis is written to equip students in the mathematical sciences to understand and model the epidemiological and experimental data encountered in genetics research. Mathematical, statistical, and computational principles relevant to this task are developed hand in hand with applications to population genetics, gene mapping, risk prediction, testing of epidemiological hypotheses, molecular evolution, and DNA sequence analysis. Many specialized topics are covered that are currently accessible only in journal articles. This second edition expands the original edition by over 100 pages and includes new material on DNA sequence analysis, diffusion processes, binding domain identification, Bayesian estimation of haplotype frequencies, case-control association studies, the gamete competition model, QTL mapping and factor analysis, the Lander-Green-Kruglyak algorithm of pedigree analysis, and codon and rate variation models in molecular phylogeny. Sprinkled throughout the chapters are many new problems. Kenneth Lange is Professor of Biomathematics and Human Genetics at the UCLA School of Medicine. At various times during his career, he has held appointments at the University of New Hampshire, MIT, Harvard, and the University of Michigan. While at the University of Michigan, he was the Pharmacia & Upjohn Foundation Professor of Biostatistics. His research interests include human genetics, population modeling, biomedical imaging, computational statistics, and applied stochastic processes. Springer-Verlag published his book Numerical Analysis for Statisticians in 1999.

Semi-Markov Processes and Reliability (Hardcover, 2001 ed.)
N. Limnios, G. Oprisan
R2,686 Discovery Miles 26 860 Ships in 18 - 22 working days

The theory of stochastic processes, for science and engineering, can be considered as an extension of probability theory allowing modeling of the evolution of systems over time. The modern theory of Markov processes has its origins in the studies of A.A. Markov (1856-1922) on sequences of experiments "connected in a chain" and in the attempts to describe mathematically the physical phenomenon of Brownian motion. The theory of stochastic processes entered a period of intensive development when the idea of the Markov property was introduced. This book is a modern overall view of semi-Markov processes and their applications in reliability. It is accessible to readers with a first course in probability theory (including the basic notions of Markov chains). The text contains many examples which aid in the understanding of the theoretical notions and show how to apply them to concrete physical situations, including algorithmic simulations. Many examples of concrete applications in reliability are given. Features: * Processes associated with the semi-Markov kernel for general and discrete state spaces * Asymptotic theory of processes and of additive functionals * Statistical estimation of the semi-Markov kernel and of the reliability function * Monte Carlo simulation * Applications in reliability and maintenance. The book is a valuable resource for understanding the latest developments in semi-Markov processes and reliability. Practitioners, researchers and professionals in applied mathematics, control and engineering who work in areas of reliability, lifetime data analysis, statistics, probability, and engineering will find this book an up-to-date overview of the field.
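
A minimal Monte Carlo sketch of a two-state (up/down) semi-Markov model, with assumed Weibull up-times and exponential repair times, illustrates the kind of simulation the book covers far more generally:

```python
import numpy as np

rng = np.random.default_rng(1)

def available_at(t, shape=1.5, scale=100.0, mean_repair=5.0):
    """Simulate one trajectory of an alternating up/down process (a simple
    semi-Markov model) and report whether the system is up at time t."""
    clock, up = 0.0, True
    while True:
        sojourn = scale * rng.weibull(shape) if up else rng.exponential(mean_repair)
        if clock + sojourn > t:
            return up
        clock += sojourn
        up = not up

t, runs = 250.0, 20_000
availability = sum(available_at(t) for _ in range(runs)) / runs
print(f"Estimated point availability A({t:g}) ~ {availability:.3f}")
```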

Bayesian Hierarchical Space-Time Models with Application to Significant Wave Height (Hardcover, 1st ed. 2013)
Erik Vanem; Foreword by Elzbieta Maria Bitner-Gregersen, Christopher K. Wikle
R2,699 R1,933 Discovery Miles 19 330 Save R766 (28%) Ships in 10 - 15 working days

This book provides an example of a thorough statistical treatment of ocean wave data in space and time. It demonstrates how the flexible framework of Bayesian hierarchical space-time models can be applied to oceanographic processes such as significant wave height in order to describe dependence structures and uncertainties in the data.

This monograph is a research book and it is partly cross-disciplinary. The methodology itself is firmly rooted in the statistical research tradition, based on probability theory and stochastic processes. However, that methodology has been applied to a problem in the field of physical oceanography, analyzing data for significant wave height, which is of crucial importance to ocean engineering disciplines. Indeed, the statistical properties of significant wave height are important for the design, construction and operation of ships and other marine and coastal structures. Furthermore, the book addresses the question of whether climate change has an effect on the ocean wave climate, and if so what that effect might be. Thus, this book is an important contribution to the ongoing debate on climate change, its implications and how to adapt to a changing climate, with a particular focus on the maritime industries and the marine environment.

This book should be of value to anyone with an interest in the statistical modelling of environmental processes, and in particular to those with an interest in the ocean wave climate. It is written at a level that should be understandable to everyone with a basic background in statistics or elementary mathematics, and an introduction to some basic concepts is provided in the appendices for the uninitiated reader. The intended readership includes students and professionals involved in statistics, oceanography, ocean engineering, environmental research, climate sciences and risk assessment. Moreover, the book's findings are relevant for various stakeholders in the maritime industries such as design offices, classification societies, ship owners, yards and operators, flag states and intergovernmental agencies such as the IMO.

Fuzzy Statistics (Hardcover, 2004 ed.)
James J Buckley
R2,745 Discovery Miles 27 450 Ships in 18 - 22 working days

1.1 Introduction This book is written in four major divisions. The first part is the introductory chapters consisting of Chapters 1 and 2. In part two, Chapters 3-11, we develop fuzzy estimation. For example, in Chapter 3 we construct a fuzzy estimator for the mean of a normal distribution assuming the variance is known. More details on fuzzy estimation are in Chapter 3, and after Chapter 3, Chapters 4-11 can be read independently. Part three, Chapters 12-20, is on fuzzy hypothesis testing. For example, in Chapter 12 we consider the test H0: μ = μ0 versus H1: μ ≠ μ0, where μ is the mean of a normal distribution with known variance, but we use a fuzzy number (from Chapter 3) estimator of μ in the test statistic. More details on fuzzy hypothesis testing are in Chapter 12, and after Chapter 12, Chapters 13-20 may be read independently. Part four, Chapters 21-27, is on fuzzy regression and fuzzy prediction. We start with fuzzy correlation in Chapter 21. Simple linear regression is the topic in Chapters 22-24 and Chapters 25-27 concentrate on multiple linear regression. Part two (fuzzy estimation) is used in Chapters 22 and 25; and part three (fuzzy hypothesis testing) is employed in Chapters 24 and 27. Fuzzy prediction is contained in Chapters 23 and 26. A most important part of our models in fuzzy statistics is that we always start with a random sample producing crisp (non-fuzzy) data.
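
For reference, the crisp (non-fuzzy) version of the test described above, H0: μ = μ0 against H1: μ ≠ μ0 with known variance, is the ordinary z-test; the book's contribution is to replace the point estimator by a fuzzy-number estimator, which this sketch does not attempt:

```python
import numpy as np
from scipy import stats

data = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4])  # hypothetical crisp sample
mu0, sigma = 10.0, 0.3                                # hypothesized mean, known std dev
z = (data.mean() - mu0) / (sigma / np.sqrt(data.size))
p_value = 2 * stats.norm.sf(abs(z))                   # two-sided p-value
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```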

Multiple Decrement Models in Insurance - An Introduction Using R (Hardcover, 2012 ed.)
Shailaja Rajendra Deshmukh
R1,420 Discovery Miles 14 200 Ships in 18 - 22 working days

The book will serve as a guide to many actuarial concepts and statistical techniques in multiple decrement models and their application to the calculation of premiums and reserves in life insurance products with riders, and in pension and employee benefit plans, where the benefit paid on termination of employment depends on the cause of termination. Multiple state models are discussed to accommodate insurance products in which the payment of benefits or premiums depends on being in a given state, or moving between a given pair of states, at a given time; disability income insurance is one example. The book also discusses stochastic models for interest rates and the calculation of premiums for some products in this setup. The highlight of the book is its use of the R software, freely available in the public domain, for computing the various monetary functions involved in the insurance business. R commands are given for all the computations.
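
The book's computations are carried out in R; as a rough Python analogue (hypothetical decrement rates, illustration only), the basic mechanics of a two-cause decrement table look like this:

```python
import numpy as np

# Hypothetical two-decrement table: cause 1 = death, cause 2 = withdrawal.
ages = np.array([60, 61, 62, 63])
q1   = np.array([0.010, 0.012, 0.014, 0.017])  # probability of decrement by cause 1
q2   = np.array([0.050, 0.045, 0.040, 0.035])  # probability of decrement by cause 2

lives = 10_000.0                               # lives at the first age
for age, qa, qb in zip(ages, q1, q2):
    d1, d2 = lives * qa, lives * qb            # expected decrements by each cause
    print(f"age {age}: lives {lives:8.1f}  deaths {d1:6.1f}  withdrawals {d2:6.1f}")
    lives -= d1 + d2                           # survivors to the next age
```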

Elliptically Contoured Models in Statistics and Portfolio Theory (Hardcover, 2nd ed. 2013)
Arjun K Gupta, Tamas Varga, Taras Bodnar
R1,998 Discovery Miles 19 980 Ships in 10 - 15 working days

Elliptically Contoured Models in Statistics and Portfolio Theory fully revises the first detailed introduction to the theory of matrix variate elliptically contoured distributions. There are two additional chapters, and all the original chapters of this classic text have been updated. Resources in this book will be valuable for researchers, practitioners, and graduate students in statistics and related fields of finance and engineering. Those interested in multivariate statistical analysis and its application to portfolio theory will find this text immediately useful. In multivariate statistical analysis, elliptical distributions have recently provided an alternative to the normal model. Elliptical distributions have also increased their popularity in finance because of the ability to model heavy tails usually observed in real data. Most of the work, however, is spread out in journals throughout the world and is not easily accessible to investigators. A noteworthy function of this book is the collection of the most important results on the theory of matrix variate elliptically contoured distributions that were previously only available in the journal-based literature. The content is organized in a unified manner that can serve as a valuable introduction to the subject.

Dynamics and Randomness (Hardcover, 2002 ed.)
Alejandro Maass, Servet Martinez, Jaime San Martin
R2,806 Discovery Miles 28 060 Ships in 18 - 22 working days

This book contains the lectures given at the Conference on Dynamics and Randomness held at the Centro de Modelamiento Matematico of the Universidad de Chile from December 11th to 15th, 2000. This meeting brought together mathematicians, theoretical physicists, theoretical computer scientists, and graduate students interested in fields related to probability theory, ergodic theory, symbolic and topological dynamics. We would like to express our gratitude to all the participants of the conference and to the people who contributed to its organization. In particular, to Pierre Collet, Bernard Host and Mike Keane for their scientific advice. We want to thank especially the authors of each chapter for their well prepared manuscripts and the stimulating conferences they gave at Santiago. We are also indebted to our sponsors and supporting institutions, whose interest and help was essential to organize this meeting: ECOS-CONICYT, FONDAP Program in Applied Mathematics, French Cooperation, Fundacion Andes, Presidential Fellowship and Universidad de Chile. We are grateful to Ms. Gladys Cavallone for her excellent work during the preparation of the meeting as well as for the considerable task of unifying the typography of the different chapters of this book.
