This book provides an up-to-date introduction to information theory. In addition to the classical topics discussed, it provides the first comprehensive treatment of the theory of I-Measure, network coding theory, Shannon and non-Shannon type information inequalities, and a relation between entropy and group theory. ITIP, a software package for proving information inequalities, is also included. With a large number of examples, illustrations, and original problems, this book is excellent as a textbook or reference book for a senior or graduate level course on the subject, as well as a reference for researchers in related fields.
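As a toy illustration of the Shannon-type inequalities the book (and the ITIP package) is concerned with, the following minimal sketch numerically checks subadditivity of entropy, H(X,Y) <= H(X) + H(Y), for a made-up joint distribution. This is my own example, not code from the book or from ITIP.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (in bits) of a probability vector; ignores zeros."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical 2x2 joint distribution of (X, Y).
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])

h_xy = entropy_bits(joint.ravel())      # H(X, Y)
h_x = entropy_bits(joint.sum(axis=1))   # H(X)
h_y = entropy_bits(joint.sum(axis=0))   # H(Y)

# Subadditivity, equivalent to I(X; Y) >= 0, holds for every joint law;
# here H(X, Y) is about 1.72 bits while H(X) + H(Y) = 2 bits.
assert h_xy <= h_x + h_y
```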
This book focuses on dealing with large-scale data, a field
commonly referred to as data mining. The book is divided into three
sections. The first deals with an introduction to statistical
aspects of data mining and machine learning and includes
applications to text analysis, computer intrusion detection, and
hiding of information in digital files. The second section focuses
on a variety of statistical methodologies that have proven to be
effective in data mining applications. These include clustering,
classification, multivariate density estimation, tree-based
methods, pattern recognition, outlier detection, genetic
algorithms, and dimensionality reduction. The third section focuses
on data visualization and covers issues of visualization of
high-dimensional data, novel graphical techniques with a focus on
human factors, interactive graphics, and data visualization using
virtual reality. This book represents a thorough cross section of
internationally renowned thinkers who are inventing methods for
dealing with a new data paradigm.
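As a hedged illustration of one methodology from the list above, here is a minimal multivariate density estimation sketch using scipy's Gaussian kernel density estimator; the two-cluster data are simulated and the code is mine, not the book's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated two-dimensional data: a mixture of two clusters.
data = np.vstack([rng.normal(0, 1, (200, 2)),
                  rng.normal(4, 1, (200, 2))]).T  # shape (dims, n), as gaussian_kde expects

kde = stats.gaussian_kde(data)      # bandwidth chosen by Scott's rule
grid = np.array([[0.0, 2.0, 4.0],   # x-coordinates of query points
                 [0.0, 2.0, 4.0]])  # y-coordinates of query points
print(kde(grid))                    # estimated density at the three points
```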
The development of Operations Research (OR) requires constant improvement, such as the integration of research results with business applications and innovative educational practice. The full deployment and commercial exploitation of goods and services generally require the construction of strong synergies between educational institutions and businesses. IO2015, the XVII Congress of APDIO, aims at strengthening the knowledge triangle of education, research and innovation, in order to maximize the contribution of OR to sustainable growth, the promotion of a knowledge-based economy, and the smart use of finite resources. The congress is a privileged meeting point for the promotion and dissemination of OR and related disciplines, through the exchange of ideas among teachers, researchers, students, and professionals with different backgrounds, all sharing a common desire: the development of OR.
Markov random field (MRF) theory provides a basis for modeling contextual constraints in visual processing and interpretation. It enables us to develop optimal vision algorithms systematically when used with optimization principles. This book presents a comprehensive study on the use of MRFs for solving computer vision problems. Various vision models are presented in a unified framework, including image restoration and reconstruction, edge and region segmentation, texture, stereo and motion, object matching and recognition, and pose estimation. This third edition includes the most recent advances and has new and expanded sections on topics such as: Bayesian Network; Discriminative Random Fields; Strong Random Fields; Spatial-Temporal Models; Learning MRF for Classification. This book is an excellent reference for researchers working in computer vision, image processing, statistical pattern recognition and applications of MRFs. It is also suitable as a text for advanced courses in these areas.
This book promotes and describes the application of objective and effective decision making in asset management based on mathematical models and practical techniques that can be easily implemented in organizations. This comprehensive and timely publication will be an essential reference source, building on available literature in the field of asset management while laying the groundwork for further research breakthroughs in this field. The text provides the resources necessary for managers, technology developers, scientists and engineers to adopt and implement better decision making based on models and techniques that contribute to recognizing risks and uncertainties and, in general terms, to the important role of asset management to increase competitiveness in organizations.
This book provides a fresh approach to reliability theory, an area that has gained increasing relevance in fields from statistics and engineering to demography and insurance. Its innovative use of quantile functions gives an analysis of lifetime data that is generally simpler, more robust, and more accurate than the traditional methods, and opens the door for further research in a wide variety of fields involving statistical analysis. In addition, the book can be used to good effect in the classroom as a text for advanced undergraduate and graduate courses in Reliability and Statistics.
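As a minimal sketch of the quantile-function approach (my own illustration, assuming a Weibull life distribution rather than anything taken from the book), lifetime summaries can be read directly off the quantile function Q(u):

```python
import numpy as np

def weibull_quantile(u, scale=1000.0, shape=1.5):
    """Q(u) = scale * (-ln(1 - u))**(1/shape): the inverse of the Weibull CDF."""
    return scale * (-np.log(1.0 - u)) ** (1.0 / shape)

# Median life and interquartile range come straight from Q(u), with no
# numerical inversion of the distribution function required.
print(weibull_quantile(0.5))                            # median lifetime
print(weibull_quantile(0.75) - weibull_quantile(0.25))  # IQR of lifetimes
```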
Essential Statistical Methods for Medical Statistics presents key contributions selected from the Handbook of Statistics: Medical Statistics, Volume 27 (2009). While the use of statistics in these fields has a long and rich
history, the explosive growth of science in general, and of
clinical and epidemiological sciences in particular, has led to the
development of new methods and innovative adaptations of standard
methods. This volume is appropriately focused for individuals
working in these fields. Contributors are internationally renowned experts in their respective areas. The volume addresses emerging statistical challenges in epidemiological, biomedical, and pharmaceutical research; covers methods for assessing biomarkers and the analysis of competing risks; treats clinical trials, including sequential and group sequential, crossover, cluster-randomized, and adaptive designs; and discusses structural equation modelling and longitudinal data analysis.
Miller and Childers have focused on creating a clear presentation
of foundational concepts with specific applications to signal
processing and communications, clearly the two areas of most
interest to students and instructors in this course. It is aimed at
graduate students as well as practicing engineers, and includes
unique chapters on narrowband random processes and simulation
techniques.
This book presents the state of the art in multilevel analysis, with an emphasis on more advanced topics. These topics are discussed conceptually, analyzed mathematically, and illustrated by empirical examples. Multilevel analysis is the statistical analysis of hierarchically and non-hierarchically nested data. The simplest example is clustered data, such as a sample of students clustered within schools. Multilevel data are especially prevalent in the social and behavioral sciences and in the biomedical sciences. The chapter authors are all leading experts in the field. Given the omnipresence of multilevel data in the social, behavioral, and biomedical sciences, this book is essential for empirical researchers in these fields.
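For readers new to the area, a minimal random-intercept sketch of the students-within-schools example might look as follows; the data are simulated and the statsmodels-based code is my own illustration, not drawn from the book.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_students = 20, 30
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0, 2.0, n_schools)[school]  # level-2 (school) variation
hours = rng.uniform(0, 10, n_schools * n_students)
score = 50 + 1.5 * hours + school_effect + rng.normal(0, 3.0, school.size)

df = pd.DataFrame({"score": score, "hours": hours, "school": school})

# Random-intercept model: score ~ hours, with intercepts varying by school.
model = smf.mixedlm("score ~ hours", df, groups=df["school"]).fit()
print(model.summary())
```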
Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm, their comparison to objectivistic or frequentist counterparts, and the appropriate application of Bayesian foundations. This research in Bayesian statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by ISBrA, the Brazilian chapter of ISBA (the International Society for Bayesian Analysis) and one of its most active chapters. The 12th meeting took place March 10-14, 2014 in Atibaia, Brazil. Interest in the foundations of inductive statistics has grown recently, in step with the increasing availability of Bayesian methodological alternatives, and scientists face an ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayesian methods can be the answer. The examination and discussion of these foundations work towards the goal of proper application of Bayesian methods by the scientific community. Individual papers range in focus from posterior distributions for non-dominated models, to combining optimization and randomization approaches for the design of clinical trials, to the classification of archaeological fragments with Bayesian networks.
This book reports a literature review on kaizen: its industrial applications, critical success factors, benefits gained, the journals that publish on it, and its main authors (research groups) and universities. Kaizen is treated in this book in three stages: planning, implementation and control. The authors provide a questionnaire designed with activities in every stage, highlighting the benefits gained in each stage. The study has been applied to more than 400 managers and leaders in continuous improvement in Mexican maquiladoras. A univariate analysis is provided for the activities in every stage. Moreover, structural equation models associating those activities with the benefits gained are presented for statistical validation. This relationship between activities and benefits helps managers identify the most important factors affecting their benefits and financial income.
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, control, and finance.
This monograph considers the evaluation and expression of measurement uncertainty within the mathematical framework of the Theory of Evidence. With a new perspective on the science of metrology, the text paves the way for innovative applications in a wide range of areas. Building on Simona Salicone's Measurement Uncertainty: An Approach via the Mathematical Theory of Evidence, the material covers further developments of the Random Fuzzy Variable (RFV) approach to uncertainty and provides a more robust mathematical and metrological background for the combination of measurement results, leading to a more effective RFV combination method. While the first part of the book introduces measurement uncertainty, the Theory of Evidence, and fuzzy sets, the following parts bring together these concepts and derive an effective methodology for the evaluation and expression of measurement uncertainty. A supplementary downloadable program allows readers to interact with the proposed approach by generating and combining RFVs through custom measurement functions. With numerous examples of applications, this book provides a comprehensive treatment of the RFV approach to uncertainty that is suitable for any graduate student or researcher with interests in the measurement field.
This book brings together historical notes, reviews of research developments, fresh ideas on how to make VC (Vapnik-Chervonenkis) guarantees tighter, and new technical contributions in the areas of machine learning, statistical inference, classification, algorithmic statistics, and pattern recognition. The contributors are leading scientists in domains such as statistics, mathematics, and theoretical computer science, and the book will be of interest to researchers and graduate students in these domains.
Most of the time series analysis methods applied today rely heavily on the key assumptions of linearity, Gaussianity and stationarity. Natural time series, including hydrologic, climatic and environmental time series, that satisfy these assumptions seem to be the exception rather than the rule. Nevertheless, most time series analysis is performed using standard methods after relaxing the required conditions one way or another, in the hope that the departure from these assumptions is not large enough to affect the result of the analysis. A large amount of data is available today after almost a century of intensive data collection of various natural time series. In addition to a few older data series such as sunspot numbers, sea surface temperatures, etc., data obtained through dating techniques (tree-ring data, ice core data, geological and marine deposits, etc.) are available. With the advent of powerful computers, the use of simplified methods can no longer be justified, especially given the success of more advanced methods in explaining the inherent variability in natural time series. This book presents a number of new techniques that have been discussed in the literature during the last two decades for investigating the stationarity, linearity and Gaussianity of hydrologic and environmental time series. These techniques cover different approaches for assessing nonstationarity, ranging from time domain analysis, to frequency domain analysis, to combined time-frequency and time-scale analyses, to segmentation analysis, in addition to formal statistical tests of linearity and Gaussianity. It is hoped that this endeavor will facilitate further research into this important area.
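As a small illustration of the formal Gaussianity tests mentioned above (my own sketch, not one of the book's techniques verbatim), scipy's D'Agostino-Pearson test readily rejects normality for a simulated heavy-tailed series:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
series = {"gaussian": rng.normal(size=500),
          "heavy-tailed": rng.standard_t(df=3, size=500)}

for name, x in series.items():
    stat, p = stats.normaltest(x)  # H0: the sample comes from a normal law
    print(f"{name}: p = {p:.4f}")  # a small p-value rejects Gaussianity
```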
Elements of Large Sample Theory provides a unified treatment of first-order large-sample theory. It discusses a broad range of applications, including introductions to density estimation, the bootstrap, and the asymptotics of survey methodology, written at an elementary level. The book is suitable for students at the Master's level in statistics and in applied fields who have a background of two years of calculus. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands, and the University of Chicago. Also available: E.L. Lehmann and George Casella, Theory of Point Estimation, Second Edition. Springer-Verlag New York, Inc., 1998, 640 pp., Cloth, ISBN 0-387-98502-6. E.L. Lehmann, Testing Statistical Hypotheses, Second Edition. Springer-Verlag New York, Inc., 1997, 624 pp., Cloth, ISBN 0-387-94919-4.
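As a hedged sketch of the bootstrap idea the book introduces (simulated data, my own code, not the book's), the standard error of a sample median can be estimated by resampling:

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.exponential(scale=2.0, size=100)  # hypothetical data

# Resample with replacement and recompute the median many times.
boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(2000)
])
print(np.median(sample), boot_medians.std(ddof=1))  # estimate and bootstrap SE
```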
The theory of random processes is an integral part of the analysis and synthesis of complex engineering systems. This textbook systematically presents the fundamentals of statistical dynamics and reliability theory. The theory of Markovian processes used during the analysis of random dynamic processes in mechanical systems is described in detail. Examples are machines, instruments and structures loaded with perturbations. The reliability and lifetime of those objects depend on how properly these perturbations are taken into account. Random vibrations with finite and infinite numbers of degrees of freedom are analyzed as well as the theory and numerical methods of non-stationary processes under the conditions of statistical indeterminacy. This textbook is addressed to students and post-graduates of technical universities. It can also be useful to lecturers and mechanical engineers, including designers in different industries.
This book is a useful overview of results on multivariate probability distributions and multivariate analysis, as well as a reference on harmonic analysis on symmetric cones, adapted to the needs of researchers in analysis and probability theory.
This undergraduate text distils the wisdom of an experienced
teacher and yields, to the mutual advantage of students and their
instructors, a sound and stimulating introduction to probability
theory. The accent is on its essential role in statistical theory
and practice, built on the use of illustrative examples and the
solution of problems from typical examination papers.
Mathematically friendly for first- and second-year undergraduate
students, the book is also a reference source for workers in a wide
range of disciplines who are aware that even the simpler aspects of
probability theory are not simple.
This relevant and timely thesis presents the pioneering use of risk-based assessment tools to analyse the interaction between electrical and mechanical systems in mixed AC/DC power networks at subsynchronous frequencies. It also discusses assessing the effect of uncertainties in the mechanical parameters of a turbine generator on subsynchronous resonance (SSR) in a meshed network with both symmetrical and asymmetrical compensation systems. The research presented has resulted in 12 publications, including three papers in a top international journal (IEEE Transactions on Power Systems) and nine international conference publications, two of them award-winning papers.
This book provides a self-contained review of all the relevant topics in probability theory. A software package called MAXIM, which runs on MATLAB, is made available for downloading. Vidyadhar G. Kulkarni is Professor of Operations Research at the University of North Carolina at Chapel Hill.
The primary purpose of this textbook is to introduce the reader to a wide variety of elementary permutation statistical methods. Permutation methods are optimal for small data sets and non-random samples, and are free of distributional assumptions. The book follows the conventional structure of most introductory books on statistical methods, and features chapters on central tendency and variability, one-sample tests, two-sample tests, matched-pairs tests, one-way fully-randomized analysis of variance, one-way randomized-blocks analysis of variance, simple regression and correlation, and the analysis of contingency tables. In addition, it introduces and describes a comparatively new permutation-based, chance-corrected measure of effect size. Because permutation tests and measures are distribution-free, do not assume normality, and do not rely on squared deviations among sample values, they are currently being applied in a wide variety of disciplines. This book presents permutation alternatives to existing classical statistics, and is intended as a textbook for undergraduate statistics courses or graduate courses in the natural, social, and physical sciences, while assuming only an elementary grasp of statistics.
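A minimal two-sample permutation test, in the spirit of the methods the book covers (the data and code here are my own illustration, not the book's), builds the null distribution by reshuffling group labels, so no normality assumption is needed:

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.array([4.1, 5.0, 6.2, 5.5, 4.8])  # hypothetical group A
b = np.array([5.9, 6.4, 7.1, 6.8])       # hypothetical group B

observed = b.mean() - a.mean()
pooled = np.concatenate([a, b])

# Count permutations whose mean difference is at least as extreme.
count = 0
n_perm = 10000
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[a.size:].mean() - perm[:a.size].mean()
    if abs(diff) >= abs(observed):  # two-sided exceedance
        count += 1
print("p-value:", count / n_perm)
```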
Mathematically, natural disasters of all types are characterized by heavy-tailed distributions. The analysis of such distributions with common methods, such as averages and dispersions, can therefore lead to erroneous conclusions. The statistical methods described in this book avoid such pitfalls. Seismic disasters are studied primarily, thanks to the availability of an ample statistical database. New approaches are presented to seismic risk estimation and to forecasting the damage caused by earthquakes, ranging from typical, moderate events to very rare, extreme disasters. Analysis of these latter events is based on the limit theorems of probability and the duality of the generalized Pareto distribution and the generalized extreme value distribution. It is shown that the parameter most widely used to estimate seismic risk - Mmax, the maximum possible earthquake magnitude - is potentially non-robust. Robust analogues of this parameter are suggested and calculated for some seismic catalogues. Trends in the costs incurred through damage from natural disasters, as related to changing social and economic situations, are examined for different regions. The results obtained argue for sustainable development, whereas entirely different, incorrect conclusions can be drawn if the specific properties of heavy-tailed distributions and changes in the completeness of data on natural hazards are neglected. This pioneering work is directed at risk assessment specialists in general, seismologists, administrators and all those interested in natural disasters and their impact on society.
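The pitfall of averaging heavy-tailed data can be seen in a few lines. This is my own demonstration, assuming a Pareto law with tail index below 1 (infinite theoretical mean), not an analysis from the book:

```python
import numpy as np

rng = np.random.default_rng(4)
for n in (10**3, 10**5, 10**7):
    x = rng.pareto(a=0.8, size=n) + 1.0  # Pareto with a < 1: infinite mean
    print(n, x.mean())                   # the sample mean never settles down
```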
You may like...
Statistics for Management and Economics
Gerald Keller, Nicoleta Gaciu
Paperback
Integrated Population Biology and…
Arni S.R. Srinivasa Rao, C.R. Rao
Hardcover
R6,219
Discovery Miles 62 190
Statistics For Business And Economics
David Anderson, James Cochran, …
Paperback
Big Data Analytics and Information…
Farouk Nathoo, Ejaz Ahmed
Hardcover
Fundamentals of Social Research Methods
Claire Bless, Craig Higson-Smith, …
Paperback
Numbers, Hypotheses & Conclusions - A…
Colin Tredoux, Kevin Durrheim
Paperback