Rapid technological advances in devices used for data collection
have led to the emergence of a new class of longitudinal data:
intensive longitudinal data (ILD). Behavioral scientific studies
now frequently utilize handheld computers, beepers, web interfaces,
and other technological tools for collecting many more data points
over time than previously possible. Other protocols, such as those
used in fMRI and monitoring of public safety, also produce ILD,
hence the statistical models in this volume are applicable to a
range of data. The volume features state-of-the-art statistical
modeling strategies developed by leading statisticians and
methodologists working on ILD in conjunction with behavioral
scientists. Chapters present applications from across the
behavioral and health sciences, including coverage of substantive
topics such as stress, smoking cessation, alcohol use, traffic
patterns, educational performance and intimacy.
The primary purpose of this textbook is to introduce the reader to a wide variety of elementary permutation statistical methods. Permutation methods are optimal for small data sets and non-random samples, and are free of distributional assumptions. The book follows the conventional structure of most introductory books on statistical methods, and features chapters on central tendency and variability, one-sample tests, two-sample tests, matched-pairs tests, one-way fully-randomized analysis of variance, one-way randomized-blocks analysis of variance, simple regression and correlation, and the analysis of contingency tables. In addition, it introduces and describes a comparatively new permutation-based, chance-corrected measure of effect size. Because permutation tests and measures are distribution-free, do not assume normality, and do not rely on squared deviations among sample values, they are currently being applied in a wide variety of disciplines. This book presents permutation alternatives to existing classical statistics, and is intended as a textbook for undergraduate statistics courses or graduate courses in the natural, social, and physical sciences, while assuming only an elementary grasp of statistics.
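To make the permutation idea concrete, here is a minimal sketch of an exact two-sample permutation test in Python. The function name and the tiny samples are illustrative assumptions, not material from the book: the test enumerates every split of the pooled data and needs no distributional assumptions.

```python
from itertools import combinations
from statistics import mean

def permutation_test(sample_a, sample_b):
    """Exact two-sample permutation test on the absolute difference of means.

    Enumerates every split of the pooled observations into groups of the
    original sizes and returns the two-sided p-value: the proportion of
    splits whose |mean difference| is at least the observed one.
    """
    pooled = sample_a + sample_b
    n_a = len(sample_a)
    observed = abs(mean(sample_a) - mean(sample_b))
    extreme = 0
    total = 0
    for idx in combinations(range(len(pooled)), n_a):
        chosen = set(idx)
        group_a = [pooled[i] for i in idx]
        group_b = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        if abs(mean(group_a) - mean(group_b)) >= observed:
            extreme += 1
        total += 1
    return extreme / total

p = permutation_test([1, 2, 3], [4, 5, 6])  # -> 0.1 (2 of 20 splits are as extreme)
```

Because every split is enumerated rather than sampled, the p-value is exact; this is feasible for the small samples for which, as the blurb notes, permutation methods are optimal.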
This volume investigates the notion of reduction. Building on the idea that philosophers employ the term 'reduction' to reconcile diversity and directionality with unity, without relying on elimination, the book offers a powerful explication of an "ontological" notion of reduction, the extension of which is (primarily) formed by properties, kinds, individuals, or processes. It argues that related notions of reduction, such as theory-reduction and functional reduction, should be defined in terms of this explication. Thereby, the book offers a coherent framework, which sheds light on the history of the various reduction debates in the philosophy of science and in the philosophy of mind, and on related topics such as reduction and unification, the notion of a scientific level, and physicalism. The book takes its point of departure in the examination of a puzzle about reduction. To illustrate, the book takes as an example the reduction of water. If water reduces to H2O, then water is identical to H2O - thus we get unity. Unity does not come at the price of elimination - claiming that water reduces to H2O, we do not thereby claim that there is no water. But what about diversity and directionality? Intuitively, there should be a difference between water and H2O, such that we get diversity. This is required for there to be directionality: in a sense, if water reduces to H2O, then H2O is prior to, or more basic than, water. At least, if water reduces to H2O, then H2O does not reduce to water. But how can this be, if water is identical to H2O? The book shows that the application of current models of reduction does not solve this puzzle, and proposes a new coherent definition, according to which unity is tied to identity, diversity is descriptive in nature, and directionality is the directionality of explanation.
One of the main aims of this book is to exhibit some fruitful links between renewal theory and regular variation of functions. Applications of renewal processes play a key role in actuarial and financial mathematics as well as in engineering, operations research and other fields of applied mathematics. On the other hand, regular variation of functions is a property that features prominently in many fields of mathematics. The structure of the book reflects the historical development of the authors' research work and approach - first some applications are discussed, after which a basic theory is created, and finally further applications are provided. The authors present a generalized and unified approach to the asymptotic behavior of renewal processes, involving cases of dependent inter-arrival times. This method works for other important functionals as well, such as first and last exit times or sojourn times (also under dependencies), and it can be used to solve several other problems. For example, various applications in functional analysis concerning Abelian and Tauberian theorems can be studied, as well as applications to the asymptotic behavior of solutions of stochastic differential equations. The classes of functions that are investigated and used in a probabilistic context extend the well-known Karamata theory of regularly varying functions and thus are also of interest in the theory of functions. The book provides a rigorous treatment of the subject and may serve as an introduction to the field. It is aimed at researchers and students working in probability, the theory of stochastic processes, operations research, mathematical statistics, the theory of functions, analytic number theory and complex analysis, as well as economists with a mathematical background. Readers should have completed introductory courses in analysis and probability theory.
This volume features selected contributions on a variety of topics related to linear statistical inference. The peer-reviewed papers from the International Conference on Trends and Perspectives in Linear Statistical Inference (LinStat 2016) held in Istanbul, Turkey, 22-25 August 2016, cover topics in both theoretical and applied statistics, such as linear models, high-dimensional statistics, computational statistics, the design of experiments, and multivariate analysis. The book is intended for statisticians, Ph.D. students, and professionals who are interested in statistical inference.
This book was written for those who need to know how to collect, analyze and present data. It is meant to be a first course for practitioners, a book for private study or a brush-up on statistics, and supplementary reading for general statistics classes. The book is untraditional, both with respect to the choice of topics and the presentation: topics were determined by what is most useful for practical statistical work, and the presentation is as non-mathematical as possible. The book contains many examples using statistical functions in spreadsheets. In this second edition, new topics have been included, e.g. within the area of statistical quality control, in order to make the book even more useful for practitioners working in industry.
Take your first steps into learning statistics, and understand the fascinating science of analysing data. Statistics: The Art and Science of Learning from Data, Global Edition, 5th edition by Agresti, Franklin, and Klingenberg is the ideal introduction to the discipline that will familiarise you with the world of statistics and data analysis. Ideal for students who study introductory courses in statistics, this text takes a conceptual approach and will encourage you to learn how to analyse data the right way by enquiring and searching for the right questions and information rather than just memorising procedures. Enjoyable and accessible, yet informative and without compromising the necessary rigour, this edition will help you engage with the science in modern life, delivering a learning experience that is effective in statistical thinking and practice. Key features include: greater attention to the analysis of proportions than in other introductory statistics texts; an introduction to key concepts that presents categorical data first and quantitative data after; a wide variety of real-world data in the examples and exercises; and new sections and updated content that will enhance your learning and understanding. Pearson MyLab(R): Students, if Pearson MyLab Statistics is a recommended/mandatory component of the course, please ask your instructor for the correct ISBN. Pearson MyLab Statistics should only be purchased when required by an instructor. Instructors, contact your Pearson representative for more information. This title is a Pearson Global Edition. The Editorial team at Pearson has worked closely with educators around the world to include content which is especially relevant to students outside the United States.
This book is a useful overview of results in multivariate probability distributions and multivariate analysis as well as a reference to harmonic analysis on symmetric cones adapted to the needs of researchers in analysis and probability theory.
This is a practical guide to solutions for forecasting demand for services and products in international markets - and much more than just a listing of dry theoretical methods. Leading experts present studies on improving methods for forecasting numbers of incoming patent filings at the European Patent Office. These are reviewed by practitioners of the existing methods, revealing that it may not always be wise to trust established regression approaches.
Mean field approximation has been adopted to describe macroscopic phenomena from microscopic viewpoints, and its use continues to spread across fluid mechanics, gauge theory, plasma physics, quantum chemistry, mathematical oncology, and non-equilibrium thermodynamics. In spite of such a wide range of scientific areas that are concerned with the mean field theory, a unified study of its mathematical structure has not been discussed explicitly in the open literature. The benefit of this point of view on nonlinear problems should have significant impact on future research, as will be seen from the underlying features of self-assembly or bottom-up self-organization, which are illustrated here in a unified way. The aim of this book is to formulate the variational and hierarchical aspects of the equations that arise in the mean field theory from macroscopic profiles to microscopic principles, from dynamics to equilibrium, and from biological models to models that arise from chemistry and physics.
Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivist or frequentist counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by ISBrA, the Brazilian chapter of the International Society for Bayesian Analysis (ISBA) and one of its most active chapters. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in the foundations of inductive statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives, and scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion of the foundations work towards the goal of proper application of Bayesian methods by the scientific community. Individual papers range in focus from posterior distributions for non-dominated models, to combining optimization and randomization approaches for the design of clinical trials, to the classification of archaeological fragments with Bayesian networks.
Advances in Growth Curve Models: Topics from the Indian Statistical Institute is developed from the Indian Statistical Institute's National Conference on Growth Curve Models, which took place March 28-30, 2012 in Giridih, a tribal area in Jharkhand, India. The volume shares the work of researchers in growth models used in multiple fields. A growth curve is an empirical model of the evolution of a quantity over time. Case studies and theoretical findings, with important applications in everything from health care to population projection, form the basis of this volume. Growth curves in longitudinal studies are widely used in many disciplines, including biology, population studies, economics, the biological sciences, statistical quality control, sociology, nano-biotechnology, and fluid mechanics. Some included reports are research topics that have just been developed, whereas others present advances in the existing literature. The tools and techniques included will assist students and researchers in their future work. Also included is a discussion of future applications of growth curve models.
The last twenty years have witnessed a significant growth of interest in optimal factorial designs, under possible model uncertainty, via the minimum aberration and related criteria. This book gives, for the first time in book form, a comprehensive and up-to-date account of this modern theory. Many major classes of designs are covered in the book. While maintaining a high level of mathematical rigor, it also provides extensive design tables for research and practical purposes. Apart from being useful to researchers and practitioners, the book can form the core of a graduate level course in experimental design.
My first encounter with the world of crime and punishment was more than two decades ago, and it has since undergone vast changes. No one could have foreseen that crime-related problems would occupy such a prominent position in cultural awareness. Crime is on the rise, the public attention devoted to it has increased even more, and its political importance has mushroomed. The major change in the 1990s was perhaps the transformation of crime into a safety issue. Crime is no longer a matter involving offenders, victims, the police and the courts; it involves everyone and any number of agencies and institutions, from security companies to the local authorities and from schools to pub and restaurant owners. Crime has become a much larger complex than the judicial system, a complex organized mentally and institutionally around this one concept of safety. In this book I make an effort to get to the bottom of this complex. It is the sequel to my dissertation Crime and Morality: The Moral Significance of Criminal Justice in a Postmodern Culture (2000), where I hold that the victim became the essence of crime in Western culture, and that this in turn shaped public morality. In the second half of the twentieth century, a personal morality based on an awareness of our own and other people's vulnerability, i.e. potential victimhood, succeeded the ethics of duty.
Markov random field (MRF) theory provides a basis for modeling contextual constraints in visual processing and interpretation. It enables us to develop optimal vision algorithms systematically when used with optimization principles. This book presents a comprehensive study on the use of MRFs for solving computer vision problems. Various vision models are presented in a unified framework, including image restoration and reconstruction, edge and region segmentation, texture, stereo and motion, object matching and recognition, and pose estimation. This third edition includes the most recent advances and has new and expanded sections on topics such as: Bayesian Network; Discriminative Random Fields; Strong Random Fields; Spatial-Temporal Models; Learning MRF for Classification. This book is an excellent reference for researchers working in computer vision, image processing, statistical pattern recognition and applications of MRFs. It is also suitable as a text for advanced courses in these areas.
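To give a flavour of how an MRF energy is optimized in a vision setting, here is a minimal sketch of iterated conditional modes (ICM) denoising a binary image under a simple Ising-like energy. The function name, weights, and the data/smoothness terms are illustrative assumptions, not the book's notation; ICM is only one of the optimization principles such models admit:

```python
def icm_denoise(noisy, data_weight=1.0, smooth_weight=0.5, sweeps=5):
    """Iterated conditional modes for a binary MRF image model.

    Local energy of assigning label v to pixel (i, j):
      data_weight   * [v differs from the observed pixel]
    + smooth_weight * (number of 4-neighbours with a different label).
    Each sweep greedily sets every pixel to the label minimising this
    energy, which monotonically decreases the global energy.
    """
    rows, cols = len(noisy), len(noisy[0])
    labels = [row[:] for row in noisy]  # start from the observation
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                best_v, best_e = labels[i][j], float("inf")
                for v in (0, 1):
                    e = data_weight * (v != noisy[i][j])
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols:
                            e += smooth_weight * (v != labels[ni][nj])
                    if e < best_e:
                        best_v, best_e = v, e
                labels[i][j] = best_v
    return labels
```

With these weights, an isolated flipped pixel inside a uniform region costs more to keep (four disagreeing neighbours) than to revert (one data-term penalty), so the smoothness prior removes it.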
This volume describes how to develop Bayesian thinking, modelling and computation from philosophical, methodological and applied points of view. It further describes parametric and nonparametric Bayesian methods for modelling and how to use modern computational methods to summarize inferences using simulation. The book covers a wide range of topics, including objective and subjective Bayesian inference, with a variety of applications in modelling categorical, survival, spatial, spatiotemporal, epidemiological, software reliability, small area and microarray data. The book concludes with a chapter on how to teach Bayesian thinking to non-statisticians.
The Minimum Message Length (MML) Principle is an information-theoretic approach to induction, hypothesis testing, model selection, and statistical inference. MML, which provides a formal specification for the implementation of Occam's Razor, asserts that the 'best' explanation of observed data is the shortest. Further, an explanation is acceptable (i.e. the induction is justified) only if the explanation is shorter than the original data. This book gives a sound introduction to the Minimum Message Length Principle and its applications, provides the theoretical arguments for the adoption of the principle, and shows the development of certain approximations that assist its practical application. MML appears also to provide both a normative and a descriptive basis for inductive reasoning generally, and scientific induction in particular. The book describes this basis and aims to show its relevance to the Philosophy of Science. Statistical and Inductive Inference by Minimum Message Length will be of special interest to graduate students and researchers in Machine Learning and Data Mining, scientists and analysts in various disciplines wishing to make use of computer techniques for hypothesis discovery, statisticians and econometricians interested in the underlying theory of their discipline, and persons interested in the Philosophy of Science. The book could also be used in a graduate-level course in Machine Learning and Estimation and Model-selection, Econometrics and Data Mining. "Any statistician interested in the foundations of the discipline, or the deeper philosophical issues of inference, will find this volume a rewarding read." Short Book Reviews of the International Statistical Institute, December 2005
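The two-part message idea behind MML can be sketched in a few lines. The toy below (an illustrative assumption, not the book's formal treatment) selects the Bernoulli parameter for a binary sequence that minimises the total message length: the bits needed to state which hypothesis is asserted, plus the bits needed to encode the data under that hypothesis:

```python
import math

def mml_choose(data, candidates):
    """Pick the candidate parameter with the shortest two-part message.

    First part: which candidate was asserted, log2(len(candidates)) bits.
    Second part: the binary data encoded under that candidate's Bernoulli
    model, -sum(log2 P(x)) bits. Returns (parameter, length in bits).
    """
    best = None
    assertion_cost = math.log2(len(candidates))
    for p in candidates:
        data_cost = -sum(math.log2(p if x == 1 else 1 - p) for x in data)
        total = assertion_cost + data_cost
        if best is None or total < best[1]:
            best = (p, total)
    return best

# A sequence with seven 1s in ten trials is encoded most briefly by p = 0.7.
param, bits = mml_choose([1, 1, 1, 0, 1, 1, 0, 1, 0, 1],
                         [i / 10 for i in range(1, 10)])
```

A richer model (more candidates) lowers the data cost but raises the assertion cost, which is exactly the Occam trade-off the principle formalises.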
This book provides a fresh approach to reliability theory, an area that has gained increasing relevance in fields from statistics and engineering to demography and insurance. Its innovative use of quantile functions gives an analysis of lifetime data that is generally simpler, more robust, and more accurate than the traditional methods, and opens the door for further research in a wide variety of fields involving statistical analysis. In addition, the book can be used to good effect in the classroom as a text for advanced undergraduate and graduate courses in Reliability and Statistics.
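To illustrate why working with quantile functions is convenient for lifetime data, here is a small sketch for the exponential distribution (the function name is a hypothetical choice for illustration, not the book's API): the quantile function is the inverse of the distribution function, so quantities such as the median lifetime come out directly.

```python
import math

def exponential_quantile(u, rate):
    """Quantile function Q(u) = -ln(1 - u) / rate of the exponential law.

    Q inverts the distribution function F(x) = 1 - exp(-rate * x), so
    Q(0.5, rate) is the median lifetime and Q(0.9, rate) the 90th
    percentile, with no root-finding required.
    """
    if not 0 <= u < 1:
        raise ValueError("u must lie in [0, 1)")
    return -math.log(1.0 - u) / rate

median_lifetime = exponential_quantile(0.5, 0.5)  # ln(2) / 0.5
```

The same inverse also drives inverse-transform sampling: feeding uniform random numbers through Q produces exponential lifetimes.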
This book reports a literature review on kaizen: its industrial applications, critical success factors, benefits gained, the journals that publish about it, and its main authors (research groups) and universities. Kaizen is treated in this book in three stages: planning, implementation and control. The authors provide a questionnaire designed with activities in every stage, highlighting the benefits gained in each stage. The study has been applied to more than 400 managers and leaders in continuous improvement in Mexican maquiladoras. A univariate analysis is provided for the activities in every stage. Moreover, structural equation models associating those activities with the benefits gained are presented for statistical validation. Such a relationship between activities and benefits helps managers to identify the most important factors affecting their benefits and financial income.
This book focuses on dealing with large-scale data, a field
commonly referred to as data mining. The book is divided into three
sections. The first deals with an introduction to statistical
aspects of data mining and machine learning and includes
applications to text analysis, computer intrusion detection, and
hiding of information in digital files. The second section focuses
on a variety of statistical methodologies that have proven to be
effective in data mining applications. These include clustering,
classification, multivariate density estimation, tree-based
methods, pattern recognition, outlier detection, genetic
algorithms, and dimensionality reduction. The third section focuses
on data visualization and covers issues of visualization of
high-dimensional data, novel graphical techniques with a focus on
human factors, interactive graphics, and data visualization using
virtual reality. This book represents a thorough cross section of
internationally renowned thinkers who are inventing methods for
dealing with a new data paradigm.
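As a small taste of the clustering methods surveyed in the second section, here is a plain-Python sketch of Lloyd's k-means algorithm. The function name, the fixed iteration count, and the first-k-points initialisation are illustrative simplifications, not taken from the book:

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm for k-means clustering of n-dimensional points.

    Repeatedly assigns each point to its nearest centroid, then moves
    each centroid to the mean of its assigned points. Initial centroids
    are the first k points (a real implementation would randomise this).
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[nearest].append(p)
        for c in range(k):
            if clusters[c]:  # leave an empty cluster's centroid in place
                dim = len(centroids[c])
                centroids[c] = [
                    sum(p[d] for p in clusters[c]) / len(clusters[c])
                    for d in range(dim)
                ]
    return centroids
```

On two well-separated one-dimensional groups the centroids settle on the group means within a couple of iterations, which is the behaviour the assignment/update alternation is designed to produce.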
Elements of Large Sample Theory provides a unified treatment of first-order large-sample theory. It discusses a broad range of applications, including introductions to density estimation, the bootstrap, and the asymptotics of survey methodology, written at an elementary level. The book is suitable for students at the Master's level in statistics and in applied fields who have a background of two years of calculus. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands, and the University of Chicago. Also available: E.L. Lehmann and George Casella, Theory of Point Estimation, Second Edition. Springer-Verlag New York, Inc., 1998, 640 pp., Cloth, ISBN 0-387-98502-6. E.L. Lehmann, Testing Statistical Hypotheses, Second Edition. Springer-Verlag New York, Inc., 1997, 624 pp., Cloth, ISBN 0-387-94919-4.
The development of Operations Research (OR) requires constant improvements, such as the integration of research results with business applications and innovative educational practice. The full deployment and commercial exploitation of goods and services generally need the construction of strong synergies between educational institutions and businesses. IO2015, the XVII Congress of APDIO, aims at strengthening the knowledge triangle of education, research and innovation, in order to maximize the contribution of OR to sustainable growth, the promotion of a knowledge-based economy, and the smart use of finite resources. The congress is a privileged meeting point for the promotion and dissemination of OR and related disciplines, through the exchange of ideas among teachers, researchers, students and professionals with different backgrounds, all sharing a common desire: the development of OR.
Selected papers submitted by participants of the international conference "Stochastic Analysis and Applied Probability 2010" (www.saap2010.org) make up the basis of this volume. SAAP 2010 was held in Tunisia from 7 to 9 October 2010, and was organized by the "Applied Mathematics & Mathematical Physics" research unit of the preparatory institute to the military academies of Sousse (Tunisia), chaired by Mounir Zili. The papers cover theoretical, numerical and applied aspects of stochastic processes and stochastic differential equations. The study of such topics is motivated in part by the need to model, understand, forecast and control the behavior of many natural phenomena that evolve in time in a random way. Such phenomena appear in the fields of finance, telecommunications, economics, biology, geology, demography, physics, chemistry, signal processing and modern control theory, to mention just a few. As this book emphasizes the importance of numerical and theoretical studies of stochastic differential equations and stochastic processes, it will be useful for a wide spectrum of researchers in applied probability, stochastic numerical and theoretical analysis and statistics, as well as for graduate students. To make it more complete and accessible for graduate students, practitioners and researchers, the editors Mounir Zili and Daria Filatova have included a survey dedicated to the basic concepts of numerical analysis of stochastic differential equations, written by Henri Schurz.
This undergraduate text distils the wisdom of an experienced
teacher and yields, to the mutual advantage of students and their
instructors, a sound and stimulating introduction to probability
theory. The accent is on its essential role in statistical theory
and practice, built on the use of illustrative examples and the
solution of problems from typical examination papers.
Mathematically-friendly for first and second year undergraduate
students, the book is also a reference source for workers in a wide
range of disciplines who are aware that even the simpler aspects of
probability theory are not simple.