This book is an undergraduate text that introduces students to commonly used statistical methods in economics. Using examples based on contemporary economic issues and readily available data, it not only explains the mechanics of the various methods, but also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.
Stochastic differential equations have become increasingly important in modelling complex systems in physics, chemistry, biology, climatology and other fields. This book examines methods that practitioners can use and provides a number of case studies showing how they work in practice.
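As a minimal illustration of how such equations are simulated in practice, here is a sketch of the Euler-Maruyama scheme applied to an Ornstein-Uhlenbeck process (an example of ours, not taken from the book; all parameter values are assumptions):

```python
import numpy as np

def euler_maruyama(x0, theta, mu, sigma, T, n, seed=0):
    """Simulate dX = theta*(mu - X) dt + sigma dW by Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * dw
    return x

# mean-reverting path from x0 = 1 toward mu = 0
path = euler_maruyama(x0=1.0, theta=2.0, mu=0.0, sigma=0.3, T=5.0, n=1000)
```

The same loop structure carries over to other drift and diffusion terms; only the update line changes.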
- Provides a logical framework for considering and evaluating standard setting procedures
- Covers formal development of a psychometric theory for standard setting
- Develops a logical argument for evaluation procedures for standard setting processes
- Contains detailed analyses of several standard setting methods
- Includes problem sets at the ends of chapters that focus on common problems with standard setting methods
For almost two decades this has been the classical textbook on applications of operator algebra theory to quantum statistical physics. It describes the general structure of equilibrium states, the KMS condition and stability, quantum spin systems and continuous systems. Major changes in the new edition relate to Bose-Einstein condensation, the dynamics of the X-Y model and questions on phase transitions. Notes and remarks have been considerably augmented.
This book is different from other books on measure theory in that it accepts probability theory as an essential part of measure theory. This means that many examples are taken from probability; that probabilistic concepts such as independence, Markov processes, and conditional expectations are integrated into the text rather than being relegated to an appendix; that more attention is paid to the role of algebras than is customary; and that the metric defining the distance between sets as the measure of their symmetric difference is exploited more than is customary.
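The metric mentioned in the last clause can be written out explicitly; a brief sketch in our own notation (not necessarily the book's): for measurable sets $A$ and $B$ in a measure space $(X, \mathcal{A}, \mu)$,

```latex
d(A, B) \;=\; \mu(A \,\triangle\, B) \;=\; \mu\bigl((A \setminus B) \cup (B \setminus A)\bigr)
```

Strictly speaking this is a pseudometric, since $d(A,B) = 0$ whenever $A$ and $B$ differ only by a null set; identifying such sets yields a genuine metric.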
Develops insights into solving complex problems in engineering, biomedical sciences, social science and economics based on artificial intelligence. Some of the problems studied are in interstate conflict, credit scoring, breast cancer diagnosis, condition monitoring, wine testing, image processing and optical character recognition. The author discusses and applies the concept of flexibly-bounded rationality, which prescribes that the bounds in Nobel Laureate Herbert Simon's bounded rationality theory are flexible due to advanced signal processing techniques, Moore's Law and artificial intelligence. Artificial Intelligence Techniques for Rational Decision Making examines and defines the concepts of causal and correlation machines and applies the transmission theory of causality as a defining factor that distinguishes causality from correlation. It develops the theory of rational counterfactuals, which are defined as counterfactuals intended to maximize the attainment of a particular goal within the context of a bounded rational decision making process. Furthermore, it studies four methods for dealing with irrelevant information in decision making:
- the theory of the marginalization of irrelevant information
- principal component analysis
- independent component analysis
- the automatic relevance determination method
In addition it studies the concept of group decision making and various ways of effecting group decision making within the context of artificial intelligence. Rich in methods of artificial intelligence including rough sets, neural networks, support vector machines, genetic algorithms, particle swarm optimization, simulated annealing, incremental learning and fuzzy networks, this book will be welcomed by researchers and students working in these areas.
Rubinstein is the pioneer of the well-known score function and cross-entropy methods. The book is accessible to a broad audience of engineers, computer scientists, mathematicians and statisticians, and in general to anyone, theorist or practitioner, who is interested in smart simulation, fast optimization, learning algorithms, and image processing.
This monograph is an attempt to unify existing works in the field of random sets, random variables, and linguistic random variables with respect to statistical analysis. It is intended to be a tutorial research compendium. The material of the work is mainly based on the postdoctoral thesis (Habilitationsschrift) of the first author and on several papers recently published by both authors. The methods form the basis of a user-friendly software tool which supports statistical inference in the presence of vague data. Parts of the manuscript have been used in courses for graduate-level students of mathematics and computer sciences held by the first author at the Technical University of Braunschweig. The textbook is designed for readers with an advanced knowledge of mathematics. The idea of writing this book came from Professor Dr. H. Skala. Several of our students have significantly contributed to its preparation. We would like to express our gratitude to Reinhard Elsner for his support in typesetting the book, Jörg Gebhardt and Jörg Knop for preparing the drawings, Michael Eike and Jürgen Freckmann for implementing the programming system and Günter Lehmann and Winfried Boer for proofreading the manuscript. This work was partially supported by the Fraunhofer-Gesellschaft. We are indebted to D. Reidel Publishing Company for making the publication of this book possible and would especially like to acknowledge the support which we received from our families on this project.
Newcomers to R are often intimidated by the command-line interface, the vast number of functions and packages, or the processes of importing data and performing a simple statistical analysis. The R Primer provides a collection of concise examples and solutions to R problems frequently encountered by new users of this statistical software. Rather than explore the many options available for every command as well as the ever-increasing number of packages, the book focuses on the basics of data preparation and analysis and gives examples that can be used as a starting point. The numerous examples illustrate a specific situation, topic, or problem, including data importing, data management, classical statistical analyses, and high-quality graphics production. Each example is self-contained and includes R code that can be run exactly as shown, enabling results from the book to be replicated. While base R is used throughout, other functions or packages are listed if they cover or extend the functionality. After working through the examples found in this text, new users of R will be able to better handle data analysis and graphics applications in R. Additional topics and R code are available from the book's supporting website at www.statistics.life.ku.dk/primer/
This book originated from our interest in sea surface temperature variability. Our initial, though entirely pragmatic, goal was to derive adequate mathematical tools for handling certain oceanographic problems. Eventually, however, these considerations went far beyond oceanographic applications, partly because one of the authors is a mathematician. We found that many theoretical issues of turbulent transport problems had been repeatedly discussed in fields of hydrodynamics, plasma and solid matter physics, and mathematics itself. There are few monographs concerned with turbulent diffusion in the ocean (Csanady 1973, Okubo 1980, Monin and Ozmidov 1988). While selecting material for this book we focused, first, on theoretical issues that could be helpful for understanding mixture processes in the ocean, and, second, on our own contribution to the problem. Mathematically all of the issues addressed in this book are concentrated around a single linear equation: the stochastic advection-diffusion equation. There is no attempt to derive universal statistics for turbulent flow. Instead, the focus is on a statistical description of a passive scalar (tracer) under given velocity statistics. As for applications, this book addresses only one phenomenon: transport of sea surface temperature anomalies. Hopefully, however, our two main approaches are applicable to other subjects.
The book is a collection of research level surveys on certain topics in probability theory, which will be of interest to graduate students and researchers.
Filling the need for a comprehensive guide on the subject, Applied Time Series Analysis for the Social Sciences presents time series analysis in an accessible format designed to appeal to students and professional researchers with little mathematical or statistical background. With a focus on social-science applications and a mix of theory, illustrated with detailed case studies throughout, the text examines various uses and interpretations of lagged dependent variables and common confusions in this area. A website with data sets and examples in Stata and R accompanies the text.
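As a minimal sketch of what a lagged-dependent-variable regression looks like (our own illustration, not the book's material; the true parameter values are assumptions chosen for the demo):

```python
import numpy as np

# simulate a series that depends on its own lag:
# y_t = a + rho * y_{t-1} + e_t
rng = np.random.default_rng(0)
a_true, rho_true, n = 1.0, 0.5, 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true + rho_true * y[t - 1] + rng.normal(0.0, 0.2)

# regress y_t on a constant and y_{t-1} by least squares
Y, Ylag = y[1:], y[:-1]
Xmat = np.column_stack([np.ones(n - 1), Ylag])
(a_hat, rho_hat), *_ = np.linalg.lstsq(Xmat, Y, rcond=None)
```

With a long enough series the estimated intercept and lag coefficient land close to the values used in the simulation.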
"Data Analysis with SPSS" is designed to teach students how to explore data in a systematic manner using the most popular professional social statistics program on the market today. Written in ten manageable chapters, this book first introduces students to the approach researchers use to frame research questions and the logic of establishing causal relations. Students are then oriented to the SPSS program and how to examine data sets. Subsequent chapters guide them through univariate analysis, bivariate analysis, graphic analysis, and multivariate analysis. Students conclude their course by learning how to write a research report and by engaging in their own research project. Each book is packaged with a disk containing the GSS (General Social Survey) file and the States data files. The GSS file contains 100 variables generated from interviews with 2,900 people, concerning their behaviors and attitudes on a wide variety of issues such as abortion, religion, prejudice, sexuality, and politics. The States data allow comparison of all 50 states on 400 variables indicating issues such as unemployment, environment, criminality, population, and education. Students will ultimately use these data to conduct their own independent research project with SPSS. Note: MySearchLab does not come automatically packaged with this text. To purchase MySearchLab, please visit www.mysearchlab.com, or you can purchase a ValuePack of the text + MySearchLab with Pearson eText (at no additional cost). ValuePack ISBN-10: 0205863728 / ValuePack ISBN-13: 9780205863723
The contributions by leading experts in this book focus on a variety of topics of current interest related to information-based complexity, ranging from function approximation, numerical integration, numerical methods for the sphere, and algorithms with random information, to Bayesian probabilistic numerical methods and numerical methods for stochastic differential equations.
This book has a dual purpose. One of these is to present material which selectively will be appropriate for a quarter or semester course in time series analysis and which will cover both the finite parameter and spectral approach. The second object is the presentation of topics of current research interest and some open questions. I mention these now. In particular, there is a discussion in Chapter III of the types of limit theorems that will imply asymptotic normality for covariance estimates and smoothings of the periodogram. This discussion allows one to get results, in Chapter IV, on the asymptotic distribution of finite parameter estimates that are broader than those usually given in the literature. A derivation of the asymptotic distribution for spectral (second order) estimates is given under an assumption of strong mixing in Chapter V. A discussion of higher order cumulant spectra and their large sample properties under appropriate moment conditions follows in Chapter VI. Probability density, conditional probability density and regression estimates are considered in Chapter VII under conditions of short range dependence. Chapter VIII deals with a number of topics. At first, estimates for the structure function of a large class of non-Gaussian linear processes are constructed. One can determine much more about this structure or transfer function in the non-Gaussian case than one can for Gaussian processes. In particular, one can determine almost all the phase information.
Maximum entropy and Bayesian methods have fundamental, central roles in scientific inference, and, with the growing availability of computer power, are being successfully applied in an increasing number of applications in many disciplines. This volume contains selected papers presented at the Thirteenth International Workshop on Maximum Entropy and Bayesian Methods. It includes an extensive tutorial section, and a variety of contributions detailing applications in the physical sciences, engineering, law, and economics. Audience: researchers and other professionals whose work requires the application of practical statistical inference.
Principal component analysis is central to the study of multivariate data. Although one of the earliest multivariate techniques, it continues to be the subject of much research, ranging from new model-based approaches to algorithmic ideas from neural networks. It is extremely versatile, with applications in many disciplines. The first edition of this book was the first comprehensive text written solely on principal component analysis. The second edition updates and substantially expands the original version, and is once again the definitive text on the subject. It includes core material, current research and a wide range of applications. Its length is nearly double that of the first edition. Researchers in statistics, or in other fields that use principal component analysis, will find that the book gives an authoritative yet accessible account of the subject. It is also a valuable resource for graduate courses in multivariate analysis. The book requires some knowledge of matrix algebra. Ian Jolliffe is Professor of Statistics at the University of Aberdeen. He is author or co-author of over 60 research papers and three other books. His research interests are broad, but aspects of principal component analysis have fascinated him and kept him busy for over 30 years.
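As a minimal sketch of the technique itself (a generic illustration of ours, not material from the book), the principal components can be obtained from the eigen-decomposition of the sample covariance matrix:

```python
import numpy as np

def pca(X, k):
    """Return the top-k principal component loadings and scores of X.

    X: (n_samples, n_features) data matrix.
    """
    Xc = X - X.mean(axis=0)                 # centre each variable
    cov = np.cov(Xc, rowvar=False)          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]   # indices of the k largest
    components = eigvecs[:, order]          # loading vectors (columns)
    scores = Xc @ components                # data projected onto components
    return components, scores

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
comps, scores = pca(X, k=2)
```

In practice one would usually rely on a library routine (for example via the singular value decomposition, which is numerically preferable), but the covariance-eigenvector route above is the textbook definition.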
Small noise is a good noise. In this work, we are interested in the problems of estimation theory concerned with observations of the diffusion-type process

dX_t = S_t(X) dt + ε dW_t,  X_0 = x_0,  0 ≤ t ≤ T,  (0.1)

where W is a standard Wiener process and S_t(·) is some nonanticipative smooth function. By the observations X = {X_t, 0 ≤ t ≤ T} of this process, we will solve some of the problems of identification, both parametric and nonparametric. If the trend S(·) is known up to the value of some finite-dimensional parameter, S_t(X) = S_t(θ, X), where θ ∈ Θ ⊂ R^d, then we have a parametric case. The nonparametric problems arise if we know only the degree of smoothness of the function S_t(X), 0 ≤ t ≤ T, with respect to time t. It is supposed that the diffusion coefficient ε is always known. In the parametric case, we describe the asymptotic properties of maximum likelihood (MLE), Bayes (BE) and minimum distance (MDE) estimators as ε → 0, and in the nonparametric situation, we investigate some kernel-type estimators of unknown functions (say, S_t(·), 0 ≤ t ≤ T). The asymptotics in such problems of estimation for this scheme of observations were usually considered as T → ∞, because this limit is a direct analog of the traditional limit (n → ∞) in the classical mathematical statistics of i.i.d. observations. The limit ε → 0 in (0.1) is interesting for the following reasons.
A self-contained treatment of stochastic processes arising from models for queues, insurance risk, and dams and data communication, using their sample function properties. The approach is based on the fluctuation theory of random walks, Lévy processes, and Markov-additive processes, in which Wiener-Hopf factorisation plays a central role. This second edition includes results for the virtual waiting time and queue length in single server queues, while the treatment of continuous time storage processes is thoroughly revised and simplified. With its prerequisite of a graduate-level course in probability and stochastic processes, this book can be used as a text for an advanced course on applied probability models.
This volume has its origin in the Seventeenth International Workshop on Maximum Entropy and Bayesian Methods, MAXENT 97. The workshop was held at Boise State University in Boise, Idaho, on August 4-8, 1997. As in the past, the purpose of the workshop was to bring together researchers in different fields to present papers on applications of Bayesian methods (these include maximum entropy) in science, engineering, medicine, economics, and many other disciplines. Thanks to significant theoretical advances and the personal computer, much progress has been made since our first workshop in 1981. As indicated by several papers in these proceedings, the subject has matured to a stage in which computational algorithms are the objects of interest, the thrust being on feasibility, efficiency and innovation. Though applications are proliferating at a staggering rate, some in areas that hardly existed a decade ago, it is pleasing that due attention is still being paid to foundations of the subject. The following list of descriptors, applicable to papers in this volume, gives a sense of its contents: deconvolution, inverse problems, instrument (point-spread) function, model comparison, multi-sensor data fusion, image processing, tomography, reconstruction, deformable models, pattern recognition, classification and group analysis, segmentation/edge detection, brain shape, marginalization, algorithms, complexity, Ockham's razor as an inference tool, foundations of probability theory, symmetry, history of probability theory and computability. MAXENT 97 and these proceedings could not have been brought to final form without the support and help of a number of people.
This book introduces the basic concepts and methods that are useful in the statistical analysis and modeling of the DNA-based marker and phenotypic data that arise in agriculture, forestry, experimental biology, and other fields. It concentrates on the linkage analysis of markers, map construction and quantitative trait locus (QTL) mapping, and assumes a background in regression analysis and maximum likelihood approaches. The strength of this book lies in the construction of general models and algorithms for linkage analysis, as well as in QTL mapping in any kind of crossed pedigrees initiated with inbred lines of crops.
This book discusses the need to carefully and prudently apply various regression techniques in order to obtain the full benefits. It also describes some of the techniques developed and used by the authors, presenting their innovative ideas regarding the formulation and estimation of regression decomposition models, hidden Markov chain, and the contribution of regressors in the set-theoretic approach, calorie poverty rate, and aggregate growth rate. Each of these techniques has applications that address a number of unanswered questions; for example, regression decomposition techniques reveal intra-household gender inequalities of consumption, intra-household allocation of resources and adult equivalent scales, while Hidden Markov chain models can forecast the results of future elections. Most of these procedures are presented using real-world data, and the techniques can be applied in other similar situations. Showing how difficult questions can be answered by developing simple models with simple interpretation of parameters, the book is a valuable resource for students and researchers in the field of model building.
By assuming it is possible to understand regression analysis without fully comprehending all its underlying proofs and theories, this introduction to the widely used statistical technique is accessible to readers who may have only a rudimentary knowledge of mathematics. Chapters discuss: descriptive statistics using vector notation, and the components of a simple regression model;
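The simple regression model mentioned above can be sketched in a few lines (our own generic illustration, not the book's notation): fitting y = a + b*x by ordinary least squares reduces to two closed-form expressions.

```python
import numpy as np

def simple_ols(x, y):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # slope: covariance of x and y divided by variance of x
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    # intercept: the fitted line passes through the point of means
    a = y.mean() - b * x.mean()
    return a, b

a, b = simple_ols([0, 1, 2, 3], [1, 3, 5, 7])  # data lie on y = 1 + 2x
```

With data that fall exactly on a line, the estimates recover the line's intercept and slope exactly.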
Data clustering is a highly interdisciplinary field, the goal of which is to divide a set of objects into homogeneous groups such that objects in the same group are similar and objects in different groups are quite distinct. Thousands of theoretical papers and a number of books on data clustering have been published over the past 50 years. However, few books exist to teach people how to implement data clustering algorithms. This book was written for anyone who wants to implement or improve their data clustering algorithms. Using object-oriented design and programming techniques, Data Clustering in C++ exploits the commonalities of all data clustering algorithms to create a flexible set of reusable classes that simplifies the implementation of any data clustering algorithm. Readers can follow the development of the base data clustering classes and several popular data clustering algorithms. Additional topics such as data pre-processing, data visualization, cluster visualization, and cluster interpretation are briefly covered. This book is divided into three parts:
- Data Clustering and C++ Preliminaries: a review of basic concepts of data clustering, the Unified Modeling Language, object-oriented programming in C++, and design patterns
- A C++ Data Clustering Framework: the development of data clustering base classes
- Data Clustering Algorithms: the implementation of several popular data clustering algorithms
A key to learning a clustering algorithm is to implement and experiment with it. Complete listings of classes, examples, unit test cases, and GNU configuration files are included in the appendices of this book as well as on the book's CD-ROM. The only requirements to compile the code are a modern C++ compiler and the Boost C++ libraries.
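As a minimal illustration of the kind of algorithm such a book implements, here is a short sketch of Lloyd's k-means in Python (our own generic example, not the book's C++ framework; the data and parameters are assumptions):

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Lloyd's algorithm: return (centroids, labels) for k clusters."""
    rng = np.random.default_rng(seed)
    # initialise centroids at k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assignment step: each point goes to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: move each centroid to the mean of its points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# two well-separated blobs of 20 points each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
cents, labels = kmeans(X, k=2)
```

The assignment and update steps above are exactly the commonalities an object-oriented framework would factor into reusable base classes.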