This volume, which highlights recent advances in statistical methodology and applications, is divided into two main parts. The first part presents theoretical results on estimation techniques in functional statistics, while the second examines three key areas of application: estimation problems in queuing theory, an application in signal processing, and the copula approach to epidemiologic modelling. The book's peer-reviewed contributions are based on papers originally presented at the Marrakesh International Conference on Probability and Statistics held in December 2013.
The celebrated Parisi solution of the Sherrington-Kirkpatrick model for spin glasses is one of the most important achievements in the field of disordered systems. Over the last three decades, through the efforts of theoretical physicists and mathematicians, the essential aspects of the Parisi solution were clarified and proved mathematically. The core ideas of the theory that emerged are the subject of this book, including the recent solution of the Parisi ultrametricity conjecture and a conceptually simple proof of the Parisi formula for the free energy. The treatment is self-contained and should be accessible to graduate students with a background in probability theory, with no prior knowledge of spin glasses. The methods involved in the analysis of the Sherrington-Kirkpatrick model also serve as a good illustration of such classical topics in probability as the Gaussian interpolation and concentration of measure, Poisson processes, and representation results for exchangeable arrays.
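The Sherrington-Kirkpatrick Hamiltonian itself is simple to write down, which makes the depth of the Parisi theory all the more striking. As a minimal illustration (not drawn from the book), the following Python sketch samples the Gaussian couplings and finds the extremal energy per spin of a tiny system by brute force; the Parisi formula describes the large-N limit of such quantities.

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)

# Sherrington-Kirkpatrick model, illustrative only: N spins with i.i.d.
# Gaussian couplings, H_N(s) = (1/sqrt(N)) * sum_{i<j} g_ij s_i s_j.
N = 12
g = rng.normal(size=(N, N))

def hamiltonian(s):
    return sum(g[i, j] * s[i] * s[j]
               for i in range(N) for j in range(i + 1, N)) / np.sqrt(N)

# Brute-force maximum over all 2^N spin configurations (tiny N only);
# the Parisi formula governs the N -> infinity limit of such quantities.
best = max(itertools.product([-1, 1], repeat=N), key=hamiltonian)
print(hamiltonian(best) / N)  # extremal energy per spin
```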
This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. The book is written for a statistically-informed audience, and can also easily serve as a textbook in a graduate course in departments such as statistics, psychology, or biology. In particular, the audience for the book is teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as medical research, epidemiology, public health, and biology.
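To make the idea concrete, here is a minimal two-sample permutation test in Python (an illustrative sketch with made-up data, not code from the monograph): the probability value comes entirely from reshuffling the observed data, with no appeal to a theoretical distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(x, y, n_perm=10_000):
    """Two-sample permutation test on the difference of means.

    Returns a two-sided probability value obtained by shuffling group
    labels; no normality assumption is needed, since the reference
    distribution depends only on the data at hand.
    """
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        if stat >= observed:
            count += 1
    return count / n_perm

# Hypothetical data: two small treatment groups.
x = np.array([4.1, 5.0, 6.2, 5.5, 4.8])
y = np.array([5.9, 6.8, 7.1, 6.3, 7.4])
print(permutation_test(x, y))
```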
This book provides an introduction to operational research methods and their application in the agrifood and environmental sectors. It explains the need for multicriteria decision analysis and teaches users how to use recent advances in multicriteria and clustering classification techniques in practice. Further, it presents some of the most common methodologies for statistical analysis and mathematical modeling, and discusses in detail ten examples that explain and show “hands-on” how operational research can be used in key decision-making processes at enterprises in the agricultural food and environmental industries. As such, the book offers a valuable resource especially well suited as a textbook for postgraduate courses.
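As a flavor of what multicriteria decision analysis involves, the sketch below ranks hypothetical suppliers for an agrifood enterprise with a simple weighted-sum score; the criteria, weights, and data are invented for illustration, and the book's own methodologies are more sophisticated.

```python
import numpy as np

# Hypothetical decision matrix: rows are suppliers, columns are criteria
# (cost, quality, environmental impact). Cost and impact are "less is
# better", so they are re-oriented before scoring.
scores = np.array([
    [120.0, 8.5, 3.2],   # supplier A
    [100.0, 7.0, 4.0],   # supplier B
    [140.0, 9.0, 2.5],   # supplier C
])
directions = np.array([-1.0, 1.0, -1.0])  # -1: minimize, +1: maximize
weights = np.array([0.5, 0.3, 0.2])       # assumed criterion importances

# Normalize each criterion to [0, 1], orient it, then weight and sum.
norm = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))
oriented = np.where(directions > 0, norm, 1.0 - norm)
ranking = (oriented * weights).sum(axis=1)
print(ranking)  # higher score = preferred supplier
```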
In this book, an integrated introduction to statistical inference is provided from a frequentist likelihood-based viewpoint. Classical results are presented together with recent developments, largely built upon ideas due to R.A. Fisher. The term "neo-Fisherian" highlights this. After a unified review of background material (statistical models, likelihood, data and model reduction, first-order asymptotics) and inference in the presence of nuisance parameters (including pseudo-likelihoods), a self-contained introduction is given to exponential families, exponential dispersion models, generalized linear models, and group families. Finally, basic results of higher-order asymptotics are introduced (index notation, asymptotic expansions for statistics and distributions, and major applications to likelihood inference). The emphasis is more on general concepts and methods than on regularity conditions. Many examples are given for specific statistical models. Each chapter is supplemented with problems and bibliographic notes. This volume can serve as a textbook in intermediate-level undergraduate and postgraduate courses in statistical inference.
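As a small illustration of inference in the presence of nuisance parameters (one of the background topics listed above), the following Python sketch computes a profile log-likelihood for a normal mean, maximizing out the variance analytically; it is a standard textbook construction, not code from this volume.

```python
import numpy as np

def profile_loglik_mean(x, mu):
    """Profile log-likelihood for the mean of a normal sample: the
    nuisance variance is replaced by its MLE at the fixed mean mu,
    sigma2_hat(mu) = mean((x - mu)^2)."""
    n = len(x)
    sigma2_hat = np.mean((x - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1.0)

# The profile likelihood is maximized at the sample mean.
x = np.array([1.2, 0.7, 2.1, 1.6, 0.9, 1.4])
grid = np.linspace(0.0, 3.0, 301)
print(grid[np.argmax([profile_loglik_mean(x, m) for m in grid])])  # ~x.mean()
```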
The main body of this book is devoted to statistical physics, whereas much less emphasis is given to thermodynamics. In particular, the idea is to present the most important outcomes of thermodynamics - most notably, the laws of thermodynamics - as conclusions from derivations in statistical physics. Special emphasis is placed on subjects that are vital to engineering education. These include, first of all, quantum statistics, like the Fermi-Dirac distribution, as well as diffusion processes, both of which are fundamental to a sound understanding of semiconductor devices. Another important issue for electrical engineering students is an understanding of the mechanisms of noise generation and stochastic dynamics in physical systems, most notably in electric circuitry. Accordingly, the fluctuation-dissipation theorem of statistical mechanics, which is the theoretical basis for understanding thermal noise processes in systems, is presented from a signals-and-systems point of view, in a way that is readily accessible to engineering students and in relation to other courses in the electrical engineering curriculum, such as courses on random processes.
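For instance, the Fermi-Dirac distribution mentioned above has a closed form that is easy to explore numerically; the following sketch (with illustrative numbers, not values from the book) evaluates the mean occupancy of a single-particle state.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def fermi_dirac(energy_ev, mu_ev, temp_k):
    """Mean occupancy of a single-particle state at the given energy:
    f(E) = 1 / (exp((E - mu) / (k_B T)) + 1)."""
    return 1.0 / (np.exp((energy_ev - mu_ev) / (K_B * temp_k)) + 1.0)

# Occupancy near the Fermi level at room temperature (illustrative values).
for e in (0.90, 1.00, 1.10):
    print(e, fermi_dirac(e, mu_ev=1.00, temp_k=300))
```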
New Perspectives in Partial Least Squares and Related Methods shares original, peer-reviewed research from presentations during the 2012 partial least squares methods meeting (PLS 2012). This was the 7th meeting in the series of PLS conferences and the first to take place in the USA. PLS is an abbreviation for Partial Least Squares and is also sometimes expanded as projection to latent structures. This is an approach for modeling relations between data matrices of different types of variables measured on the same set of objects. The twenty-two papers in this volume, which include three invited contributions from our keynote speakers, provide a comprehensive overview of the current state of the most advanced research related to PLS and related methods. Prominent scientists from around the world took part in PLS 2012 and their contributions covered the multiple dimensions of the partial least squares-based methods. These exciting theoretical developments ranged from partial least squares regression and correlation, component based path modeling to regularized regression and subspace visualization. In following the tradition of the six previous PLS meetings, these contributions also included a large variety of PLS approaches such as PLS metamodels, variable selection, sparse PLS regression, distance based PLS, significance vs. reliability, and non-linear PLS. Finally, these contributions applied PLS methods to data originating from the traditional econometric/economic data to genomics data, brain images, information systems, epidemiology, and chemical spectroscopy. Such a broad and comprehensive volume will also encourage new uses of PLS models in work by researchers and students in many fields.
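For readers unfamiliar with the basic technique behind these papers, the sketch below fits a two-component PLS regression with scikit-learn on synthetic two-block data; it illustrates the generic method only and is unrelated to any specific contribution in the volume.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Two blocks of variables measured on the same 100 objects:
# X (predictors) and Y (responses) share two latent components.
latent = rng.normal(size=(100, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(100, 10))
Y = latent @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(100, 3))

# Project both blocks onto shared latent structures and regress.
pls = PLSRegression(n_components=2)
pls.fit(X, Y)
print(pls.score(X, Y))  # R^2 of the fitted relation
```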
Although there are many books on mathematical finance, few deal with the statistical aspects of modern data analysis as applied to financial problems. This textbook fills this gap by addressing some of the most challenging issues facing financial engineers. It shows how sophisticated mathematics and modern statistical techniques can be used in the solutions of concrete financial problems. Concerns of risk management are addressed by the study of extreme values, the fitting of distributions with heavy tails, the computation of values at risk (VaR), and other measures of risk. Principal component analysis (PCA), smoothing, and regression techniques are applied to the construction of yield and forward curves. Time series analysis is applied to the study of temperature options and nonparametric estimation. Nonlinear filtering is applied to Monte Carlo simulations, option pricing and earnings prediction. This textbook is intended for undergraduate students majoring in financial engineering, or graduate students in a Master in Finance or MBA program. It is sprinkled with practical examples using market data, and each chapter ends with exercises. Practical examples are solved in the R computing environment. They illustrate problems occurring in the commodity, energy and weather markets, as well as the fixed income, equity and credit markets. The examples, experiments and problem sets are based on the library Rsafd developed for the purpose of the text. The book should help quantitative analysts learn and implement advanced statistical concepts. Also, it will be valuable for researchers wishing to gain experience with financial data, implement and test mathematical theories, and address practical issues that are often ignored or underestimated in academic curricula. This is the new, fully-revised edition of the book "Statistical Analysis of Financial Data in S-Plus." Rene Carmona is the Paul M. Wythes '55 Professor of Engineering and Finance at Princeton University in the department of Operations Research and Financial Engineering, and Director of Graduate Studies of the Bendheim Center for Finance. His publications include over one hundred articles and eight books in probability and statistics. He was elected Fellow of the Institute of Mathematical Statistics in 1984, and of the Society for Industrial and Applied Mathematics in 2010. He is on the editorial board of several peer-reviewed journals and book series. Professor Carmona has developed computer programs for teaching statistics and research in signal analysis and financial engineering. He has worked for many years on energy and the commodity markets, and more recently on environmental economics, and he is recognized as a leading researcher and expert in these areas.
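The book's own examples are solved in R with the Rsafd library; as a rough analogue of one of its recurring tasks, here is a Python sketch computing a historical value at risk from heavy-tailed simulated returns (data and parameters are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily returns with heavy tails (Student t, df=3),
# standing in for the market data used throughout the book.
returns = 0.01 * rng.standard_t(df=3, size=2500)

def historical_var(returns, level=0.99):
    """Value at risk as the loss quantile of the empirical distribution."""
    return -np.quantile(returns, 1.0 - level)

print(f"99% one-day VaR: {historical_var(returns):.4f}")
```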
The book considers foundational thinking in quantum theory, focusing on the role of fundamental principles and principle thinking there, including thinking that leads to the invention of new principles, which, the book contends, is one of the ultimate achievements of theoretical thinking in physics and beyond. The focus on principles, prominent during the rise and in the immediate aftermath of quantum theory, has been uncommon in more recent discussions and debates concerning it. The book argues, however, that exploring the fundamental principles and principle thinking is exceptionally helpful in addressing the key issues at stake in quantum foundations and the seemingly interminable debates concerning them. Principle thinking led to major breakthroughs throughout the history of quantum theory, beginning with the old quantum theory and quantum mechanics, the first definitive quantum theory, which it remains within its proper (nonrelativistic) scope. It has, the book also argues, been equally important in quantum field theory, which has been the frontier of quantum theory for quite a while now, and, more recently, in quantum information theory, where principle thinking has been given new prominence. The approach allows the book to develop a new understanding of both the history and philosophy of quantum theory, from Planck's quantum to the Higgs boson and beyond, and of the thinking of the key founding figures, such as Einstein, Bohr, Heisenberg, Schroedinger, and Dirac, as well as of some more recent theorists. The book also extensively considers the nature of quantum probability, and contains a new interpretation of quantum mechanics, "the statistical Copenhagen interpretation." Overall, the book's argument is guided by what Heisenberg called "the spirit of Copenhagen," which is defined by three great divorces from the preceding foundational thinking in physics (reality from realism, probability from causality, and locality from relativity) and which defined the fundamental principles of quantum theory accordingly.
This text presents the two complementary aspects of thermal physics as an integrated theory of the properties of matter. Conceptual understanding is promoted by thorough development of basic concepts. In contrast to many texts, statistical mechanics, including discussion of the required probability theory, is presented first. This provides a statistical foundation for the concept of entropy, which is central to thermal physics. A unique feature of the book is the development of entropy based on Boltzmann's 1877 definition; this avoids contradictions or ad hoc corrections found in other texts. Detailed fundamentals provide a natural grounding for advanced topics, such as black-body radiation and quantum gases. An extensive set of problems (solutions are available for lecturers through the OUP website), many including explicit computations, advance the core content by probing essential concepts. The text is designed for a two-semester undergraduate course but can be adapted for one-semester courses emphasizing either aspect of thermal physics. It is also suitable for graduate study.
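Boltzmann's 1877 definition, on which the book builds its treatment of entropy, is S = k_B ln W for a macrostate of multiplicity W. A minimal Python sketch (illustrative, not from the text) evaluates it for a toy two-state spin system, working with ln W directly since W itself overflows for macroscopic systems.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(log_multiplicity):
    """Boltzmann's 1877 definition S = k_B * ln(W), taking ln(W) as input."""
    return K_B * log_multiplicity

# Multiplicity of N two-state spins with n up: W = C(N, n),
# computed in log form via the log-gamma function.
N, n = 100, 50
ln_w = math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)
print(boltzmann_entropy(ln_w))  # entropy in J/K
```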
The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, ranging from robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial statistics. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice, including linear, generalized linear, mixed, and nonlinear models, in both univariate and multivariate settings. With the development of R packages in these areas, computation of these procedures is easily shared with readers and implemented. This book developed from the International Conference on Robust Rank-Based and Nonparametric Methods, held at Western Michigan University in April 2015.
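As a taste of the Wilcoxon-type procedures these methods generalize, the sketch below runs a rank-sum test with SciPy on heavy-tailed synthetic data, a setting where rank-based methods remain robust while means-based tests can fail; it is a generic illustration, not code from the conference volume.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(4)

# Hypothetical two-sample location problem with heavy-tailed (Cauchy)
# errors, where rank-based methods retain efficiency over the t-test.
x = rng.standard_cauchy(30)
y = rng.standard_cauchy(30) + 1.0  # location shift of 1

stat, p = ranksums(x, y)
print(f"Wilcoxon rank-sum statistic: {stat:.3f}, p-value: {p:.4f}")
```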
Stochastic analysis has a variety of applications to biological systems as well as physical and engineering problems, and its applications to finance and insurance have bloomed exponentially in recent times. The goal of this book is to present a broad overview of the range of applications of stochastic analysis and some of its recent theoretical developments. This includes numerical simulation, error analysis, parameter estimation, as well as control and robustness properties for stochastic equations. The book also covers the areas of backward stochastic differential equations via the (non-linear) G-Brownian motion and the case of jump processes. Concerning the applications to finance, many of the articles deal with the valuation and hedging of credit risk in various forms, and include recent results on markets with transaction costs.
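Numerical simulation of stochastic equations, one of the topics listed above, typically starts from the Euler-Maruyama scheme; the following Python sketch (a standard construction, not taken from the book) simulates a geometric Brownian motion path.

```python
import numpy as np

rng = np.random.default_rng(8)

def euler_maruyama(mu, sigma, x0, t_end, n_steps):
    """Simulate dX_t = mu(X_t) dt + sigma(X_t) dW_t on [0, t_end]
    with the Euler-Maruyama discretization."""
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dw
    return x

# Geometric Brownian motion (a standard test case with invented parameters).
path = euler_maruyama(lambda x: 0.05 * x, lambda x: 0.2 * x, 1.0, 1.0, 250)
print(path[-1])
```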
Box and Jenkins (1970) popularized the idea of obtaining a stationary time series by differencing a given, possibly nonstationary, time series. Numerous time series in economics are found to have this property. Subsequently, Granger and Joyeux (1980) and Hosking (1981) found examples of time series whose fractional difference becomes a short memory process, in particular a white noise, while the initial series has unbounded spectral density at the origin, i.e. exhibits long memory. Further examples of data exhibiting long memory were found in hydrology and in network traffic data, while in finance the phenomenon of strong dependence was established by the dramatic empirical success of long memory processes in modeling the volatility of asset prices and power transforms of stock market returns. At present there is a need for a text from which an interested reader can methodically learn about some basic asymptotic theory and techniques found useful in the analysis of statistical inference procedures for long memory processes. This text makes an attempt in this direction. The authors provide, in a concise style, a graduate-level text summarizing theoretical developments both for short and long memory processes and their applications to statistics. The book also contains some real data applications and mentions some unsolved inference problems for interested researchers in the field.
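The fractional difference (1 - B)^d referred to above has a simple binomial expansion, so it can be applied directly in code; the following Python sketch (illustrative, not from the text) computes the truncated weights and checks the integer case d = 1 against ordinary differencing.

```python
import numpy as np

def frac_diff(x, d, n_weights=100):
    """Apply the fractional difference (1 - B)^d to a series x, using
    the binomial weights w_k = (-1)^k C(d, k) truncated at n_weights."""
    w = np.zeros(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.convolve(x, w)[: len(x)]

# Sanity check on the integer case: d = 1 recovers ordinary differencing,
# so fractionally differencing a random walk returns its increments.
rng = np.random.default_rng(3)
noise = rng.normal(size=1000)
walk = np.cumsum(noise)
print(np.allclose(frac_diff(walk, 1.0), noise))  # True
```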
This is the first book to compare eight LDFs by different types of datasets, such as Fisher's iris data, medical data with collinearities, Swiss banknote data that is linearly separable data (LSD), student pass/fail determination using student attributes, 18 pass/fail determinations using exam scores, Japanese automobile data, and six microarray datasets (the datasets) that are LSD. We developed the 100-fold cross-validation for the small sample method (Method 1) instead of the LOO method. We proposed a simple model selection procedure to choose the best model having minimum M2; Revised IP-OLDF, based on the MNM criterion, was found to be better than the other M2s on the above datasets. We compared two statistical LDFs and six MP-based LDFs: Fisher's LDF, logistic regression, three SVMs, Revised IP-OLDF, and two other OLDFs. Only a hard-margin SVM (H-SVM) and Revised IP-OLDF could discriminate LSD theoretically (Problem 2). We solved the defect of the generalized inverse matrices (Problem 3). For more than 10 years, many researchers have struggled to analyze the microarray dataset that is LSD (Problem 5). If we call the linearly separable model a "Matroska," the dataset consists of numerous smaller Matroskas nested within it. We developed the Matroska feature selection method (Method 2), which finds the surprising structure of the dataset as the disjoint union of several small Matroskas. Our theory and methods reveal new facts of gene analysis.
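Of the eight LDFs compared, the hard-margin SVM is the easiest to reproduce with standard tools; the sketch below approximates H-SVM on synthetic linearly separable data using scikit-learn with a very large C (an illustration of the generic classifier, not the authors' code or data).

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(9)

# Hypothetical linearly separable data (LSD), standing in for, e.g.,
# the Swiss banknote data discussed in the book.
X0 = rng.normal(loc=[-2, -2], size=(50, 2))
X1 = rng.normal(loc=[2, 2], size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# A hard-margin linear SVM, approximated here by a very large C.
svm = SVC(kernel="linear", C=1e6).fit(X, y)
print((svm.predict(X) != y).sum())  # misclassifications; 0 on separable data
```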
Machine learning is concerned with the analysis of large data and multiple variables. However, it is also often more sensitive than traditional statistical methods when analyzing small data. The first volume reviewed subjects like optimal scaling, neural networks, factor analysis, partial least squares, discriminant analysis, canonical analysis, and fuzzy modeling. This second volume includes various clustering models, support vector machines, Bayesian networks, discrete wavelet analysis, genetic programming, association rule learning, anomaly detection, correspondence analysis, and other subjects. Both the theoretical bases and the step-by-step analyses are described for the benefit of non-mathematical readers. Each chapter can be studied without the need to consult other chapters. Traditional statistical tests are sometimes applied prior to machine learning methods, and are also sometimes used as contrast tests. To those wishing to obtain more knowledge of them, we recommend additionally studying (1) Statistics Applied to Clinical Studies, 5th Edition, 2012, (2) SPSS for Starters, Parts One and Two, 2012, and (3) Statistical Analysis of Clinical Data on a Pocket Calculator, Parts One and Two, 2012, written by the same authors and published by Springer, New York.
Inference in finite population sampling is a new development that is essential for the field of sampling. In addition to covering the majority of well-known sampling plans and procedures, this study covers the important topics of the superpopulation approach, randomized response, non-response, and resampling techniques. The authors also provide extensive sets of problems ranging in difficulty, making this book beneficial to students.
Epidemiologic Studies in Cancer Prevention and Screening is the first comprehensive overview of the evidence base for both cancer prevention and screening. This book is directed to the many professionals in government, academia, public health and health care who need up-to-date information on the potential for reducing the impact of cancer, including physicians, nurses, epidemiologists, and research scientists. The main aim of the book is to provide a realistic appraisal of the evidence for both cancer prevention and cancer screening. In addition, the book provides an accounting of the extent to which programs based on available knowledge have impacted populations. It does this through: (1) presentation of a rigorous and realistic evaluation of the evidence for population-based interventions in prevention of and screening for cancer, with particular relevance to those believed to be applicable now, or on the cusp of application; (2) evaluation of the relative contributions of prevention and screening; and (3) discussion of how, within the health systems with which the authors are familiar, prevention and screening for cancer can be enhanced. An overview of the evidence base for cancer prevention and screening, as demonstrated in Epidemiologic Studies in Cancer Prevention and Screening, is critically important given current debates within the scientific community. Of the five components of cancer control - prevention, early detection (including screening), treatment, rehabilitation and palliative care - prevention is regarded as the most important. Yet the knowledge available to prevent many cancers is incomplete, and even if we know the main causal factors for a cancer, we often lack the understanding to put this knowledge into effect. Further, with the long natural history of most cancers, it could take many years to make an appreciable impact upon the incidence of the cancer. Because of these facts, many have come to believe that screening has the most potential for reduction of the burden of cancer. Yet, through trying to apply the knowledge gained on screening for cancer, the scientific community has recognized that screening can have major disadvantages and achieve little at substantial cost. This reduces the resources that are potentially available both for prevention and for treatment.
How can large bonuses sometimes make CEOs less productive? Why is revenge so important to us? How can confusing directions actually help us? Why is there a difference between what we think will make us happy and what really makes us happy? In his groundbreaking book, Predictably Irrational, social scientist Dan Ariely revealed the multiple biases that lead us to make unwise decisions. Now, in The Upside of Irrationality, he exposes the surprising negative and positive effects irrationality can have on our lives. Focusing on our behaviors at work and in relationships, he offers new insights and eye-opening truths about what really motivates us on the job, how one unwise action can become a long-term bad habit, how we learn to love the ones we're with, and more. The Upside of Irrationality will change the way we see ourselves at work and at home, and cast our irrational behaviors in a more nuanced light.
What are the current trends in housing? Is my planned project commercially viable? What should be my marketing and advertisement strategies? These are just some of the questions real estate agents, landlords and developers ask researchers to answer. But to find the answers, researchers are faced with a wide variety of methods that measure housing preferences and choices. To select and evaluate a valid research method, one needs a well-structured overview of the methods that are used in housing preference and housing choice research. This comprehensive introduction to the field offers just such an overview. It discusses and compares numerous methods, detailing the potential limitations of each one, and it reaches beyond methodology, illustrating how thoughtful consideration of methods and techniques in research can help researchers and other professionals to deliver products and services that are more in line with residents' needs.
This book treats the notion of morphisms in spatial analysis, paralleling these concepts in spatial statistics (Part I) and spatial econometrics (Part II). The principal concept is morphism (e.g., isomorphisms, homomorphisms, and allomorphisms), which is defined as a structure preserving the functional linkage between mathematical properties or operations in spatial statistics and spatial econometrics, among other disciplines. The purpose of this book is to present selected conceptions in both domains that are structurally the same, even though their labelling and the notation for their elements may differ. As the approaches presented here are applied to empirical materials in geography and economics, the book will also be of interest to scholars of regional science, quantitative geography and the geospatial sciences. It is a follow-up to the book "Non-standard Spatial Statistics and Spatial Econometrics" by the same authors, which was published by Springer in 2011.
This book presents powerful techniques for solving global optimization problems on manifolds by means of evolutionary algorithms, and shows in practice how these techniques can be applied to solve real-world problems. It describes recent findings and well-known key facts in general and differential topology, revisiting them all in the context of application to current optimization problems. Special emphasis is put on game theory problems. Here, these problems are reformulated as constrained global optimization tasks and solved with the help of Fuzzy ASA. In addition, more abstract examples, including minimizations of well-known functions, are also included. Although the Fuzzy ASA approach has been chosen as the main optimizing paradigm, the book suggests that other metaheuristic methods could be used as well. Some of them are introduced, together with their advantages and disadvantages. Readers should possess some knowledge of linear algebra, and of basic concepts of numerical analysis and probability theory. Many necessary definitions and fundamental results are provided, with the formal mathematical requirements limited to a minimum, while the focus is kept firmly on continuous problems. The book offers a valuable resource for students, researchers and practitioners. It is suitable for university courses on optimization and for self-study.
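The book's main optimizer is Fuzzy ASA, which is not sketched here; instead, the following toy Python example conveys the general idea of evolutionary global optimization on a manifold by projecting mutated candidates back onto the unit sphere (the objective and all parameters are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(6)

def objective(p):
    # An illustrative multimodal function on the sphere (not from the book).
    return np.sin(3 * p[0]) + np.cos(5 * p[1]) + p[2] ** 2

def project(pop):
    """Retract candidate points back onto the unit sphere in R^3."""
    return pop / np.linalg.norm(pop, axis=1, keepdims=True)

# A bare-bones (mu, lambda) evolution strategy constrained to the manifold.
pop = project(rng.normal(size=(20, 3)))
for _ in range(200):
    parents = pop[rng.integers(0, len(pop), size=100)]
    offspring = project(parents + 0.1 * rng.normal(size=(100, 3)))
    scores = np.array([objective(p) for p in offspring])
    pop = offspring[np.argsort(scores)[:20]]  # keep the 20 best (minimize)

print(pop[0], objective(pop[0]))
```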
This book offers hands-on statistical tools for business professionals by focusing on the practical application of a single-equation regression. The authors discuss commonly applied econometric procedures, which are useful in building regression models for economic forecasting and supporting business decisions. A significant part of the book is devoted to traps and pitfalls in implementing regression analysis in real-world scenarios. The book consists of nine chapters, the final two of which are fully devoted to case studies. Today's business environment is characterised by a huge amount of economic data. Making successful business decisions under such data-abundant conditions requires objective analytical tools, which can help to identify and quantify multiple relationships between dozens of economic variables. Single-equation regression analysis, which is discussed in this book, is one such tool. The book offers a valuable guide and is relevant in various areas of economic and business analysis, including marketing, financial and operational management.
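As a minimal example of the single-equation regression the book is built around, the sketch below fits a sales equation by ordinary least squares on invented monthly data; the variable names and coefficients are hypothetical, not taken from the book's case studies.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical monthly data: forecast sales from price and ad spend
# with a single-equation regression fitted by ordinary least squares.
n = 120
price = rng.uniform(8, 12, n)
ads = rng.uniform(0, 5, n)
sales = 50 - 2.0 * price + 3.0 * ads + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), price, ads])  # intercept + regressors
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(dict(zip(["intercept", "price", "ads"], beta.round(2))))
```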
Biological and other natural processes have always been a source of inspiration for computer science and information technology. Many emerging problem-solving techniques integrate advanced evolution and cooperation strategies, encompassing a range of spatio-temporal scales for visionary conceptualization of evolutionary computation. The previous editions of NICSO were held in Granada, Spain (2006), Acireale, Italy (2007), Tenerife, Spain (2008), and again in Granada in 2010. NICSO has evolved into one of the most interesting and distinctive workshops in nature inspired computing. NICSO 2011 offered an inspiring environment for debating state-of-the-art ideas and techniques in nature inspired cooperative strategies, and a comprehensive view of recent applications of these ideas and techniques. The topics covered by this volume include Swarm Intelligence (such as Ant and Bee Colony Optimization), Genetic Algorithms, Multiagent Systems, Coevolution and Cooperation strategies, Adversarial Models, Synergic Building Blocks, Complex Networks, Social Impact Models, Evolutionary Design, Self Organized Criticality, Evolving Systems, Cellular Automata, Hybrid Algorithms, and Membrane Computing (P-Systems).
You may like...
Statistics for Management and Economics - Gerald Keller, Nicoleta Gaciu (Paperback)
Integrated Population Biology and… - Arni S.R. Srinivasa Rao, C.R. Rao (Hardcover) - R6,219 (Discovery Miles 62 190)
Fundamentals of Social Research Methods - Claire Bless, Craig Higson-Smith, … (Paperback)
Numbers, Hypotheses & Conclusions - A… - Colin Tredoux, Kevin Durrheim (Paperback)