Most financial and investment decisions are based on considerations of possible future changes and require forecasts of the evolution of the financial world. Time series and processes are the natural tools for describing the dynamic behavior of financial data, leading to the required forecasts. This book presents a survey of the empirical properties of financial time series, their descriptions by means of mathematical processes, and some implications for important financial applications such as risk evaluation, option pricing, and portfolio construction. The statistical tools used to extract information from raw data are introduced. Extensive multiscale empirical statistics provide a solid benchmark of stylized facts (heteroskedasticity, long memory, fat tails, leverage) against which to assess various mathematical structures that can capture the observed regularities. The author introduces a broad range of processes and evaluates them systematically against the benchmark, summarizing the successes and limitations of these models from an empirical point of view. The outcome is that only multiscale ARCH processes with long memory, discrete multiplicative structures and non-normal innovations are able to correctly capture the empirical properties. In particular, only a discrete time series framework makes it possible to capture all the stylized facts in a process, whereas the stochastic calculus used in the continuum limit is too constraining. The present volume offers various applications and extensions for this class of processes, including high-frequency volatility estimators, market risk evaluation, covariance estimation and multivariate extensions of the processes. The book discusses many practical implications and is addressed to practitioners and quants in the financial industry, as well as to academics, including graduate (Master or PhD level) students. The prerequisites are basic statistics and some elementary financial mathematics.
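A minimal sketch, not taken from the book: the Python snippet below simulates a GARCH(1,1)-type return series (illustrative, uncalibrated parameters) and checks two of the stylized facts listed above, fat tails via excess kurtosis and long memory of volatility via the slowly decaying autocorrelation of absolute returns.

```python
# Illustrative sketch (not from the book): simulate a GARCH(1,1)-type return
# series and check two stylized facts -- fat tails (excess kurtosis) and long
# memory of volatility (slowly decaying autocorrelation of |returns|).
import numpy as np

rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.10, 0.88   # illustrative parameters, not calibrated
n = 50_000
r = np.empty(n)
sigma2 = omega / (1 - alpha - beta)      # start at the unconditional variance
for t in range(n):
    z = rng.standard_normal()
    r[t] = np.sqrt(sigma2) * z
    sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

def autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

excess_kurtosis = np.mean(r ** 4) / np.mean(r ** 2) ** 2 - 3
print(f"excess kurtosis of returns: {excess_kurtosis:.2f}")   # > 0 indicates fat tails
for lag in (1, 10, 50, 200):
    print(f"autocorr of |r| at lag {lag}: {autocorr(np.abs(r), lag):.3f}")
```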
On May 27-31, 1985, a series of symposia was held at The University of Western Ontario, London, Canada, to celebrate the 70th birthday of Professor V. M. Joshi. These symposia were chosen to reflect Professor Joshi's research interests as well as areas of expertise in statistical science among faculty in the Departments of Statistical and Actuarial Sciences, Economics, Epidemiology and Biostatistics, and Philosophy. From these symposia, the six volumes which comprise the "Joshi Festschrift" have arisen. The 117 articles in this work reflect the broad interests and high quality of research of those who attended our conference. We would like to thank all of the contributors for their superb cooperation in helping us to complete this project. Our deepest gratitude must go to the three people who have spent so much of their time in the past year typing these volumes: Jackie Bell, Lise Constant, and Sandy Tarnowski. This work has been printed from "camera ready" copy produced by our Vax 785 computer and QMS Lasergraphix printers, using the text processing software TEX. At the initiation of this project, we were neophytes in the use of this system. Thank you, Jackie, Lise, and Sandy, for having the persistence and dedication needed to complete this undertaking.
This book offers solutions to such topical problems as developing mathematical models and descriptions of typical distortions in applied forecasting problems, evaluating the robustness of traditional forecasting procedures under distortions, and more.
This text deals with parametric and nonparametric density estimation from the maximum (penalized) likelihood point of view, including estimation under constraints such as unimodality and log-concavity. It is intended for graduate students in statistics, applied mathematics, and operations research, as well as for researchers and practitioners in the field. The focal points are existence and uniqueness of the estimators, almost sure convergence rates for the L1 error, and data-driven smoothing parameter selection methods, including their practical performance. The reader will gain insight into some of the generally applicable technical tools from probability theory (discrete parameter martingales) and applied mathematics (boundary value problems and integration-by-parts tricks). Convexity and convex optimization, as applied to maximum penalized likelihood estimation, receive special attention. The authors are with the Statistics Program of the Department of Food and Resource Economics in the College of Agriculture at the University of Delaware.
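As a hedged illustration of the data-driven smoothing parameter selection mentioned above, the sketch below picks the bandwidth of a plain Gaussian kernel density estimate by maximizing a leave-one-out likelihood criterion. This is a much simpler stand-in for the maximum penalized likelihood estimators the book actually develops; the data and bandwidth grid are invented for the example.

```python
# Simpler stand-in (not the book's estimators): choose a Gaussian-kernel
# density-estimate bandwidth by maximizing the leave-one-out log-likelihood,
# one example of data-driven smoothing parameter selection.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=300)        # toy data, assumed standard normal

def loo_log_likelihood(x, h):
    """Leave-one-out log-likelihood of a Gaussian-kernel density estimate."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h                # pairwise scaled differences
    k = np.exp(-0.5 * d ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel matrix
    np.fill_diagonal(k, 0.0)                         # leave each point out of its own estimate
    f_loo = k.sum(axis=1) / ((n - 1) * h)
    return np.sum(np.log(f_loo))

bandwidths = np.linspace(0.05, 1.5, 60)
scores = [loo_log_likelihood(x, h) for h in bandwidths]
h_best = bandwidths[int(np.argmax(scores))]
print(f"leave-one-out bandwidth: {h_best:.3f}")
```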
This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBUGS and OpenBUGS. This feature continues in the new edition along with examples using R to broaden appeal and for completeness of coverage.
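A minimal Python illustration, not the book's BUGS or R code: a conjugate Beta-Binomial model whose posterior is available in closed form, showing the kind of posterior summary the book obtains by MCMC for richer models. The counts and the uniform prior are assumptions made for the example.

```python
# Hedged illustration (not the book's BUGS/R code): conjugate Beta-Binomial
# posterior computed directly, then summarized from exact posterior samples.
import numpy as np

successes, trials = 7, 20          # hypothetical data
a_prior, b_prior = 1.0, 1.0        # uniform Beta(1, 1) prior on the success probability

a_post = a_prior + successes                 # conjugate update
b_post = b_prior + trials - successes

rng = np.random.default_rng(2)
draws = rng.beta(a_post, b_post, size=100_000)   # samples from the exact posterior
print(f"posterior mean: {draws.mean():.3f}")
print(f"95% credible interval: ({np.quantile(draws, 0.025):.3f}, "
      f"{np.quantile(draws, 0.975):.3f})")
```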
This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, (control of) epidemics, etc. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs-per-stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (Linear system/Quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
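For orientation, here is a hedged sketch of the far simpler finite, discounted-cost case (the book itself treats Borel spaces and possibly unbounded costs): value iteration on a small, randomly generated Markov control model with invented transition probabilities and costs.

```python
# Hedged sketch: value iteration for a toy finite, discounted-cost Markov
# control model (much simpler than the Borel-space setting the book treats).
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(3)

# P[a, s, s'] = transition probability, c[s, a] = cost per stage (both invented).
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
c = rng.random((n_states, n_actions))

V = np.zeros(n_states)
for _ in range(500):
    # Q[s, a] = c(s, a) + gamma * E[V(s') | s, a]
    Q = c + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.min(axis=1)                     # minimize expected discounted cost
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmin(axis=1)
print("optimal cost-to-go:", np.round(V, 3))
print("optimal policy:", policy)
```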
This volume is devoted to sample surveys, the most widely used method in statistical practice. It covers many theoretical and practical aspects of social and biological investigations and is a valuable guide for those involved in designing sample surveys.
This book presents the latest findings on network theory and agent-based modeling of economic and financial phenomena. In this context, the economy is depicted as a complex system consisting of heterogeneous agents that interact through evolving networks; the aggregate behavior of the economy arises out of billions of small-scale interactions that take place via countless economic agents. The book focuses on analytical modeling, and on the econometric and statistical analysis of the properties emerging from microscopic interactions. In particular, it highlights the latest empirical and theoretical advances, helping readers understand economic and financial networks, as well as new work on modeling behavior using rich, agent-based frameworks. Innovatively, the book combines observational and theoretical insights in the form of networks and agent-based models, both of which have proved to be extremely valuable in understanding non-linear and evolving complex systems. Given its scope, the book will capture the interest of graduate students and researchers from various disciplines (e.g. economics, computer science, physics, and applied mathematics) whose work involves the domain of complexity theory.
Intended for both researchers and practitioners, this book will be a valuable resource for studying and applying recent robust statistical methods. It contains up-to-date research results in the theory of robust statistics, treats computational aspects and algorithms, and shows interesting new applications.
Statistical inferential methods are widely used in the study of various physical, biological, social, and other phenomena. Parametric estimation is one such method. Although there are many books which consider problems of statistical point estimation, this volume is the first to be devoted solely to the problem of unbiased estimation. It contains three chapters dealing, respectively, with the theory of point statistical estimation, techniques for constructing unbiased estimators, and applications of unbiased estimation theory. These chapters are followed by a comprehensive appendix which classifies and lists, in the form of tables, all known results relating to unbiased estimators of parameters for univariate distributions. About one thousand minimum variance unbiased estimators are listed. The volume also contains numerous examples and exercises. This volume will serve as a handbook on point unbiased estimation for researchers whose work involves statistics. It can also be recommended as a supplementary text for graduate students.
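A standard example of the kind of result tabulated in such a handbook (stated here for reference, not copied from the book's appendix): the sample variance with divisor n-1 is an unbiased estimator of the population variance.

```latex
% Unbiasedness of the sample variance S^2 for i.i.d. X_1, ..., X_n with mean
% \mu and variance \sigma^2 (a standard fact, not quoted from the book):
\[
  S^2 = \frac{1}{n-1}\sum_{i=1}^{n}\bigl(X_i - \bar{X}\bigr)^2,
  \qquad
  \mathbb{E}\bigl[S^2\bigr]
  = \frac{1}{n-1}\Bigl(\sum_{i=1}^{n}\mathbb{E}\bigl[X_i^2\bigr]
      - n\,\mathbb{E}\bigl[\bar{X}^2\bigr]\Bigr)
  = \frac{1}{n-1}\Bigl(n(\sigma^2+\mu^2) - n\bigl(\tfrac{\sigma^2}{n}+\mu^2\bigr)\Bigr)
  = \sigma^2 .
\]
```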
A thought-provoking look at statistical learning theory and its role in understanding human learning and inductive reasoning. A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, "An Elementary Introduction to Statistical Learning Theory" is a comprehensive and accessible primer on the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Explaining these areas at a level and in a way that is not often found in other books on the topic, the authors present the basic theory behind contemporary machine learning and uniquely utilize its foundations as a framework for philosophical thinking about inductive inference. Promoting the fundamental goal of statistical learning, knowing what is achievable and what is not, this book demonstrates the value of a systematic methodology when used along with the needed techniques for evaluating the performance of a learning system. First, an introduction to machine learning is presented that includes brief discussions of applications such as image recognition, speech recognition, medical diagnostics, and statistical arbitrage. To enhance accessibility, two chapters on relevant aspects of probability theory are provided. Subsequent chapters feature coverage of topics such as the pattern recognition problem, the optimal Bayes decision rule, the nearest neighbor rule, kernel rules, neural networks, support vector machines, and boosting. Appendices throughout the book explore the relationship between the discussed material and related topics from mathematics, philosophy, psychology, and statistics, drawing insightful connections between problems in these areas and statistical learning theory. All chapters conclude with a summary section, a set of practice questions, and a reference section that supplies historical notes and additional resources for further study. "An Elementary Introduction to Statistical Learning Theory" is an excellent book for courses on statistical learning theory, pattern recognition, and machine learning at the upper-undergraduate and graduate levels. It also serves as an introductory reference for researchers and practitioners in the fields of engineering, computer science, philosophy, and cognitive science who would like to further their knowledge of the topic.
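To make the nearest neighbor rule mentioned above concrete, here is a minimal Python sketch (illustrative, not code from the book) that classifies query points by the label of the closest training point under Euclidean distance, using two invented Gaussian classes.

```python
# Minimal 1-nearest-neighbor classifier on two hypothetical Gaussian classes.
import numpy as np

rng = np.random.default_rng(4)
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

def nearest_neighbor_predict(X_train, y_train, X_query):
    # Pairwise squared Euclidean distances, one row per query point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    return y_train[d2.argmin(axis=1)]        # label of the closest training point

X_test = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y_test = np.array([0] * 20 + [1] * 20)
accuracy = (nearest_neighbor_predict(X_train, y_train, X_test) == y_test).mean()
print(f"1-NN test accuracy: {accuracy:.2f}")
```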
This textbook is an approachable introduction to statistical analysis using matrix algebra. Prior knowledge of matrix algebra is not necessary. Advanced topics are easy to follow through analyses that were performed on an open-source spreadsheet using a few built-in functions. These topics include ordinary linear regression, as well as maximum likelihood estimation, matrix decompositions, nonparametric smoothers and penalized cubic splines. Each data set (1) contains a limited number of observations to encourage readers to do the calculations themselves, and (2) tells a coherent story based on statistical significance and confidence intervals. In this way, students will learn how the numbers were generated and how they can be used to make cogent arguments about everyday matters. This textbook is designed for use in upper level undergraduate courses or first year graduate courses. The first chapter introduces students to linear equations, then covers matrix algebra, focusing on three essential operations: sum of squares, the determinant, and the inverse. These operations are explained in everyday language, and their calculations are demonstrated using concrete examples. The remaining chapters build on these operations, progressing from simple linear regression to mediational models with bootstrapped standard errors.
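A small illustration of the matrix route to ordinary least squares described above. The book works in an open-source spreadsheet, so the numpy version below is only a stand-in, and the data are invented; it uses the three operations the text emphasizes: sums of squares, a determinant check, and a matrix inverse.

```python
# Illustrative OLS via matrix algebra: beta_hat = (X'X)^{-1} X'y.
import numpy as np

rng = np.random.default_rng(5)
n = 25
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)      # hypothetical data with known slope

X = np.column_stack([np.ones(n), x])         # design matrix with an intercept column
XtX = X.T @ X
assert np.linalg.det(XtX) != 0               # determinant check: design is invertible
beta_hat = np.linalg.inv(XtX) @ X.T @ y

residuals = y - X @ beta_hat
sigma2_hat = residuals @ residuals / (n - 2)               # residual sum of squares / df
se = np.sqrt(np.diag(sigma2_hat * np.linalg.inv(XtX)))     # standard errors

print("estimates (intercept, slope):", np.round(beta_hat, 3))
print("standard errors:            ", np.round(se, 3))
```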
Biostatistics is the branch of statistics that deals with data relating to living organisms. This manual is a comprehensive guide to biostatistics for medical students. Beginning with an overview of bioethics in clinical research, an introduction to statistics, and a discussion of research methodology, the following sections cover different statistical tests, data interpretation, probability, and other statistical concepts such as demographics and life tables. The final section explains report writing and applying for research grants, and a chapter on 'measurement and error analysis' focuses on research papers and clinical trials. Key points: a comprehensive guide to biostatistics for medical students; covers research methodology, statistical tests, data interpretation, probability and more; includes other statistical concepts such as demographics and life tables; explains report writing and grant applications in depth.
The proven system for B2B sales growth from the coauthor of Predictable Revenue, the breakout bestseller hailed as "Silicon Valley's sales bible" (Inc.com). If your organization's success is driven by B2B sales, this powerhouse of a book shows you how to generate new opportunities, build sales consistently, and focus on high revenue accounts with higher probability. It's the most reliable and predictable prospecting system available, developed by the coauthor of the bestselling Predictable Revenue and the author of the international bestseller How to Deliver a TED Talk. Following a proven step-by-step framework, you can turn any B2B organization into a high-performance business development engine. You'll learn how to target and track ideal prospects, optimize contact acquisition, continually improve performance, and achieve revenue goals quickly, efficiently, and predictably. As a bonus, you'll receive full online access to sample materials, worksheets, blueprints, and more. If you are a business professional tasked with new business development, revenue generation, diversifying marketing lead generation channels, selling into disruptive markets, and justifying marketing ROI, Predictable Prospecting will be an invaluable resource.
This book presents a large variety of extensions of the methods of inclusion and exclusion. Both methods for generating such inequalities and methods for proving them are discussed. The inequalities are utilized for finding asymptotic values and for limit theorems. Applications range from classical probability estimates to modern extreme value theory, and from combinatorial counting to random subset selection. Applications are given in prime number theory, in the growth of digits in different algorithms, and in statistics, such as estimating confidence levels in simultaneous interval estimation. The prerequisites include the basic concepts of probability theory and familiarity with combinatorial arguments.
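For reference, the simplest inclusion-exclusion bounds of the kind the book generalizes are the classical Bonferroni inequalities (a standard statement, not a quotation from the text):

```latex
% Degree-1 and degree-2 Bonferroni bounds on the probability of a union:
\[
  \sum_{i=1}^{n} P(A_i) - \sum_{1 \le i < j \le n} P(A_i \cap A_j)
  \;\le\;
  P\Bigl(\bigcup_{i=1}^{n} A_i\Bigr)
  \;\le\;
  \sum_{i=1}^{n} P(A_i).
\]
```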
"Mathematics of Uncertainty" provides the basic ideas and foundations of uncertainty, covering the fields of mathematics in which uncertainty, variability, imprecision and fuzziness of data are of importance. This introductory book describes the basic ideas of the mathematical fields of uncertainty from simple interpolation to wavelets, from error propagation to fuzzy sets and neural networks. The book presents the treatment of problems of interpolation and approximation, as well as observation fuzziness which can essentially influence the preciseness and reliability of statements on functional relationships. The notions of randomness and probability are examined as a model for the variability of observation and measurement results. Besides these basic ideas the book also presents methods of qualitative data analysis such as cluster analysis and classification, and of evaluation of functional relationships such as regression analysis and quantitative fuzzy data analysis.
This book chronicles Donald Burkholder's thirty-five-year study of martingales and its consequences. Here are some of the highlights.
When I wrote the book Quantitative Sociodynamics, it was an early attempt to make methods from statistical physics and complex systems theory fruitful for the modeling and understanding of social phenomena. Unfortunately, the first edition appeared at a quite prohibitive price. This was one reason to make these chapters available again in a new edition. The other reason is that, in the meantime, many of the methods discussed in this book are used more and more in a variety of different fields. Among the ideas worked out in this book are: a statistical theory of binary social interactions [1]; a mathematical formulation of social field theory, which is the basis of social force models [2]; a microscopic foundation of evolutionary game theory, based on what is known today as the 'proportional imitation rule', a stochastic treatment of interactions in evolutionary game theory, and a model for the self-organization of behavioral conventions in a coordination game [3]. It therefore appeared reasonable to make this book available again, but at a more affordable price. To keep its original character, the translation of this book, which ...
Footnotes: [1] D. Helbing, Interrelations between stochastic equations for systems with pair interactions. Physica A 181, 29-52 (1992); D. Helbing, Boltzmann-like and Boltzmann-Fokker-Planck equations as a foundation of behavioral models. Physica A 196, 546-573 (1993). [2] D. Helbing, Boltzmann-like and Boltzmann-Fokker-Planck equations as a foundation of behavioral models. Physica A 196, 546-573 (1993); D.
A Levy process is a continuous-time analogue of a random walk, and as such, is at the cradle of modern theories of stochastic processes. Martingales, Markov processes, and diffusions are extensions and generalizations of these processes. In the past, representatives of the Levy class were considered most useful for applications to either Brownian motion or the Poisson process. Nowadays the need for modeling jumps, bursts, extremes and other irregular behavior of phenomena in nature and society has led to a renaissance of the theory of general Levy processes. Researchers and practitioners in fields as diverse as physics, meteorology, statistics, insurance, and finance have rediscovered the simplicity of Levy processes and their enormous flexibility in modeling tails, dependence and path behavior. This volume, with an excellent introductory preface, describes the state of the art of this rapidly evolving subject, with special emphasis on the non-Brownian world. Leading experts present surveys of recent developments, or focus on some of the most promising applications. Despite its special character, every topic is aimed at the non-specialist, keen on learning about the new exciting face of a rather aged class of processes. An extensive bibliography at the end of each article makes this an invaluable comprehensive reference text. For the researcher and graduate student, every article contains open problems and points out directions for future research. The accessible nature of the work makes this an ideal introductory text for graduate seminars in applied probability, stochastic processes, physics, finance, and telecommunications, and a unique guide to the world of Levy processes.
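A hedged sketch, not drawn from the volume: a compound Poisson process is one of the simplest Levy processes, and the snippet below simulates one with heavy-tailed jump sizes to illustrate the pure-jump behavior emphasized above. The intensity and the jump-size law are arbitrary illustrative choices.

```python
# Illustrative simulation of a compound Poisson process (a simple Levy process).
import numpy as np

rng = np.random.default_rng(6)
T, lam = 10.0, 2.0                              # time horizon and jump intensity
n_jumps = rng.poisson(lam * T)                  # number of jumps on [0, T]
jump_times = np.sort(rng.uniform(0.0, T, n_jumps))
jump_sizes = rng.standard_t(df=3, size=n_jumps) # heavy-tailed jump sizes (arbitrary choice)

def value_at(t):
    """Value of the process at time t: the sum of all jumps that occurred by t."""
    return jump_sizes[jump_times <= t].sum()

for t in (2.5, 5.0, 10.0):
    print(f"X({t}) = {value_at(t):+.3f}  ({np.sum(jump_times <= t)} jumps so far)")
```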
Sergei Kuznetsov is one of the top experts on measure-valued branching processes (also known as "superprocesses") and their connection to nonlinear partial differential operators. His research interests range from stochastic processes and partial differential equations to mathematical statistics, time series analysis and statistical software; he has over 90 papers published in international research journals. His best-known contribution to probability theory is the "Kuznetsov measure." A conference honoring his 60th birthday was organized in Boulder, Colorado, in the summer of 2010, with the participation of Sergei Kuznetsov's mentor and major co-author, Eugene Dynkin. The conference focused on topics related to superprocesses, branching diffusions and nonlinear partial differential equations. In particular, connections to the so-called "Kuznetsov measure" were emphasized. Leading experts in the field as well as young researchers contributed to the conference. The meeting was organized by J. Englander and B. Rider (U. of Colorado).
These proceedings represent the current state of research on the topics 'boundary theory' and 'spectral and probability theory' of random walks on infinite graphs. They are the result of two workshops held in Styria (Graz and St. Kathrein am Offenegg, Austria) between June 29th and July 5th, 2009. Many of the participants joined both meetings. Even though the perspectives come from very different fields of mathematics, they all contribute important results to the same wonderful topic from structure theory, which, extending a quotation of Laurent Saloff-Coste, could be described as the 'exploration of groups by random processes'.
This book explains the foundation of approximate Bayesian computation (ABC), an approach to Bayesian inference that does not require the specification of a likelihood function. As a result, ABC can be used to estimate posterior distributions of parameters for simulation-based models. Simulation-based models are now very popular in cognitive science, as are Bayesian methods for performing parameter inference. As such, the recent developments of likelihood-free techniques are an important advancement for the field. Chapters discuss the philosophy of Bayesian inference as well as provide several algorithms for performing ABC. Chapters also apply some of the algorithms in a tutorial fashion, with one specific application to the Minerva 2 model. In addition, the book discusses several applications of ABC methodology to recent problems in cognitive science. Likelihood-Free Methods for Cognitive Science will be of interest to researchers and graduate students working in experimental, applied, and cognitive science.
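A minimal rejection-ABC sketch (illustrative; the book's applications, such as the Minerva 2 model, are far richer): infer the mean of a normal model without evaluating a likelihood, by keeping prior draws whose simulated data fall close to the observed data in a summary statistic. All numbers here are invented for the example.

```python
# Rejection ABC for the mean of a normal model with known standard deviation.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.normal(loc=1.5, scale=1.0, size=100)   # pretend these are the data
obs_mean = observed.mean()                            # summary statistic

n_draws, epsilon = 50_000, 0.05
theta = rng.uniform(-5, 5, n_draws)                   # prior draws for the unknown mean
sim = rng.normal(loc=theta[:, None], scale=1.0, size=(n_draws, 100))  # one dataset per draw
sim_means = sim.mean(axis=1)
accepted = theta[np.abs(sim_means - obs_mean) < epsilon]   # keep draws whose summary is close

print(f"accepted {accepted.size} draws")
print(f"approximate posterior mean: {accepted.mean():.3f}")
print(f"approximate posterior sd:   {accepted.std():.3f}")
```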
This book deals with methods to evaluate scientific productivity. It discusses statistical methods, deterministic and stochastic models, and numerous indexes that will help the reader understand nonlinear science dynamics and develop or construct systems for the appropriate evaluation of research productivity and the management of research groups and organizations. The dynamics of science structures and systems is complex, and the evaluation of research productivity requires a combination of qualitative and quantitative methods and measures. The book has three parts. The first part is devoted to mathematical models describing the importance of science for economic growth and to systems for the evaluation of research organizations of different sizes. The second part contains descriptions and discussions of numerous indexes for evaluating the productivity of researchers and groups of researchers of different sizes (up to comparisons of the research productivities of national research communities). Part three discusses non-Gaussian laws connected to scientific productivity and presents various deterministic and stochastic models of science dynamics and research productivity. The book shows that many famous fat-tail distributions, as well as many deterministic and stochastic models and processes well known from physics, the theory of extreme events, and population dynamics, also occur in the description of the dynamics of scientific systems and of the characteristics of research productivity. This is not a surprise, as scientific systems are nonlinear, open and dissipative.