In applications, and especially in mathematical finance, random
time-dependent events are often modeled as stochastic processes.
Assumptions are made about the structure of such processes, and
serious researchers will want to justify those assumptions through
the use of data. As statisticians are wont to say, "In God we
trust; all others must bring data."
Most applications now generate large datasets: social networking and social influence programs, smart-city applications, smart-house environments, Cloud applications, public web sites, scientific experiments and simulations, data warehouses, monitoring platforms, and e-government services. Data grows rapidly, since applications produce continuously increasing volumes of both unstructured and structured data. Large-scale interconnected systems aim to aggregate and efficiently exploit the power of widely distributed resources. In this context, major solutions for scalability, mobility, reliability, fault tolerance, and security are required to achieve high performance and to create a smart environment. For data processing, transfer, and storage, this means re-evaluating current approaches and solutions to better answer user needs. A variety of solutions for specific applications and platforms exist, so a thorough and systematic analysis of existing solutions for data science, data analytics, and the methods and algorithms used in Big Data processing and storage environments is essential when designing and implementing a smart environment. Fundamental issues pertaining to smart environments (smart cities, ambient assisted living, smart houses, green houses, cyber-physical systems, etc.) are reviewed. Most current efforts still do not adequately address the heterogeneity of different distributed systems, the interoperability between them, or their resilience. This book primarily encompasses practical approaches that promote research in all aspects of data processing and data analytics across different types of systems: Cluster Computing, Grid Computing, Peer-to-Peer, and Cloud/Edge/Fog Computing, all involving elements of heterogeneity and managed by a large variety of tools and software. The main role of resource management techniques in this domain is to create suitable frameworks for developing applications and deploying them in smart environments with high performance. The book focuses on topics covering algorithms, architectures, management models, high-performance computing techniques, and large-scale distributed systems.
The aim of this graduate textbook is to provide a comprehensive advanced course in the theory of statistics covering those topics in estimation, testing, and large sample theory which a graduate student might typically need to learn as preparation for work on a Ph.D. An important strength of this book is that it provides a mathematically rigorous and even-handed account of both Classical and Bayesian inference in order to give readers a broad perspective. For example, the "uniformly most powerful" approach to testing is contrasted with available decision-theoretic approaches.
Chances Are is the first book to make statistics accessible to everyone, regardless of how much math you remember from school. Do percentages confuse you? Can you tell the difference among a mean, median, and mode? Steve Slavin can help. With Chances Are, you can actually teach yourself all the statistics you will ever need.
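To make that distinction concrete, here is a minimal Python sketch (ours, not the book's; the sample data are invented for illustration):

    # Mean, median, and mode of a small made-up sample.
    import statistics

    data = [2, 3, 3, 4, 10]
    print(statistics.mean(data))    # 4.4 -- the arithmetic average
    print(statistics.median(data))  # 3   -- the middle value when sorted
    print(statistics.mode(data))    # 3   -- the most frequent value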
Handbook of Alternative Data in Finance, Volume I motivates and challenges the reader to explore and apply Alternative Data in finance. The book provides a robust and in-depth overview of Alternative Data, including its definition, characteristics, differences from conventional data, categories of Alternative Data, Alternative Data providers, and more. The book also offers a rigorous and detailed exploration of process, application, and delivery that should be practically useful to researchers and practitioners alike. Features:
- Includes cutting-edge applications in machine learning, fintech, and more.
- Suitable for professional quantitative analysts, and as a resource for postgraduates and researchers in financial mathematics.
- Features chapters from many leading researchers and practitioners.
Machine learning is a relatively new discipline concerned with the analysis of large, multivariable datasets. It involves computationally intensive methods such as factor analysis, cluster analysis, and discriminant analysis. It is currently mainly the domain of computer scientists and is already commonly used in the social sciences, marketing research, operational research, and the applied sciences. It is virtually unused in clinical research. This is probably due to the traditional belief of clinicians in clinical trials, where multiple variables are equally balanced by the randomization process and are not further taken into account. In contrast, modern computer data files often involve hundreds of variables, such as genes and other laboratory values, and computationally intensive methods are required. This book was written as a hand-holding presentation accessible to clinicians, and as a must-read publication for those new to the methods.
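As a concrete, entirely illustrative instance of one method the blurb names, here is a minimal cluster analysis in Python, assuming scikit-learn is available; the "clinical" data are synthetic:

    # Cluster analysis on synthetic many-variable data (two latent groups).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    group_a = rng.normal(0.0, 1.0, size=(50, 50))  # 50 subjects x 50 variables
    group_b = rng.normal(1.5, 1.0, size=(50, 50))
    X = np.vstack([group_a, group_b])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # recovered cluster assignment for each subject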
This book gives a self-contained introduction to the dynamic martingale approach to marked point processes (MPPs). Based on the notion of a compensator, this approach gives a versatile tool for analyzing and describing the stochastic properties of an MPP. In particular, the authors discuss the relationship of an MPP to its compensator, and particular classes of MPPs are studied in great detail. The theory is applied to study properties of dependent marking and thinning, to prove results on absolute continuity of point process distributions, to establish sufficient conditions for stochastic ordering between point and jump processes, and to solve the filtering problem for certain classes of MPPs.
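A standard textbook example of a compensator (our illustration, not an excerpt from the book): a homogeneous Poisson process N_t with rate lambda has compensator A_t = lambda t, and subtracting it yields a martingale. In LaTeX:

    \[
      M_t = N_t - \lambda t,
      \qquad
      \mathbb{E}[\, M_t \mid \mathcal{F}_s \,] = M_s
      \quad \text{for } s \le t .
    \]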
An Introduction to R and Python for Data Analysis helps teach students to code in both R and Python simultaneously. Because R and Python can be used in similar ways, it is useful and efficient to learn both at the same time: lecturers and students can teach and learn more, save time, and reinforce the shared concepts and differences of the two systems. This tandem learning is highly useful for students, helping them to become literate in both languages and to develop skills that will be handy after their studies. The book presumes no prior experience with computing and is intended for students from a variety of backgrounds. Its side-by-side formatting helps introductory graduate students quickly grasp the basics of R and Python, with the exercises helping them teach themselves the skills they will need upon completing their course, as employers now ask for competency in both R and Python. Teachers and lecturers will also find this book useful, as a single work to help ensure their students are well trained in both languages. All data for exercises can be found here: https://github.com/tbrown122387/r_and_python_book/tree/master/data. Key features:
- Teaches R and Python in a "side-by-side" way.
- Examples are tailored to aspiring data scientists and statisticians, not software engineers.
- Designed for introductory graduate students.
- Does not assume any mathematical background.
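A small taste of the side-by-side idea (our own illustration, not an example from the book): the Python below reads a hypothetical file data.csv with a numeric column x, with the equivalent R given in comments:

    # The same task in Python and (in comments) R; pandas assumed installed.
    import pandas as pd

    df = pd.read_csv("data.csv")   # R: df <- read.csv("data.csv")
    print(df["x"].mean())          # R: mean(df$x)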
This monograph provides, for the first time, a comprehensive statistical account of composite sampling as an ingenious environmental sampling method that helps accomplish observational economy in a variety of environmental and ecological studies. Sampling consists of selection, acquisition, and quantification of a part of the population. But often what is desirable is not affordable, and what is affordable is not adequate. How do we deal with this dilemma? Operationally, composite sampling recognizes the distinction between selection, acquisition, and quantification. In certain applications, it is a common experience that the costs of selection and acquisition are not very high, but the cost of quantification, or measurement, is substantially higher. In such situations, one may select a sample sufficiently large to satisfy the requirements of representativeness and precision and then, by combining several sampling units into composites, reduce the cost of measurement to an affordable level. Composite sampling thus offers an approach to the classical dilemma of desirable versus affordable sample sizes when conventional statistical methods fail to resolve the problem. Composite sampling, at least under idealized conditions, incurs no loss of information for estimating population means. But an important limitation of the method has been the loss of information on individual sample values, such as an extremely large value. In many situations where individual sample values are of interest or concern, composite sampling methods can be suitably modified to retrieve the information on individual sample values that would otherwise be lost to compositing. In this monograph, we present statistical solutions to these and other issues that arise in applications of composite sampling.
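A small simulation sketch of the core idea (ours, with invented numbers): pooling units into composites of size k cuts the number of measurements by a factor of k, yet the estimate of the population mean is unchanged:

    # Composite sampling: measure n/k composites instead of n units.
    import numpy as np

    rng = np.random.default_rng(1)
    units = rng.lognormal(0.0, 1.0, size=120)  # acquired sample units
    k = 4                                       # composite size

    # Each composite's measured value is the average of its k constituents
    # (physically: equal aliquots mixed before a single measurement).
    composites = units.reshape(-1, k).mean(axis=1)

    print(units.mean())       # mean of 120 individual measurements
    print(composites.mean())  # same estimate from only 30 measurements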
Bayesian nonparametrics has grown tremendously in the last three decades, especially in the last few years. This book is the first systematic treatment of Bayesian nonparametric methods and the theory behind them. While the book is of special interest to Bayesians, it will also appeal to statisticians in general, because Bayesian nonparametrics offers a whole continuous spectrum of robust alternatives to the purely parametric and purely nonparametric methods of classical statistics. The book is primarily aimed at graduate students and can be used as the text for a graduate course in Bayesian nonparametrics. Though the emphasis of the book is on nonparametrics, there is a substantial chapter on the asymptotics of classical Bayesian parametric models. Jayanta Ghosh has been Director and Jawaharlal Nehru Professor at the Indian Statistical Institute and President of the International Statistical Institute. He is currently professor of statistics at Purdue University. He has been editor of Sankhya and served on the editorial boards of several journals including the Annals of Statistics. Apart from Bayesian analysis, his interests include asymptotics, stochastic modeling, high-dimensional model selection, reliability and survival analysis, and bioinformatics. R.V. Ramamoorthi is professor at the Department of Statistics and Probability at Michigan State University. He has published papers in the areas of sufficiency, invariance, comparison of experiments, nonparametric survival analysis, and Bayesian analysis. In addition to Bayesian nonparametrics, he is currently interested in Bayesian networks and graphical models. He is on the editorial board of Sankhya.
Over the last fifteen years fractal geometry has established itself as a substantial mathematical theory in its own right. The interplay between fractal geometry, analysis, and stochastics has strongly influenced recent developments in the mathematical modeling of complicated structures. This process has been driven by problems in these areas related to applications in statistical physics, biomathematics, and finance. This book is a collection of survey articles covering many of the most recent developments, such as Schramm-Loewner evolution, fractal scaling limits, exceptional sets for percolation, and heat kernels on fractals. The authors were the keynote speakers at the conference "Fractal Geometry and Stochastics IV" at Greifswald in September 2008.
For junior/senior undergraduates taking probability and statistics as applied to engineering, science, or computer science. This classic text provides a rigorous introduction to basic probability theory and statistical inference, with a unique balance between theory and methodology. Interesting, relevant applications use real data from actual studies, showing how the concepts and methods can be used to solve problems in the field. This revision focuses on improved clarity and deeper understanding.
Provides a comprehensive and accessible introduction to general insurance pricing, based on the author’s many years of experience as both a teacher and practitioner. Suitable for students taking a course in general insurance pricing, notably those studying to become an actuary through the UK Institute of Actuaries exams. No other title on the market is quite like this one: it is well suited to teaching and study, and is also an excellent guide for practitioners.
Stochastic control theory is a relatively young branch of mathematics. The beginning of its intensive development falls in the late 1950s and early 1960s. During that period an extensive literature appeared on optimal stochastic control using the quadratic performance criterion (see references in Wonham [76]). At the same time, Girsanov [25] and Howard [26] made the first steps in constructing a general theory, based on Bellman's technique of dynamic programming, developed by him somewhat earlier [4]. Two types of engineering problems engendered two different parts of stochastic control theory. Problems of the first type are associated with multistep decision making in discrete time, and are treated in the theory of discrete stochastic dynamic programming. For more on this theory, we note in addition to the work of Howard and Bellman, mentioned above, the books by Derman [8], Mine and Osaki [55], and Dynkin and Yushkevich [12]. Another class of engineering problems which encouraged the development of the theory of stochastic control involves time continuous control of a dynamic system in the presence of random noise. The case where the system is described by a differential equation and the noise is modeled as a time continuous random process is the core of the optimal control theory of diffusion processes. This book deals with this latter theory.
The primary aim of this book is to provide modern statistical techniques and theory for stochastic processes. The stochastic processes mentioned here are not restricted to the usual AR, MA, and ARMA processes. A wide variety of stochastic processes, e.g., non-Gaussian linear processes, long-memory processes, nonlinear processes, non-ergodic processes, and diffusion processes are described. The authors discuss the usual estimation and testing theory and also many other statistical methods and techniques, e.g., discriminant analysis, nonparametric methods, semiparametric approaches, higher-order asymptotic theory in view of differential geometry, the large deviation principle, and saddlepoint approximation. Because it is difficult to use the exact distribution theory, the discussion is based on asymptotic theory. The optimality of various procedures is often shown by use of local asymptotic normality (LAN), which is due to Le Cam. LAN gives a unified view of time series asymptotic theory.
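For orientation, the LAN property can be stated in its standard form (our formulation, not a quotation from the book): the local log-likelihood ratio admits a quadratic expansion,

    \[
      \log \frac{dP^{n}_{\theta + h/\sqrt{n}}}{dP^{n}_{\theta}}
      = h^{\top} \Delta_{n,\theta}
        - \tfrac{1}{2}\, h^{\top} I(\theta)\, h
        + o_{P_\theta}(1),
      \qquad
      \Delta_{n,\theta} \xrightarrow{d} N\bigl(0, I(\theta)\bigr),
    \]

where I(theta) is the Fisher information; optimality statements are then read off the limiting Gaussian experiment.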
This book presents accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Regardless of the software system used, the book describes and gives examples of the use of modern computer software for numerical linear algebra. It begins with a discussion of the basics of numerical computations, and then describes the relevant properties of matrix inverses, factorisations, matrix and vector norms, and other topics in linear algebra. The book is essentially self-contained, with the topics addressed constituting the essential material for an introductory course in statistical computing. Numerous exercises allow the text to be used for a first course in statistical computing or as supplementary text for various courses that emphasise computations.
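By way of illustration (ours; NumPy is one possible software system), the three tasks named above take a few lines of Python:

    # Solve a linear system, factor a matrix, extract eigenpairs.
    import numpy as np

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])

    x = np.linalg.solve(A, b)      # solve A x = b
    q, r = np.linalg.qr(A)         # QR factorisation
    vals, vecs = np.linalg.eig(A)  # eigenvalues and eigenvectors
    print(x, vals)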
Statistical Applications for Environmental Analysis and Risk Assessment guides readers through real-world situations and the best statistical methods used to determine the nature and extent of the problem, evaluate the potential human health and ecological risks, and design and implement remedial systems as necessary. Featuring numerous worked examples using actual data and ready-made software scripts, Statistical Applications for Environmental Analysis and Risk Assessment also includes:
- Descriptions of basic statistical concepts and principles in an informal style that does not presume prior familiarity with the subject
- Detailed illustrations of statistical applications in the environmental and related water resources fields, using real-world data in the contexts that would typically be encountered by practitioners
- Software scripts using the high-powered statistical software system R, supplemented by USEPA's ProUCL and USDOE's VSP software packages, all of which are freely available
- Coverage of frequent data sample issues such as non-detects, outliers, skewness, and sustained and cyclical trends that habitually plague environmental data samples
- Clear demonstrations of the crucial, but often overlooked, role of statistics in environmental sampling design and subsequent exposure risk assessment
Geometric Data Analysis (GDA) is the name suggested by P. Suppes (Stanford University) to designate the approach to Multivariate Statistics initiated by Benzécri as Correspondence Analysis, an approach that has become more and more used and appreciated over the years. This book presents the full formalization of GDA in terms of linear algebra - the most original and far-reaching consequential feature of the approach - and shows also how to integrate standard statistical tools such as Analysis of Variance, including Bayesian methods. Chapter 9, Research Case Studies, is nearly a book in itself; it presents the methodology in action on three extensive applications, one from medicine, one from political science, and one from education (data borrowed from the Stanford computer-based Educational Program for Gifted Youth). The readership of the book thus comprises both mathematicians interested in the applications of mathematics and researchers wishing to master an exceptionally powerful approach to statistical data analysis.
The study of scan statistics and their applications to many different scientific and engineering problems has received considerable attention in the literature recently. In addition to challenging theoretical problems, the area of scan statistics has also found exciting applications in diverse disciplines such as archaeology, astronomy, epidemiology, geography, material science, molecular biology, reconnaissance, reliability and quality control, sociology, and telecommunication. This will be clearly evident when one goes through this volume. In this volume, we have brought together a collection of experts working in this area of research in order to review some of the developments that have taken place over the years and also to present their new works and point out some open problems. With this in mind, we selected authors for this volume with some having theoretical interests and others being primarily concerned with applications of scan statistics. Our sincere hope is that this volume will thus provide a comprehensive survey of all the developments in this area of research and hence will serve as a valuable source as well as reference for theoreticians and applied researchers. Graduate students interested in this area will find this volume to be particularly useful as it points out many open challenging problems that they could pursue. This volume will also be appropriate for teaching a graduate-level special course on this topic.
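For orientation, the basic object has a simple definition (standard formulation, notation ours, not quoted from the volume): for points X_1, ..., X_N observed on the unit interval, the scan statistic with window width w is the largest count found in any window of that width,

    \[
      S_w = \max_{0 \le t \le 1 - w}
            \#\{\, i : t \le X_i < t + w \,\} .
    \]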
Modern statistics consists of methods which help in drawing inferences about the population under consideration. These populations may actually exist, or could be generated by repeated experimentation. The medium of drawing inferences about the population is the sample, which is a subset of measurements selected from the population. Each measurement in the sample is used for making inferences about the population. The populations, and also the methods of sample selection, differ from one field of science to another. Social scientists use surveys to collect the sample information, whereas physical scientists employ the method of experimentation for obtaining this information. This is because in the social sciences the factors that cause variation in the measurements on the study variable for the population units cannot be controlled, whereas in the physical sciences these factors can be controlled, at least to some extent, through proper experimental design. Several excellent books on sampling theory are available in the market. These books discuss the theory of sample surveys in great depth and detail, and are suited to postgraduate students majoring in statistics. Research workers in the field of sampling methodology can also make use of these books. However, not many suitable books are available which can be used by students and researchers in the fields of economics, social sciences, extension education, agriculture, medical sciences, business management, etc. These students and workers usually conduct sample surveys during their research projects.
One of the main difficulties of applying an evolutionary algorithm (or, as a matter of fact, any heuristic method) to a given problem is to decide on an appropriate set of parameter values. Typically these are specified before the algorithm is run and include population size, selection rate, and operator probabilities, not to mention the representation and the operators themselves. This book gives the reader a solid perspective on the different approaches that have been proposed to automate control of these parameters, as well as an understanding of their interactions. The book covers a broad area of evolutionary computation, including genetic algorithms, evolution strategies, genetic programming, and estimation of distribution algorithms, and also discusses the issues of specific parameters used in parallel implementations, multi-objective evolutionary algorithms, and practical considerations for real-world applications. It is a recommended read for researchers and practitioners of evolutionary computation and heuristic methods.
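A toy sketch of the kind of online parameter control at issue (entirely our illustration, not the book's): a (1+1) evolution strategy whose mutation step size sigma adapts during the run via the classic 1/5 success rule:

    # (1+1) evolution strategy with 1/5-success-rule step-size control,
    # minimising the sphere function. Illustrative toy only.
    import random

    def sphere(x):
        return sum(v * v for v in x)

    random.seed(0)
    dim, sigma = 5, 1.0
    parent = [random.uniform(-5, 5) for _ in range(dim)]
    f_parent = sphere(parent)

    for _ in range(2000):
        child = [v + sigma * random.gauss(0, 1) for v in parent]
        f_child = sphere(child)
        success = f_child < f_parent
        if success:
            parent, f_parent = child, f_child
        # 1/5 rule: widen the step after a success, shrink it after a
        # failure, so that roughly one mutation in five succeeds.
        sigma *= 1.5 if success else 0.9  # 0.9 is about 1.5**(-1/4)

    print(f_parent, sigma)  # fitness shrinks toward 0 as sigma adapts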
This book is intended as a text for graduate students and as a reference for workers in probability and statistics. The prerequisite is honest calculus. The material covered in Parts Two to Five inclusive requires about three to four semesters of graduate study. The introductory part may serve as a text for an undergraduate course in elementary probability theory. Numerous historical remarks about results, methods, and the evolution of various fields are an intrinsic part of the text. About a third of the second volume is devoted to conditioning and properties of sequences of various types of dependence. The other two thirds are devoted to random functions; the last part, on elements of random analysis, is more sophisticated.