Probability & statistics
If you know a little bit about financial mathematics but don't yet know a lot about programming, then C++ for Financial Mathematics is for you. C++ is an essential skill for many jobs in quantitative finance, but learning it can be a daunting prospect. This book gathers together everything you need to know to price derivatives in C++ without unnecessary complexities or technicalities. It leads the reader step-by-step from programming novice to writing a sophisticated and flexible financial mathematics library. At every step, each new idea is motivated and illustrated with concrete financial examples. As employers understand, there is more to programming than knowing a computer language. As well as covering the core language features of C++, this book teaches the skills needed to write truly high quality software. These include topics such as unit tests, debugging, design patterns and data structures. The book teaches everything you need to know to solve realistic financial problems in C++. It can be used for self-study or as a textbook for an advanced undergraduate or master's level course.
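To make the destination concrete, here is a minimal sketch of the canonical first exercise in this area - Monte Carlo pricing of a European call under Black-Scholes dynamics. It is illustrative only and not taken from the book; all names are ours.

```cpp
// Minimal sketch (not from the book): Monte Carlo price of a
// European call option under risk-neutral Black-Scholes dynamics.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>

double monteCarloCallPrice(double S0, double K, double r, double sigma,
                           double T, int nPaths) {
    std::mt19937 rng(42);
    std::normal_distribution<double> z(0.0, 1.0);
    double payoffSum = 0.0;
    for (int i = 0; i < nPaths; ++i) {
        // Terminal stock price under geometric Brownian motion.
        double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T
                                  + sigma * std::sqrt(T) * z(rng));
        payoffSum += std::max(ST - K, 0.0);  // call payoff
    }
    return std::exp(-r * T) * payoffSum / nPaths;  // discounted average
}

int main() {
    // S0 = K = 100, r = 5%, sigma = 20%, T = 1 year, 100000 paths.
    std::cout << monteCarloCallPrice(100.0, 100.0, 0.05, 0.2, 1.0, 100000)
              << "\n";  // close to the closed-form value of about 10.45
}
```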
Sensitivity analysis and optimal shape design are key issues in engineering that have been affected by advances in numerical tools currently available. This book, and its supplementary online files, presents basic optimization techniques that can be used to compute the sensitivity of a given design to local change, or to improve its performance by local optimization of these data. The relevance and scope of these techniques have improved dramatically in recent years because of progress in discretization strategies, optimization algorithms, automatic differentiation, software availability, and the power of personal computers. Numerical Methods in Sensitivity Analysis and Shape Optimization will be of interest to graduate students involved in mathematical modeling and simulation, as well as engineers and researchers in applied mathematics looking for an up-to-date introduction to optimization techniques, sensitivity analysis, and optimal design.
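In the simplest setting alluded to here, the sensitivity of a cost functional $J$ to a design parameter $\theta$ can be estimated with a central finite difference (standard notation, not the book's),

$$\frac{dJ}{d\theta} \approx \frac{J(\theta + h) - J(\theta - h)}{2h},$$

while adjoint methods and automatic differentiation, both discussed in this literature, deliver the same derivative without the cost of re-solving the problem once per parameter.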
This revised edition offers an approach to information theory that is more general than the classical approach of Shannon. Classically, information is defined for an alphabet of symbols or for a set of mutually exclusive propositions (a partition of the probability space Ω) with corresponding probabilities adding up to 1. The new definition is given for an arbitrary cover of Ω, i.e. for a set of possibly overlapping propositions. The generalized information concept is called novelty, and it is accompanied by two concepts derived from it, designated as information and surprise, which describe "opposite" versions of novelty: information is related more to classical information theory, and surprise more to the classical concept of statistical significance. In the discussion of these three concepts and their interrelations, several properties or classes of covers are defined, which turn out to be lattices. The book also presents applications of these concepts, mostly in statistics and in neuroscience.
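For orientation, in standard notation rather than the book's: classical Shannon theory assigns a proposition $A$ with probability $P(A)$ the information content $I(A) = -\log_2 P(A)$, and a partition $\{A_1, \dots, A_n\}$ of Ω the average

$$H = -\sum_{i=1}^{n} P(A_i) \log_2 P(A_i).$$

Novelty, as described above, generalizes such quantities from partitions to arbitrary covers of Ω, whose member propositions may overlap.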
Gaussian linear modelling cannot address current signal processing demands. In modern contexts, such as Independent Component Analysis (ICA), progress has been made specifically by imposing non-Gaussian and/or non-linear assumptions. Hence, standard Wiener and Kalman theories, the standard computational engines for these problems, no longer enjoy their traditional hegemony in the field. In their place, diverse principles have been explored, leading to a consequent diversity in the implied computational algorithms. The traditional on-line and data-intensive preoccupations of signal processing continue to demand that these algorithms be tractable. Increasingly, full probability modelling (the so-called Bayesian approach), or partial probability modelling using the likelihood function, is the pathway for design of these algorithms. However, the results are often intractable, and so the area of distributional approximation is of increasing relevance in signal processing. The Expectation-Maximization (EM) algorithm and Laplace approximation, for example, are standard approaches to handling difficult models, but these approximations (certainty equivalence and Gaussian, respectively) are often too drastic to handle the high-dimensional, multi-modal and/or strongly correlated problems that are encountered. Since the 1990s, stochastic simulation methods have come to dominate Bayesian signal processing. Markov Chain Monte Carlo (MCMC) sampling and related methods are appreciated for their ability to simulate possibly high-dimensional distributions to arbitrary levels of accuracy. More recently, the particle filtering approach has addressed on-line stochastic simulation. Nevertheless, the wider acceptability of these methods, and, to some extent, Bayesian signal processing itself, has been undermined by the large computational demands they typically make.
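As a concrete illustration of the MCMC sampling mentioned above (an illustrative sketch, not an example from the book), here is a random-walk Metropolis sampler targeting a one-dimensional standard normal density:

```cpp
// Random-walk Metropolis sampler targeting N(0,1). Illustrative only.
#include <cmath>
#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(7);
    std::normal_distribution<double> proposal(0.0, 1.0);     // step size 1
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    // Log of the target density, up to an additive constant.
    auto logTarget = [](double x) { return -0.5 * x * x; };

    double x = 0.0, sum = 0.0, sumSq = 0.0;
    const int nSamples = 100000;
    for (int i = 0; i < nSamples; ++i) {
        double y = x + proposal(rng);                 // propose a move
        // Accept with probability min(1, target(y)/target(x)).
        if (std::log(unif(rng)) < logTarget(y) - logTarget(x)) x = y;
        sum += x;
        sumSq += x * x;
    }
    double mean = sum / nSamples;
    std::cout << "mean " << mean << ", variance "
              << sumSq / nSamples - mean * mean << "\n";  // near 0 and 1
}
```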
Written to reveal statistical deceptions often thrust upon unsuspecting journalists, this book views the use of numbers from a public perspective. Illustrating how the statistical naivete of journalists often nourishes quantitative misinformation, the author's intent is to make journalists more critical appraisers of numerical data so that in reporting them they do not deceive the public. The book frequently uses actual examples of misused statistical data reported by mass media and describes how journalists can avoid being taken in by them. Because reports of survey findings seldom give sufficient detail of the methods or the actual questions asked, this book elaborates on the questions reporters should ask about methodology and on how to detect biased questions before reporting the findings to the public. As such, it may be looked upon as an "elements of style" for reporting statistics.
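One concrete check of the kind the book encourages (a standard rule of thumb, not the book's own example): for a simple random sample of $n$ respondents, the 95% margin of error on a reported proportion $\hat{p}$ is approximately

$$\pm\, 1.96 \sqrt{\frac{\hat{p}(1 - \hat{p})}{n}} \;\le\; \pm \frac{1}{\sqrt{n}},$$

so a poll of 1,000 people cannot pin a percentage down to better than about 3 points either way.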
Meet Norm. He's 31, 5'9", just over 13 stone, and works a 39-hour week. He likes a drink, doesn't do enough exercise and occasionally treats himself to a bar of chocolate (milk). He's a pretty average kind of guy. In fact, he is the average guy in this clever and unusual take on statistical risk, chance, and how these two factors affect our everyday choices. Watch as Norm (who, like all average specimens, feels himself to be uniquely special) and his friends, careful Prudence and reckless Kelvin, turn to statistics to help them in life's endless series of choices - should I fly or take the train? Have a baby? Another drink? Or another sausage? Do a charity skydive or get a lift on a motorbike? Because chance and risk aren't just about numbers - they're about what we believe, who we trust and how we feel about the world around us. From a world expert in risk and the bestselling author of The Tiger That Isn't (and creator of BBC Radio 4's More or Less), this is a commonsense (and wildly entertaining) guide to personal risk and decoding the statistics that represent it.
This book provides comprehensive guidance on the use of sound statistical methods and on the evaluation of fatigue data of welded components and structures obtained under constant amplitude loading and used to produce S-N curves. Recommendations for analyzing fatigue data are available, although they do not deal with all the statistical treatments that may be required to utilize fatigue test results, and none of them offers specific guidelines for analyzing fatigue data obtained from tests on welded specimens. For ease of use, working sheets are provided to assist in the proper statistical assessment of experimental fatigue data concerning practical problems, giving the procedure and a numerical application as illustration.
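For context, in standard notation rather than the book's worksheets: constant amplitude fatigue results are conventionally summarized by the Basquin relation $N = C S^{-m}$ between stress range $S$ and cycles to failure $N$, which is linear on log-log axes,

$$\log N = \log C - m \log S,$$

so that the curve parameters, and the scatter about the line, can be estimated from test data by linear regression - the setting for the statistical treatments described above.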
This book introduces the fundamentals of the technology satisfaction model (TSM), supporting readers in applying the Rasch model and Structural Equation Modelling (SEM) - a multivariate technique - to higher education (HE) research. User satisfaction is traditionally measured along a single dimension. However, the TSM includes digital technologies for teaching, learning and research across three dimensions: computer efficacy, perceived ease of use and perceived usefulness. Establishing relationships among these factors is a challenge. Although commonly used in psychology to trace relationships, Rasch and SEM approaches are rarely used in educational technology or library and information science. This book, therefore, shows that combining these two analytical tools offers researchers better options for measurement and generalization in HE research. This title presents theoretical and methodological insight of use to researchers in HE.
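For readers new to the first of these tools: the Rasch model expresses the probability that person $p$ answers item $i$ positively as

$$P(X_{pi} = 1) = \frac{e^{\theta_p - b_i}}{1 + e^{\theta_p - b_i}},$$

where $\theta_p$ is the person's latent trait level and $b_i$ is the item's difficulty; SEM then relates the measured constructs (here computer efficacy, perceived ease of use and perceived usefulness) through a system of structural equations.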
Summarizes information scattered in the technical literature on a subject too new to be included in most textbooks, but which is of interest to statisticians, and to those who use statistics in science and education, at an advanced undergraduate or higher level. Overviews recent research on constructing ...
This book provides a new grade methodology for intelligent data analysis. It introduces a specific infrastructure of concepts needed to describe data analysis models and methods. This monograph is the only book presently available covering both the theory and application of grade data analysis, and it is therefore aimed at researchers and students as well as applied practitioners. The text is richly illustrated with examples and case studies and includes a short introduction to software implementing grade methods, which is available for download from the editors.
The proceedings of this conference contain keynote addresses on recent developments in geotechnical reliability and limit state design in geotechnics, together with invited lectures on topics such as the modelling of soil variability, the simulation of random fields, the probabilistic treatment of rock joints, and the probabilistic design of foundations and slopes. Other papers cover analytical techniques in geotechnical reliability, the modelling of soil properties, and the probabilistic analysis of slopes, embankments and foundations.
The first account in book form of all the essential features of the quasi-likelihood methodology, stressing its value as a general purpose inferential tool. The treatment is rather informal, emphasizing essential principles rather than detailed proofs, and readers are assumed to have a firm grounding in probability and statistics at the graduate level. Many examples of the use of the methods in both classical statistical and stochastic process contexts are provided.
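The central object of the methodology, in standard notation: if observations $y_i$ have means $\mu_i(\theta)$ and variances $V(\mu_i)$, the quasi-likelihood estimate of $\theta$ solves the quasi-score equation

$$\sum_i \frac{\partial \mu_i}{\partial \theta} \, \frac{y_i - \mu_i(\theta)}{V(\mu_i)} = 0,$$

which requires only the first two moments of the data rather than a fully specified likelihood.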
Combinatorial (or discrete) optimization is one of the most active fields in the interface of operations research, computer science, and applied mathematics. Combinatorial optimization problems arise in various applications, including communications network design, VLSI design, machine vision, airline crew scheduling, corporate planning, computer-aided design and manufacturing, database query design, cellular telephone frequency assignment, constraint directed reasoning, and computational biology. Furthermore, combinatorial optimization problems occur in many diverse areas such as linear and integer programming, graph theory, artificial intelligence, and number theory. All these problems, when formulated mathematically as the minimization or maximization of a certain function defined on some domain, have a commonality of discreteness. Historically, combinatorial optimization starts with linear programming. Linear programming has an entire range of important applications including production planning and distribution, personnel assignment, finance, allocation of economic resources, circuit simulation, and control systems. Leonid Kantorovich and Tjalling Koopmans received the Nobel Prize (1975) for their work on the optimal allocation of resources. Two important discoveries, the ellipsoid method (1979) and interior point approaches (1984), both provide polynomial time algorithms for linear programming. These algorithms have had a profound effect in combinatorial optimization. Many polynomial-time solvable combinatorial optimization problems are special cases of linear programming (e.g. matching and maximum flow). In addition, linear programming relaxations are often the basis for many approximation algorithms for solving NP-hard problems (e.g. dual heuristics).
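The common skeleton behind these applications is the linear program

$$\max_x \; c^{\mathsf{T}} x \quad \text{subject to} \quad Ax \le b, \; x \ge 0,$$

where problems such as matching and maximum flow arise as special cases with integral optimal vertices, and an LP relaxation of an NP-hard problem drops an integrality constraint such as $x \in \{0,1\}^n$ to obtain a tractable bound.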
In establishing a framework for dealing with uncertainties in software engineering, and for using quantitative measures in related decision-making, this text puts into perspective the large body of work having statistical content that is relevant to software engineering. Aimed at computer scientists, software engineers, and reliability analysts who have some exposure to probability and statistics, the content is pitched at a level appropriate for research workers in software reliability, and for graduate-level courses in applied statistics, computer science, operations research, and software engineering.
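A representative example of the models in this literature (our choice of illustration; the book's own coverage may differ) is the Goel-Okumoto reliability growth model, in which software failures follow a non-homogeneous Poisson process with mean value function

$$m(t) = a \left(1 - e^{-bt}\right),$$

where $a$ is the expected total number of faults and $b$ is the per-fault detection rate.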
A state-of-the-art edited survey covering all aspects of sampling theory. Theory, methods and applications are discussed in authoritative expositions ranging from multi-dimensional signal analysis to wavelet transforms. The book is an essential up-to-date resource.
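The classical starting point for this material is the Shannon sampling theorem: a signal $f$ band-limited to $W$ hertz is reconstructed exactly from samples taken at rate $2W$,

$$f(t) = \sum_{n=-\infty}^{\infty} f\!\left(\frac{n}{2W}\right) \operatorname{sinc}(2Wt - n),$$

with $\operatorname{sinc}(x) = \sin(\pi x)/(\pi x)$; the expositions described above extend this idea to multiple dimensions and to wavelet representations.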
Support achievement in the latest syllabus (9709), for examination from 2020, with a stretching, practice-driven approach that builds the advanced skills required for Cambridge exam success and progression to further study. This new edition is fully aligned with the Probability & Statistics 1 part of the latest International AS & A Level syllabus, and contains a comprehensive mapping grid so you can be sure of complete support. Get students ready for higher education with a focus on real-world application. From parabolic reflectors to technology in sport, up-to-date, international examples show how mathematics is used in real life. Students have plenty of opportunities to hone their skills with extensive graduated practice and thorough worked examples. Plus, give students realistic practice for their exams with exam-style questions covering every topic. Answers are included in the back of the book with full step-by-step solutions for all exercises and exam-style questions available on the accompanying support site. The online Student Book will be available on Oxford Education Bookshelf until 2028. Access is facilitated via a unique code, which is sent in the mail. The code must be linked to an email address, creating a user account. Access may be transferred once to a new user, once the initial user no longer requires access. You will need to contact your local Educational Consultant to arrange this.
Support achievement in the latest syllabus (9709), for examination from 2020, with a stretching, practice-driven approach that builds the advanced skills required for Cambridge exam success and progression to further study. This new edition is fully aligned with the Probability & Statistics 2 part of the latest International AS & A Level syllabus, and contains a comprehensive mapping grid so you can be sure of complete support. Get students ready for higher education with a focus on real-world application. From parabolic reflectors to technology in sport, up-to-date, international examples show how mathematics is used in real life. Students have plenty of opportunities to hone their skills with extensive graduated practice and thorough worked examples. Plus, give students realistic practice for their exams with exam-style questions covering every topic. Answers are included in the back of the book with full step-by-step solutions for all exercises and exam-style questions available on the accompanying support site. The online Student Book will be available on Oxford Education Bookshelf until 2028. Access is facilitated via a unique code, which is sent in the mail. The code must be linked to an email address, creating a user account. Access may be transferred once to a new user, once the initial user no longer requires access. You will need to contact your local Educational Consultant to arrange this.
Features: description of basic ROC methodology; R and Stata code; example datasets; not too technical; many topics not included in other books.
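For orientation (standard definitions, not tied to this book's notation): an ROC curve plots sensitivity against $1 - \text{specificity}$ as the decision threshold varies, and for continuous scores the area under the curve has the interpretation

$$\mathrm{AUC} = P(X_D > X_H),$$

the probability that a randomly chosen diseased subject scores higher than a randomly chosen healthy one.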
This book deals with the development of methodology for the analysis of truncated and censored sample data. It is primarily intended as a handbook for practitioners who need simple and efficient methods for the analysis of incomplete sample data.
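The core device for such data, in standard notation: with density $f_\theta$ and survival function $S_\theta = 1 - F_\theta$, a right-censored observation at $c_j$ contributes $S_\theta(c_j)$ in place of a density term, giving the likelihood

$$L(\theta) = \prod_{i \,\in\, \text{observed}} f_\theta(x_i) \prod_{j \,\in\, \text{censored}} S_\theta(c_j),$$

while truncation instead rescales the density to the observable region, e.g. $f_\theta(x) / P_\theta(X > \tau)$ for observations only seen when they exceed $\tau$.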
Features: provides a uniquely historical perspective on the mathematical underpinnings of a comprehensive list of games; suitable for a broad audience of differing mathematical levels - anyone with a passion for games, game theory, and mathematics will enjoy this book, whether they be students, academics, or game enthusiasts; covers a wide selection of topics at a level that can be appreciated on a historical, recreational, and mathematical level.
Maintaining the excellent features that made the first edition so popular, this outstanding reference/text presents the only comprehensive treatment of the theory of point processes and statistical inference for point processes, highlighting both point processes on the real line and spatial point processes. Thoroughly updated and revised to reflect changes since publication of the first edition, the expanded Second Edition now contains a better organized and easier-to-understand treatment of stationary point processes ... expanded treatment of the multiplicative intensity model ... expanded treatment of survival analysis ... broadened consideration of applications ... an expanded and extended bibliography with over 1,000 references ... and more than 300 end-of-chapter exercises.
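As a minimal illustration of the book's subject matter (our sketch, not an example from it), the following simulates a homogeneous Poisson process on $[0, T]$ by accumulating exponential inter-arrival times:

```cpp
// Simulate the event times of a homogeneous Poisson process with
// rate lambda on [0, T] via exponential inter-arrival gaps.
#include <iostream>
#include <random>
#include <vector>

std::vector<double> poissonProcess(double lambda, double T, std::mt19937& rng) {
    std::exponential_distribution<double> gap(lambda);
    std::vector<double> times;
    for (double t = gap(rng); t < T; t += gap(rng))
        times.push_back(t);
    return times;
}

int main() {
    std::mt19937 rng(1);
    auto times = poissonProcess(2.0, 10.0, rng);
    // With lambda = 2 and T = 10, expect about 20 events.
    std::cout << times.size() << " events";
    if (!times.empty()) std::cout << ", first at t = " << times.front();
    std::cout << "\n";
}
```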
Next Generation Sequencing (NGS) is the latest high-throughput technology to revolutionize genomic research. NGS generates massive genomic datasets that play a key role in the big data phenomenon that surrounds us today. To extract signals from high-dimensional NGS data and make valid statistical inferences and predictions, novel data analytic and statistical techniques are needed. This book contains 20 chapters written by prominent statisticians working with NGS data. The topics range from basic preprocessing and analysis with NGS data to more complex genomic applications such as copy number variation and isoform expression detection. Research statisticians who want to learn about this growing and exciting area will find this book useful. In addition, many chapters from this book could be included in graduate-level classes in statistical bioinformatics for training future biostatisticians who will be expected to deal with genomic data in basic biomedical research, genomic clinical trials and personalized medicine. About the editors: Somnath Datta is Professor and Vice Chair of Bioinformatics and Biostatistics at the University of Louisville. He is Fellow of the American Statistical Association, Fellow of the Institute of Mathematical Statistics and Elected Member of the International Statistical Institute. He has contributed to numerous research areas in Statistics, Biostatistics and Bioinformatics. Dan Nettleton is Professor and Laurence H. Baker Endowed Chair of Biological Statistics in the Department of Statistics at Iowa State University. He is Fellow of the American Statistical Association and has published research on a variety of topics in statistics, biology and bioinformatics.
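A typical example of the modelling choices involved (standard in the RNA-seq literature; whether particular chapters adopt it is our assumption): read counts $Y_{gi}$ for gene $g$ in sample $i$ are commonly modelled as negative binomial,

$$Y_{gi} \sim \mathrm{NB}(\mu_{gi}, \phi_g), \qquad \operatorname{Var}(Y_{gi}) = \mu_{gi} + \phi_g \mu_{gi}^2,$$

where the overdispersion parameter $\phi_g$ captures biological variability that a Poisson model would understate.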
This is the first technical book that considers tests as public tools and examines how to engineer and process test data, extract the structure within the data to be visualized, and thereby make test results useful for students, teachers, and society. The author does not differentiate test data analysis from data engineering and information visualization. This monograph introduces the following methods of engineering or processing test data, including the latest machine learning techniques: classical test theory (CTT), item response theory (IRT), latent class analysis (LCA), latent rank analysis (LRA), biclustering (co-clustering), and the Bayesian network model (BNM). CTT and IRT are methods for analyzing test data and evaluating students' abilities on a continuous scale. LCA and LRA assess examinees by classifying them into nominal and ordinal clusters, respectively, where the adequate number of clusters is estimated from the data. Biclustering classifies examinees into groups (latent clusters) while classifying items into fields (factors). In particular, the infinite relational model discussed in this book is a biclustering method feasible under the condition that neither the number of groups nor the number of fields is known beforehand. Additionally, local dependence LRA, local dependence biclustering, and the bicluster network model are methods that search for and visualize inter-item (or inter-field) network structure using the mechanism of BNM. As this book offers a new perspective on test data analysis methods, it is certain to broaden readers' view of the field.
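To make the IRT portion concrete, here is a small sketch (illustrative, not the book's code) of the two-parameter logistic item response function and the likelihood of a response pattern it induces:

```cpp
// Two-parameter logistic (2PL) IRT model: probability that an
// examinee with ability theta answers an item correctly.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

struct Item {
    double a;  // discrimination
    double b;  // difficulty
};

double pCorrect(double theta, const Item& item) {
    return 1.0 / (1.0 + std::exp(-item.a * (theta - item.b)));
}

int main() {
    std::vector<Item> test = {{1.2, -0.5}, {0.8, 0.0}, {1.5, 1.0}};
    std::vector<int> responses = {1, 1, 0};  // right, right, wrong

    // Likelihood of the response pattern at a fixed ability level;
    // an IRT program would maximize (or integrate) this over theta.
    double theta = 0.3, likelihood = 1.0;
    for (std::size_t i = 0; i < test.size(); ++i) {
        double p = pCorrect(theta, test[i]);
        likelihood *= responses[i] ? p : 1.0 - p;
    }
    std::cout << "likelihood at theta = 0.3: " << likelihood << "\n";
}
```

(The Rasch model mentioned elsewhere in this list is the special case with all discriminations $a$ equal to 1.)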
This book provides a brief but accessible introduction to a set of related mathematical ideas that have proved useful in understanding the brain and behaviour. If you record the eye movements of a group of people watching a riverside scene then some will look at the river, some will look at the barge by the side of the river, some will look at the people on the bridge, and so on, but if a duck takes off then everybody will look at it. How come the brain is so adept at processing such biological objects? In this book it is shown that brains are especially suited to exploiting the geometric properties of such objects. Central to the geometric approach is the concept of a manifold, which extends the idea of a surface to many dimensions. The manifold can be specified by collections of n-dimensional data points or by the paths of a system through state space. Just as tangent planes can be used to analyse the local linear behaviour of points on a surface, so the extension to tangent spaces can be used to investigate the local linear behaviour of manifolds. Most of the geometric techniques introduced are about how to do things with tangent spaces. Examples of the geometric approach to neuroscience include the analysis of colour and spatial vision measurements and the control of eye and arm movements. Additional examples are used to extend the applications of the approach and to show that it leads to new techniques for investigating neural systems. An advantage of following a geometric approach is that it is often possible to illustrate the concepts visually and all the descriptions of the examples are complemented by comprehensively captioned diagrams. The book is intended for a reader with an interest in neuroscience who may have been introduced to calculus in the past but is not aware of the many insights obtained by a geometric approach to the brain. Appendices contain brief reviews of the required background knowledge in neuroscience and calculus.
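The key linearization behind this approach, in standard notation: near a point $x_0$ on a manifold, a smooth map $f$ is approximated on the tangent space by its Jacobian,

$$f(x) \approx f(x_0) + J_f(x_0)(x - x_0),$$

and the geometric techniques described above amount to doing linear algebra with such tangent-space approximations.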
You may like...
Integrated Population Biology and… - Arni S.R. Srinivasa Rao, C.R. Rao (Hardcover) - R6,219 (Discovery Miles 62 190)
Statistics for Management and Economics - Gerald Keller, Nicoleta Gaciu (Paperback)
Order Statistics: Applications, Volume… - Narayanaswamy Balakrishnan, C.R. Rao (Hardcover) - R3,377 (Discovery Miles 33 770)