Analysis of Genetic Association Studies is both a graduate-level textbook in statistical genetics and genetic epidemiology and a reference book for the analysis of genetic association studies. Students, researchers, and professionals in statistics, biostatistics, genetics, and genetic epidemiology will find its topics particularly relevant. In addition to providing derivations, the book uses real examples and simulations to illustrate step-by-step applications. Introductory chapters on probability and genetic epidemiology terminology give the reader the necessary background knowledge, and the organization of the work allows for both casual reference and close study.
Separation of signal from noise is the most fundamental problem in data analysis, arising in many fields such as signal processing, econometrics, actuarial science, and geostatistics. This book introduces the local regression method in univariate and multivariate settings, along with extensions to local likelihood and density estimation. Basic theoretical results and diagnostic tools such as cross-validation are introduced along the way. Examples illustrate the implementation of the methods using the LOCFIT software.
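LOCFIT itself is distributed as R/C software; purely to illustrate the idea of local regression described above, here is a minimal Python sketch of local linear fitting with tricube weights. The bandwidth and test data are invented for the example; in practice the bandwidth would be chosen by cross-validation, as the book describes.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Weighted least-squares line around x0; return the fitted value at x0."""
    u = np.abs(x - x0) / h
    w = np.where(u < 1, (1 - u**3) ** 3, 0.0)          # tricube kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])     # local linear design
    Xw = X * w[:, None]
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return beta[0]                                     # intercept = fit at x0

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.3, size=200)
grid = np.linspace(0.5, 9.5, 40)
fit = np.array([local_linear(x0, x, y, h=1.5) for x0 in grid])
```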
This book critically reflects on current statistical methods used in Human-Computer Interaction (HCI) and introduces a number of novel methods to the reader. It covers many techniques and approaches for exploratory data analysis, including effect and power calculations, experimental design, event history analysis, non-parametric testing, and Bayesian inference. The book discusses how to communicate statistical results fairly and presents a general set of recommendations for authors and reviewers to improve the quality of statistical analysis in HCI. Each chapter presents R code for running analyses on HCI examples and explains how the results can be interpreted. Modern Statistical Methods for HCI is aimed at researchers and graduate students who have some knowledge of "traditional" null hypothesis significance testing but who wish to improve their practice by using techniques that have recently emerged from statistics and related fields. It critically evaluates current practices within the field and supports a less rigid, procedural view of statistics in favour of fair statistical communication.
Mathematical programming has undergone a spectacular diversification in the last few decades, both at the level of mathematical research and at the level of the applications generated by its solution methods. To write a monograph dedicated to a particular domain of mathematical programming is, under such circumstances, especially difficult. In the present monograph we opt for the domain of fractional programming. Interest in this subject arises from the fact that various optimization problems in engineering and economics involve minimizing a ratio of physical and/or economic functions, for example cost/time, cost/volume, or cost/profit, or other quantities that measure the efficiency of a system. For example, the productivity of industrial systems, defined as the ratio between the services realized in a system within a given period of time and the resources utilized, is used as one of the best indicators of the quality of their operation. Such problems, where the objective function appears as a ratio of functions, constitute fractional programming problems. Due to their importance in modeling various decision processes in management science, operational research, and economics, and also due to their frequent appearance in problems that are not necessarily economic, such as information theory, numerical analysis, stochastic programming, and decomposition algorithms for large linear systems, fractional programming has received particular attention in the last three decades.
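As one concrete illustration of the class of problems just described (Dinkelbach's classical parametric scheme, not a method quoted from this monograph): to minimize f(x)/g(x) with g positive, repeatedly minimize f(x) - lambda*g(x) and update lambda to the current ratio. A minimal sketch with an invented cost/volume-style ratio:

```python
# Dinkelbach's algorithm for min f(x)/g(x) with g(x) > 0 on a compact set.
from scipy.optimize import minimize_scalar

f = lambda x: x**2 + 2.0      # "cost" (invented example)
g = lambda x: x + 1.0         # "volume", positive on [0, 5]

lam = 0.0
for _ in range(50):
    # Inner step: minimize the parametric objective f(x) - lam * g(x)
    res = minimize_scalar(lambda x: f(x) - lam * g(x), bounds=(0, 5), method="bounded")
    x = res.x
    new_lam = f(x) / g(x)     # outer step: update the ratio
    if abs(new_lam - lam) < 1e-10:
        break
    lam = new_lam

print(f"optimal ratio ~ {lam:.6f} at x ~ {x:.6f}")
```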
Statistical models and methods for lifetime and other time-to-event data are widely used in many fields, including medicine, the environmental sciences, actuarial science, engineering, economics, management, and the social sciences. For example, closely related statistical methods have been applied to the study of the incubation period of diseases such as AIDS, the remission time of cancers, life tables, the time to failure of engineering systems, employment duration, and the length of marriages. This volume contains a selection of papers based on the 1994 International Research Conference on Lifetime Data Models in Reliability and Survival Analysis, held at Harvard University. The conference brought together a varied group of researchers and practitioners to advance and promote statistical science in the many fields that deal with lifetime and other time-to-event data. The volume illustrates the depth and diversity of the field. A few of the authors have published their conference presentations in the new journal Lifetime Data Analysis (Kluwer Academic Publishers).
This monograph aims to promote original mathematical methods to determine the invariant measure of two-dimensional random walks in domains with boundaries. Such processes arise in numerous applications and are of interest in several areas of mathematical research, such as Stochastic Networks, Analytic Combinatorics, and Quantum Physics. This second edition consists of two parts. Part I is a revised upgrade of the first edition (1999), with additional recent results on the group of a random walk. The theoretical approach given therein has been developed by the authors since the early 1970s. By using Complex Function Theory, Boundary Value Problems, Riemann Surfaces, and Galois Theory, completely new methods are proposed for solving functional equations of two complex variables, which can also be applied to characterize the Transient Behavior of the walks, as well as to find explicit solutions to the one-dimensional Quantum Three-Body Problem, or to tackle a new class of Integrable Systems. Part II borrows special case-studies from queueing theory (in particular, the famous problem of Joining the Shorter of Two Queues) and enumerative combinatorics (Counting, Asymptotics). Researchers and graduate students should find this book very useful.
This book is the third edition of a successful textbook for upper-undergraduate and early graduate students, offering a solid foundation in probability theory and statistics and their application to the physical sciences, engineering, biomedical sciences, and related disciplines. It provides broad coverage ranging from conventional textbook content (probability theory, random variables and their statistics, regression, and parameter estimation) to modern methods including Markov chain Monte Carlo, resampling methods, and low-count statistics. In addition to minor corrections and adjustments to the structure of the content, particular features of this new edition include:
- Python code and machine-readable data for all examples, classic experiments, and exercises, making them more accessible to students and instructors;
- new chapters on low-count statistics, including the Poisson-based Cash statistic for regression in the low-count regime, and on contingency tables and diagnostic testing;
- an additional classic-experiment example based on testing data for SARS-CoV-2, demonstrating practical applications of the described statistical methods.
This edition inherits the main pedagogical method of earlier versions, a theory-then-application approach in which emphasis is placed first on a sound understanding of the underlying theory of a topic, which then becomes the basis for efficient and practical application of the material. Basic calculus is used in some of the derivations, but no previous background in probability and statistics is required. The book includes many numerical tables of data as well as exercises and examples to aid the reader's understanding of the topic.
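For readers unfamiliar with it, the Poisson-based Cash statistic mentioned above has a compact form, C = 2 * sum_i (m_i - n_i + n_i ln(n_i/m_i)), with the n ln(n/m) term taken as zero when n_i = 0. A minimal sketch; the convention and the toy data are ours, not taken from the book:

```python
import numpy as np

def cash_statistic(n, m):
    """Cash fit statistic for observed counts n and model-predicted counts m > 0."""
    n = np.asarray(n, dtype=float)
    m = np.asarray(m, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_term = np.where(n > 0, n * np.log(n / m), 0.0)  # 0 when n_i = 0
    return 2.0 * np.sum(m - n + log_term)

rng = np.random.default_rng(0)
model = np.full(20, 2.5)        # constant-rate model in the low-count regime
data = rng.poisson(model)       # simulated observed counts
print(f"C = {cash_statistic(data, model):.2f}")
```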
This book offers solutions to such topical problems as developing mathematical models and descriptions of typical distortions in applied forecasting problems, evaluating the robustness of traditional forecasting procedures under distortions, and more.
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point alle.' ('And I, ..., had I known how to come back from it, I would never have gone there.') - Jules Verne. 'One service mathematics has rendered the human race: it has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' - Eric T. Bell. 'The series is divergent; therefore we may be able to do something with it.' - O. Heaviside. Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series. This series, Mathematics and Its Applications, started in 1977. Now that over one hundred volumes have appeared it seems opportune to reexamine its scope. At the time I wrote, "Growing specialization and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However, the 'tree' of knowledge of mathematics and related fields does not grow only by putting forth new branches."
The series is devoted to the publication of monographs and high-level textbooks in mathematics, mathematical methods, and their applications. Apart from covering important areas of current interest, a major aim is to make topics of an interdisciplinary nature accessible to the non-specialist. The works in this series are addressed to advanced students and researchers in mathematics and theoretical physics; they can also serve as guides for lectures and seminars at the graduate level. The series de Gruyter Studies in Mathematics was founded around 35 years ago by the late Professor Heinz Bauer and Professor Peter Gabriel with the aim of establishing a series of monographs and textbooks of high standard, written by scholars with an international reputation and presenting current fields of research in pure and applied mathematics. While the editorial board of the Studies has changed over the years, its aspirations are unchanged. In times of rapid growth of mathematical knowledge, carefully written monographs and textbooks by experts are needed more than ever, not least to pave the way for the next generation of mathematicians. In this sense the editorial board and the publisher of the Studies are committed to continuing the Studies as a service to the mathematical community. Please submit any book proposals to Niels Jacob. Titles in planning include:
- Flavia Smarazzo and Alberto Tesei, Measure Theory: Radon Measures, Young Measures, and Applications to Parabolic Problems (2019);
- Elena Cordero and Luigi Rodino, Time-Frequency Analysis of Operators (2019);
- Mark M. Meerschaert, Alla Sikorskii, and Mohsen Zayernouri, Stochastic and Computational Models for Fractional Calculus, second edition (2020);
- Mariusz Lemanczyk, Ergodic Theory: Spectral Theory, Joinings, and Their Applications (2020);
- Marco Abate, Holomorphic Dynamics on Hyperbolic Complex Manifolds (2021);
- Miroslava Antic, Joeri Van der Veken, and Luc Vrancken, Differential Geometry of Submanifolds: Submanifolds of Almost Complex Spaces and Almost Product Spaces (2021);
- Kai Liu, Ilpo Laine, and Lianzhong Yang, Complex Differential-Difference Equations (2021);
- Rajendra Vasant Gurjar, Kayo Masuda, and Masayoshi Miyanishi, Affine Space Fibrations (2022).
This book develops methods for two key problems in the analysis of large-scale surveys: dealing with incomplete data and making inferences about sparsely represented subdomains. The presentation is committed to two particular methods: multiple imputation for missing data and multivariate composition for small-area estimation. The methods are presented as developments of established approaches, attending to their deficiencies, so that the change to more efficient methods can be gradual and sensitive to the management priorities in large research organisations and multidisciplinary teams, and to other reasons for inertia. The typical setting of each problem is addressed first, and then the range of applications is widened to reinforce the view that the general method is essential for modern survey analysis. The general tone of the book is not "from theory to practice" but "from current practice to better practice." The third part of the book, a single chapter, presents a method for efficient estimation under model uncertainty. It is inspired by the solution for small-area estimation and is an example of "from good practice to better theory." A strength of the presentation is its chapters of case studies, one for each problem; whenever possible, examples and illustrations are preferred to theoretical argument. The book is suitable for graduate students and researchers who are acquainted with the fundamentals of sampling theory and have a good grounding in statistical computing, or who will combine reading it with an intensive period of learning and establishing their own modern computing and graphical environment to serve them in most of their future analytical work. While some analysts might regard data imperfections and deficiencies, such as nonresponse and limited sample size, as someone else's failure that bars effective and valid analysis, this book presents them as respectable analytical and inferential challenges and as opportunities to harness computing power in the service of high-quality, socially relevant statistics. Overriding in this approach is the general principle "to do the best, for the consumer of statistical information, that can be done with what is available." The reputation of government statistics as a rigid, procedure-based, and operation-centred activity, distant from the mainstream of statistical theory and practice, is refuted most resolutely. After leaving De Montfort University in 2004, where he was a Senior Research Fellow in Statistics, Nick Longford founded the statistical research and consulting company SNTL in Leicester, England. He was awarded the first Campion Fellowship (2000-02) for methodological research in United Kingdom government statistics. He has served as Associate Editor of the Journal of the Royal Statistical Society, Series A, and the Journal of Educational and Behavioral Statistics, and as an Editor of the Journal of Multivariate Analysis. He is a member of the Editorial Board of the British Journal of Mathematical and Statistical Psychology. He is the author of two other monographs, Random Coefficient Models (Oxford University Press, 1993) and Models for Uncertainty in Educational Testing (Springer-Verlag, 1995). From the reviews: "Ultimately, this book serves as an excellent reference source to guide and improve statistical practice in survey settings exhibiting these problems." Psychometrika "I am convinced this book will be useful to practitioners...[and a] valuable resource for future research in this field."
Jan Kordos in Statistics in Transition, Vol. 7, No. 5, June 2006 "To sum up, I think this is an excellent book and it thoroughly covers methods to deal with incomplete data problems and small-area estimation. It is a useful and suitable book for survey statisticians, as well as for researchers and graduate students interested in sampling designs." Ramon Cleries Soler in Statistics and Operations Research Transactions, Vol. 30, No. 1, January-June 2006
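The multiple-imputation approach featured in the book culminates in Rubin's well-known combining rules: analyze each completed dataset separately, then pool the results. A minimal sketch with invented numbers:

```python
import numpy as np

def pool(estimates, variances):
    """Rubin's rules: pool m completed-data estimates and their variances."""
    q = np.asarray(estimates, float)   # one estimate per imputed dataset
    u = np.asarray(variances, float)   # its within-imputation variance
    m = len(q)
    q_bar = q.mean()                   # pooled point estimate
    w = u.mean()                       # average within-imputation variance
    b = q.var(ddof=1)                  # between-imputation variance
    t = w + (1 + 1 / m) * b            # total variance of q_bar
    return q_bar, t

est, var = pool([2.1, 1.9, 2.3, 2.0, 2.2], [0.04, 0.05, 0.04, 0.06, 0.05])
print(f"pooled estimate {est:.3f}, s.e. {var ** 0.5:.3f}")
```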
This book is designed to offer an accessible set of case studies and analyses of ethical dilemmas in data science. It will suit technical readers in data science who want to understand diverse ethical approaches to AI.
This book introduces data-driven remaining useful life prognosis techniques and shows how to use condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book to describe the basic theory of data-driven remaining useful life prognosis systematically and in detail. The emphasis of the book is on the stochastic models, methods, and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications that incorporate prognostic information into decision-making, such as maintenance-related decisions and the ordering of spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based prognosis, residual storage life prognosis, and prognostic information-based decision-making.
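As a minimal illustration of the data-driven idea (a standard linear Wiener degradation model, not necessarily the book's exact formulation), the remaining useful life is the first passage time of the degradation path to a failure threshold; for positive drift that time is inverse-Gaussian distributed with mean (w - x)/lambda. The rates, threshold, and data below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0
true_lam, true_sigma, w = 0.5, 0.4, 60.0       # drift, diffusion, failure threshold
# Simulated degradation path X(t) = lam*t + sigma*B(t), observed at unit intervals
path = np.cumsum(true_lam * dt + true_sigma * np.sqrt(dt) * rng.normal(size=100))

inc = np.diff(path)
lam_hat = inc.mean() / dt                      # drift estimate from increments
sigma2_hat = inc.var(ddof=1) / dt              # diffusion estimate
x_now = path[-1]                               # current degradation level
mean_rul = (w - x_now) / lam_hat               # mean of the inverse-Gaussian RUL
print(f"estimated drift {lam_hat:.3f}, mean RUL {mean_rul:.1f} time units")
```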
This is a revised and extended version of the French book. The main changes are in Chapter 1, where the former Section 1.3 is removed and the rest of the material is substantially revised. Sections 1.2.4, 1.3, 1.9, and 2.7.3 are new. Each chapter now has bibliographic notes and contains an exercises section. I would like to thank Cristina Butucea, Alexander Goldenshluger, Stephan Huckenmann, Yuri Ingster, Iain Johnstone, Vladimir Koltchinskii, Alexander Korostelev, Oleg Lepski, Karim Lounici, Axel Munk, Boaz Nadler, Alexander Nazin, Philippe Rigollet, Angelika Rohde, and Jon Wellner for their valuable remarks that helped to improve the text. I am grateful to the Centre de Recherche en Economie et Statistique (CREST) and to the Isaac Newton Institute for Mathematical Sciences, which provided an excellent environment for finishing the work on the book. My thanks also go to Vladimir Zaiats for his highly competent translation of the French original into English and to John Kimmel for being a very supportive and patient editor. Alexandre Tsybakov, Paris, June 2008. Preface to the French Edition: The tradition of considering the problem of statistical estimation as that of estimating a finite number of parameters goes back to Fisher. However, parametric models provide only an approximation, often imprecise, of the underlying statistical structure. Statistical models that explain the data in a more consistent way are often more complex: unknown elements in these models are, in general, some functions having certain properties of smoothness.
This volume contains 30 of David Brillinger's most influential papers. He is an eminent statistical scientist who has published broadly in time series and point process analysis, seismology, neurophysiology, and population biology, and each of these areas is well represented in the book. The volume has been divided into four parts, each with comments by one of Dr. Brillinger's former PhD students. His more theoretical papers are discussed by Victor Panaretos from Switzerland, the time series papers by Pedro Morettin from Brazil, the biologically oriented papers by Tore Schweder from Norway and Haiganoush Preisler from the USA, and the point process papers by Peter Guttorp from the USA. In addition, the volume contains a Statistical Science interview with Dr. Brillinger, and his bibliography.
The interaction between mathematicians, statisticians and econometricians working in actuarial sciences and finance is producing numerous meaningful scientific results. This volume introduces new ideas, in the form of four-page papers, presented at the international conference Mathematical and Statistical Methods for Actuarial Sciences and Finance (MAF), held at Universidad Carlos III de Madrid (Spain), 4th-6th April 2018. The book covers a wide variety of subjects in actuarial science and financial fields, all discussed in the context of the cooperation between the three quantitative approaches. The topics include: actuarial models; analysis of high frequency financial data; behavioural finance; carbon and green finance; credit risk methods and models; dynamic optimization in finance; financial econometrics; forecasting of dynamical actuarial and financial phenomena; fund performance evaluation; insurance portfolio risk analysis; interest rate models; longevity risk; machine learning and soft-computing in finance; management in insurance business; models and methods for financial time series analysis; models for financial derivatives; multivariate techniques for financial markets analysis; optimization in insurance; pricing; probability in actuarial sciences, insurance and finance; real world finance; risk management; solvency analysis; sovereign risk; static and dynamic portfolio selection and management; trading systems. This book is a valuable resource for academics, PhD students, practitioners, professionals and researchers, and is also of interest to other readers with quantitative background knowledge.
Stochastic models are everywhere. In manufacturing, queuing models are used to model production processes, and realistic inventory models are stochastic in nature. Stochastic models are considered in transportation and communication. Marketing models use stochastic descriptions of demands and buyers' behavior. In finance, market prices and exchange rates are modeled as stochastic processes, and insurance claims appear at random times with random amounts. To each decision problem a cost function is associated. Costs may be direct or indirect, such as loss of time, quality deterioration, loss in production, or dissatisfaction of customers. In decision making under uncertainty, the goal is to minimize the expected costs. However, in practically all realistic models the calculation of the expected costs is impossible due to model complexity, and simulation is the only practicable way of gaining insight into such models. Thus, the problem of optimal decisions can be seen as getting simulation and optimization effectively combined. The field is quite new, and yet the number of publications is enormous. This book does not even try to touch on all the work done in this area. Instead, many concepts are presented and treated with mathematical rigor, and necessary conditions for the correctness of various approaches are stated. Optimization of Stochastic Models: The Interface Between Simulation and Optimization is suitable as a text for a graduate-level course on stochastic models or as a secondary text for a graduate-level course in operations research.
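One simple way to combine simulation and optimization, offered here purely as an illustration (sample average approximation, with an invented newsvendor-style cost model): simulate one fixed batch of demands (common random numbers), estimate each candidate decision's expected cost on that batch, and take the minimizer.

```python
import numpy as np

rng = np.random.default_rng(2)
demand = rng.poisson(40, size=10_000)          # one fixed simulation batch

def expected_cost(q, demand, c_over=1.0, c_under=4.0):
    """Monte Carlo estimate of E[overage + underage cost] at order level q."""
    over = np.maximum(q - demand, 0)           # unsold stock
    under = np.maximum(demand - q, 0)          # unmet demand
    return (c_over * over + c_under * under).mean()

candidates = np.arange(20, 80)
costs = [expected_cost(q, demand) for q in candidates]
q_star = candidates[int(np.argmin(costs))]
print(f"simulated-optimal order level: {q_star}")
```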
Most financial and investment decisions are based on considerations of possible future changes and require forecasts of the evolution of the financial world. Time series and processes are the natural tools for describing the dynamic behavior of financial data, leading to the required forecasts. This book presents a survey of the empirical properties of financial time series, their descriptions by means of mathematical processes, and some implications for important financial applications used in many areas, such as risk evaluation, option pricing, and portfolio construction. The statistical tools used to extract information from raw data are introduced. Extensive multiscale empirical statistics provide a solid benchmark of stylized facts (heteroskedasticity, long memory, fat tails, leverage) against which to assess the various mathematical structures that can capture the observed regularities. The author introduces a broad range of processes and evaluates them systematically against the benchmark, summarizing the successes and limitations of these models from an empirical point of view. The outcome is that only multiscale ARCH processes with long memory, discrete multiplicative structures, and non-normal innovations are able to capture the empirical properties correctly. In particular, only a discrete time series framework allows one to capture all the stylized facts in a process, whereas the stochastic calculus used in the continuum limit is too constraining. The present volume offers various applications and extensions for this class of processes, including high-frequency volatility estimators, market risk evaluation, covariance estimation, and multivariate extensions of the processes. The book discusses many practical implications and is addressed to practitioners and quants in the financial industry, as well as to academics, including graduate (Master or PhD level) students. The prerequisites are basic statistics and some elementary financial mathematics.
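For illustration (the book's multiscale long-memory processes are richer than this), even a plain GARCH(1,1) simulation reproduces two of the stylized facts named above, fat tails and volatility clustering. The parameter values here are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
omega, alpha, beta = 0.05, 0.10, 0.85          # alpha + beta < 1: stationary
n = 50_000
r = np.empty(n)
sigma2 = omega / (1 - alpha - beta)            # start at the stationary variance
for t in range(n):
    r[t] = np.sqrt(sigma2) * rng.normal()      # return with current volatility
    sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
acf1 = np.corrcoef(r[:-1] ** 2, r[1:] ** 2)[0, 1]
print(f"excess kurtosis {kurt:.2f} (fat tails), lag-1 ACF of r^2 {acf1:.2f} (clustering)")
```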
Psychological Statistics: The Basics walks the reader through the core logic of statistical inference and provides a solid grounding in the techniques necessary to understand modern statistical methods in the psychological and behavioral sciences. Designed as a readable account of the role of statistics in the psychological sciences, the book is not a comprehensive reference for statistical methods; rather, it gives the reader an introduction to the core procedures of estimation and model comparison, which form the cornerstone of statistical inference in psychology and related fields. Instead of relying on statistical recipes, it conveys the big picture and provides a seamless transition to more advanced methods, including Bayesian model comparison. Psychological Statistics: The Basics not only serves as an excellent primer for beginners but is also a perfect refresher for graduate students, early-career psychologists, and anyone else interested in seeing the big picture of statistical inference. Concise and conversational, its highly readable tone will engage any reader who wants to learn the basics of psychological statistics.
In a stochastic network, such as those in computer/telecommunications and manufacturing, discrete units move among a network of stations where they are processed or served. Randomness may occur in the servicing and routing of units, and there may be queueing for services. This book describes several basic stochastic network processes, beginning with Jackson networks and ending with spatial queueing systems in which units, such as cellular phones, move in a space or region where they are served. The focus is on network processes that have tractable (closed-form) expressions for the equilibrium probability distribution of the numbers of units at the stations. These distributions yield network performance parameters such as expected throughputs, delays, costs, and travel times. The book is intended for graduate students and researchers in engineering, science, and mathematics interested in the basics of stochastic networks as developed over the last twenty years. Assuming a graduate course in stochastic processes without measure theory, the emphasis is on multi-dimensional Markov processes; there is also some self-contained material on point processes involving real analysis. The book contains rather complete introductions to reversible Markov processes, Palm probabilities for stationary systems, Little laws for queueing systems, and space-time Poisson processes. This material is used in describing reversible networks, waiting times at stations, travel times, and space-time flows in networks. Richard Serfozo received his Ph.D. in Industrial Engineering and Management Sciences at Northwestern University in 1969 and is currently Professor of Industrial and Systems Engineering at Georgia Institute of Technology. Prior to that he held positions at the Boeing Company, Syracuse University, and Bell Laboratories.
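A minimal sketch of the kind of computation such product-form results enable for an open Jackson network: solve the traffic equations lambda = gamma + R^T lambda, then read off utilizations and mean queue lengths station by station, each behaving like an M/M/1 queue. The three-station routing matrix and rates are invented:

```python
import numpy as np

gamma = np.array([1.0, 0.5, 0.0])              # external arrival rates
R = np.array([[0.0, 0.6, 0.3],                 # R[i, j] = P(i -> j); rows may
              [0.0, 0.0, 0.8],                 # sum to < 1 (the rest exits)
              [0.2, 0.0, 0.0]])
mu = np.array([4.0, 3.0, 3.5])                 # service rates

lam = np.linalg.solve(np.eye(3) - R.T, gamma)  # traffic equations
rho = lam / mu                                 # utilizations; all < 1 for stability
L = rho / (1 - rho)                            # mean number of units at each station
print("throughputs:", lam.round(3), " mean queue lengths:", L.round(3))
```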
This volume presents an eclectic mix of original research articles in areas covering the analysis of ordered data, stochastic modeling and biostatistics. These areas were featured in a conference held at the University of Texas at Dallas from March 7 to 9, 2014 in honor of Professor H. N. Nagaraja's 60th birthday and his distinguished contributions to statistics. The articles were written by leading experts who were invited to contribute to the volume from among the conference participants. The volume is intended for all researchers with an interest in order statistics, distribution theory, analysis of censored data, stochastic modeling, time series analysis, and statistical methods for the health sciences, including statistical genetics.
Sergei Kuznetsov is one of the top experts on measure-valued branching processes (also known as "superprocesses") and their connection to nonlinear partial differential operators. His research interests range from stochastic processes and partial differential equations to mathematical statistics, time series analysis, and statistical software; he has over 90 papers published in international research journals. His best-known contribution to probability theory is the "Kuznetsov measure." A conference honoring his 60th birthday was organized in Boulder, Colorado, in the summer of 2010, with the participation of Sergei Kuznetsov's mentor and major co-author, Eugene Dynkin. The conference focused on topics related to superprocesses, branching diffusions, and nonlinear partial differential equations; in particular, connections to the Kuznetsov measure were emphasized. Leading experts in the field as well as young researchers contributed to the conference. The meeting was organized by J. Englander and B. Rider (U. of Colorado).
- Self-contained chapters on the most important applications and methodologies in finance, which can easily be used for the reader's research or as a reference for courses on empirical finance.
- Each chapter is reproducible in the sense that the reader can replicate every single figure, table, or number by simply copy-pasting the code we provide.
- A full-fledged introduction to machine learning with tidymodels, based on tidy principles, showing how factor selection and option pricing can benefit from machine learning methods.
- Chapter 2, on accessing and managing financial data, shows how to retrieve and prepare the most important datasets in the field of financial economics: CRSP and Compustat. The chapter also contains detailed explanations of the most important data characteristics.
- Each chapter provides exercises that are based on established lectures and exercise classes and which are designed to help students dig deeper. The exercises can be used for self-study or as a source of inspiration for teaching exercises.
This book deals with methods to evaluate scientific productivity. It discusses statistical methods, deterministic and stochastic models, and numerous indexes that will help the reader understand nonlinear science dynamics and be able to develop or construct systems for the appropriate evaluation of research productivity and the management of research groups and organizations. The dynamics of science structures and systems is complex, and the evaluation of research productivity requires a combination of qualitative and quantitative methods and measures. The book has three parts. The first part is devoted to mathematical models describing the importance of science for economic growth and systems for the evaluation of research organizations of different sizes. The second part contains descriptions and discussions of numerous indexes for evaluating the productivity of researchers and groups of researchers of different sizes (up to comparisons of the research productivity of the research communities of nations). The third part discusses non-Gaussian laws connected to scientific productivity and presents various deterministic and stochastic models of science dynamics and research productivity. The book shows that many famous fat-tailed distributions, as well as many deterministic and stochastic models and processes well known from physics, the theory of extreme events, or population dynamics, also occur in the description of the dynamics of scientific systems and of the characteristics of research productivity. This is no surprise, as scientific systems are nonlinear, open, and dissipative.
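As one small concrete example of a productivity index of the kind the second part surveys (the Hirsch h-index; the book covers many others), with invented citation counts:

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))   # -> 4
```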
This book explains harmonisation techniques that can be used in survey research to align national systems of categories and definitions in such a way that comparison is possible across countries and cultures. It provides an introduction to instruments for collecting internationally comparable data of interest to survey researchers. It shows how seven key demographic and socio-economic variables can be harmonised and employed in European comparative surveys. The seven key variables discussed in detail are: education, occupation, income, activity status, private household, ethnicity, and family. These demographic and socio-economic variables are background variables that no survey can do without. They frequently have the greatest explanatory capacity to analyse social structures, and are a mirror image of the way societies are organised nationally. This becomes readily apparent when one attempts, for example, to compare national education systems. Moreover, a comparison of the national definitions of concepts such as "private household" reveals several different historically and culturally shaped underlying concepts. Indeed, some European countries do not even have a word for "private household". Hence such national definitions and categories cannot simply be translated from one culture to another. They must be harmonised.
You may like...
- Pearson Edexcel International A Level… by Joe Skrakowski, Harry Smith (Paperback, R863 / Discovery Miles 8 630)
- Investigations in the Military and… by Benjamin Apthorp Gould (Hardcover)
- Statistical Methods and Calculation… by Isabel Willemse, Peter Nyelisani (Paperback)
- Numbers, Hypotheses & Conclusions - A… by Colin Tredoux, Kevin Durrheim (Paperback)
- Statistics for Management and Economics by Gerald Keller, Nicoleta Gaciu (Paperback)
- Mathematical Statistics with… by William Mendenhall, Dennis Wackerly, … (Paperback)
- Pearson Edexcel International A Level… by Joe Skrakowski, Harry Smith (Paperback, R875 / Discovery Miles 8 750)