Books > Science & Mathematics > Mathematics > Probability & statistics
A comprehensive guide to everything scientists need to know about data management, this book is essential for researchers who need to learn how to organize, document and take care of their own data. Researchers in all disciplines are faced with the challenge of managing the growing amounts of digital data that are the foundation of their research. Kristin Briney offers practical advice and clearly explains policies and principles, in an accessible and in-depth text that will allow researchers to understand and achieve the goal of better research data management. Data Management for Researchers includes sections on:
* The data problem - an introduction to the growing importance and challenges of using digital data in research. Covers both the inherent problems with managing digital information and how the research landscape is changing to give more value to research datasets and code.
* The data lifecycle - a framework for data's place within the research process and how data's role is changing. Greater emphasis on data sharing and data reuse will not only change the way we conduct research but also how we manage research data.
* Planning for data management - covers the many aspects of data management and how to put them together in a data management plan. This section also includes sample data management plans.
* Documenting your data - an often overlooked part of the data management process, but one that is critical to good management; data without documentation are frequently unusable.
* Organizing your data - explains how to keep your data in order using organizational systems and file naming conventions. This section also covers using a database to organize and analyze content.
* Improving data analysis - covers managing information through the analysis process. This section starts by comparing the management of raw and analyzed data and then describes ways to make analysis easier, such as spreadsheet best practices. It also examines practices for research code, including version control systems.
* Managing secure and private data - many researchers are dealing with data that require extra security. This section outlines what data fall into this category and some of the policies that apply, before addressing the best practices for keeping data secure.
* Short-term storage - deals with the practical matters of storage and backup and covers the many options available. This section also goes through the best practices to ensure that data are not lost.
* Preserving and archiving your data - digital data can have a long life if properly cared for. This section covers managing data in the long term, including choosing good file formats and media, as well as determining who will manage the data after the end of the project.
* Sharing/publishing your data - addresses how to make data sharing across research groups easier, as well as how and why to publicly share data. This section covers intellectual property and licenses for datasets, before ending with the altmetrics that measure the impact of publicly shared data.
* Reusing data - as more data are shared, it becomes possible to use outside data in your research. This chapter discusses strategies for finding datasets and lays out how to cite data once you have found them.
This book is designed for active scientific researchers but it is useful for anyone who wants to get more from their data: academics, educators, professionals or anyone who teaches data management, sharing and preservation. "An excellent practical treatise on the art and practice of data management, this book is essential to any researcher, regardless of subject or discipline." -Robert Buntrock, Chemical Information Bulletin
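The file-naming advice in the organization section can be made concrete. The sketch below is not from the book; the naming scheme and helper name are illustrative assumptions showing the kind of sortable, self-documenting convention such guides typically recommend:

```python
from datetime import date

def data_filename(project, description, version, when=None, ext="csv"):
    """Build a sortable, self-documenting file name:
    YYYY-MM-DD_project_description_vNN.ext
    ISO 8601 dates sort chronologically as plain strings."""
    when = when or date.today()
    slug = description.lower().replace(" ", "-")
    return f"{when.isoformat()}_{project}_{slug}_v{version:02d}.{ext}"

print(data_filename("assay", "raw absorbance", 3, date(2024, 5, 1)))
# 2024-05-01_assay_raw-absorbance_v03.csv
```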
Peter Goos, Department of Statistics, University of Leuven, Faculty of Bio-Science Engineering and University of Antwerp, Faculty of Applied Economics, Belgium; David Meintrup, Department of Mathematics and Statistics, University of Applied Sciences Ingolstadt, Faculty of Mechanical Engineering, Germany
A thorough presentation of introductory statistics and probability theory, with numerous examples and applications using JMP. JMP: Graphs, Descriptive Statistics and Probability provides an accessible and thorough overview of the most important descriptive statistics for nominal, ordinal and quantitative data, with particular attention to graphical representations. The authors distinguish their approach from many modern textbooks on descriptive statistics and probability theory by offering a combination of theoretical and mathematical depth, and clear and detailed explanations of concepts. Throughout the book, the user-friendly, interactive statistical software package JMP is used for calculations, the computation of probabilities and the creation of figures. The examples are explained in detail, and accompanied by step-by-step instructions and screenshots. The reader will therefore develop an understanding of both the statistical theory and its applications. Traditional graphs such as needle charts, histograms and pie charts are included, as well as the more modern mosaic plots, bubble plots and heat maps. The authors discuss probability theory, particularly discrete probability distributions and continuous probability densities, including the binomial and Poisson distributions, and the exponential, normal and lognormal densities. They use numerous examples throughout to illustrate these distributions and densities. Key features:
* Introduces each concept with practical examples and demonstrations in JMP.
* Provides the statistical theory including detailed mathematical derivations.
* Presents illustrative examples in each chapter accompanied by step-by-step instructions and screenshots to help develop the reader's understanding of both the statistical theory and its applications.
* A supporting website with data sets and other teaching materials.
This book is equally aimed at students in engineering, economics and natural sciences who take classes in statistics, as well as at masters/advanced students in applied statistics and probability theory. For teachers of applied statistics, this book provides a rich resource of course material, examples and applications.
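The binomial and Poisson probabilities the book computes in JMP can also be checked by hand. A brief sketch (independent of the book's JMP instructions; the parameter values are mine) of the two probability mass functions:

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return lam**k * exp(-lam) / factorial(k)

print(binomial_pmf(3, 10, 0.5))        # 0.1171875 (= 120/1024)
print(round(poisson_pmf(2, 3.0), 5))   # 0.22404
```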
Congruences are ubiquitous in computer science, engineering, mathematics, and related areas. Developing techniques for finding (the number of) solutions of congruences is an important problem. But there are many scenarios in which we are interested in only a subset of the solutions; in other words, there are some restrictions. What do we know about these restricted congruences, their solutions, and applications? This book introduces the tools that are needed when working on restricted congruences and then systematically studies a variety of restricted congruences. Restricted Congruences in Computing defines several types of restricted congruence, obtains explicit formulae for the number of their solutions using a wide range of tools and techniques, and discusses their applications in cryptography, information security, information theory, coding theory, string theory, quantum field theory, parallel computing, artificial intelligence, computational biology, discrete mathematics, number theory, and more. This is the first book devoted to restricted congruences and their applications. It will be of interest to graduate students and researchers across computer science, electrical engineering, and mathematics.
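To make the notion concrete: one classic restricted congruence counts solutions of x1 + ... + xk ≡ b (mod n) with each xi coprime to n. A brute-force sketch (the example values are illustrative, not taken from the book, which derives explicit formulae instead):

```python
from itertools import product
from math import gcd

def count_restricted(n, b, k):
    """Count solutions of x1 + ... + xk = b (mod n)
    under the restriction gcd(xi, n) = 1 for every i."""
    units = [x for x in range(n) if gcd(x, n) == 1]
    return sum(1 for xs in product(units, repeat=k)
               if sum(xs) % n == b)

print(count_restricted(5, 0, 2))  # 4: (1,4), (2,3), (3,2), (4,1)
print(count_restricted(5, 1, 2))  # 3: (2,4), (3,3), (4,2)
```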
This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statistical paradigm. Key features:
* Provides an accessible introduction to pragmatic maximum likelihood modelling.
* Covers more advanced topics, including general forms of latent variable models (including non-linear and non-normal mixed-effects and state-space models) and the use of maximum likelihood variants, such as estimating equations, conditional likelihood, restricted likelihood and integrated likelihood.
* Adopts a practical approach, with a focus on providing the relevant tools required by researchers and practitioners who collect and analyze real data.
* Presents numerous examples and case studies across a wide range of applications including medicine, biology and ecology.
* Features applications from a range of disciplines, with implementation in R, SAS and/or ADMB.
* Provides all program code and software extensions on a supporting website.
* Confines supporting theory to the final chapters to maintain a readable and pragmatic focus in the preceding chapters.
This book is not just an accessible and practical text about maximum likelihood; it is a comprehensive guide to modern maximum likelihood estimation and inference. It will be of interest to readers of all levels, from novice to expert. It will be of great benefit to researchers, and to students of statistics from senior undergraduate to graduate level. For use as a course text, exercises are provided at the end of each chapter.
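As a worked illustration of the method itself (a generic textbook example, not code from this book's website): the maximum likelihood estimate of the rate of an exponential distribution has the closed form 1 / (sample mean), which can be confirmed by checking the log-likelihood at nearby rates:

```python
from math import log

def exp_loglik(lam, data):
    """Log-likelihood of rate lam for i.i.d. Exponential(lam) data."""
    return len(data) * log(lam) - lam * sum(data)

data = [1.0, 2.0, 3.0, 2.0]
mle = 1 / (sum(data) / len(data))   # closed form: 1 / sample mean
print(mle)  # 0.5

# The closed-form MLE beats nearby candidate rates:
assert exp_loglik(mle, data) > exp_loglik(0.4, data)
assert exp_loglik(mle, data) > exp_loglik(0.6, data)
```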
Economic theories can be expressed in words, numbers, graphs and symbols. The existing traditional economics textbooks cover all four methods, but the general focus is often more on writing about the theory and methods, with few practical examples. With an increasing number of universities having introduced mathematical economics at undergraduate level, Basic mathematics for economic students aims to fill this gap in the field. Basic mathematics for economic students begins with a comprehensive chapter on basic mathematical concepts and methods (suitable for self-study, revision or tutorial purposes) to ensure that students have the necessary foundation. The book is written in an accessible style and is extremely practical. Numerous mathematical economics examples and exercises are provided as well as fully worked solutions using numbers, graphs and symbols. Basic mathematics for economic students is aimed at all economics students. It focuses on quantitative aspects and especially complements the two highly popular theoretical economics textbooks Understanding microeconomics and Understanding macroeconomics, both written by Philip Mohr and published by Van Schaik.
This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size but possibly much larger. In this sense, the proposed theory can be called "essentially multiparametric." It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters.
An introduction to the mathematical theory and financial models developed and used on Wall Street. Providing both a theoretical and practical approach to the underlying mathematical theory behind financial models, Measure, Probability, and Mathematical Finance: A Problem-Oriented Approach presents important concepts and results in measure theory, probability theory, stochastic processes, and stochastic calculus. Measure theory is indispensable to the rigorous development of probability theory and is also necessary to properly address martingale measures, the change of numeraire theory, and LIBOR market models. In addition, probability theory is presented to facilitate the development of stochastic processes, including martingales and Brownian motions, while stochastic processes and stochastic calculus are discussed to model asset prices and develop derivative pricing models. The authors promote a problem-solving approach when applying mathematics in real-world situations, and readers are encouraged to address theorems and problems with mathematical rigor. In addition, Measure, Probability, and Mathematical Finance features:
* A comprehensive list of concepts and theorems from measure theory, probability theory, stochastic processes, and stochastic calculus
* Over 500 problems with hints and select solutions to reinforce basic concepts and important theorems
* Classic derivative pricing models in mathematical finance that have been developed and published since the seminal work of Black and Scholes
Measure, Probability, and Mathematical Finance: A Problem-Oriented Approach is an ideal textbook for introductory quantitative courses in business, economics, and mathematical finance at the upper-undergraduate and graduate levels. The book is also a useful reference for readers who need to build their mathematical skills in order to better understand the mathematical theory of derivative pricing models.
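For instance, the Black-Scholes call price mentioned above needs only the standard normal CDF, which the error function provides. A minimal sketch (the parameter values are illustrative, not from the book):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call: S = K = 100, r = 5%, sigma = 20%, T = 1 year.
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 4))  # 10.4506
```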
Now updated in a valuable new edition--this user-friendly book focuses on understanding the "why" of mathematical statistics. Probability and Statistical Inference, Second Edition introduces key probability and statistical concepts through non-trivial, real-world examples and promotes the development of intuition rather than simple application. With its coverage of the recent advancements in computer-intensive methods, this update successfully provides the comprehensive tools needed to develop a broad understanding of the theory of statistics and its probabilistic foundations. This outstanding new edition continues to encourage readers to recognize and fully understand the why, not just the how, behind the concepts, theorems, and methods of statistics. Clear explanations are presented and applied to various examples that help to impart a deeper understanding of theorems and methods--from fundamental statistical concepts to computational details. Additional features of this Second Edition include:
* A new chapter on random samples
* Coverage of computer-intensive techniques in statistical inference featuring Monte Carlo and resampling methods, such as bootstrap and permutation tests, bootstrap confidence intervals with supporting R codes, and additional examples available via the book's FTP site
* Treatment of survival and hazard functions, methods of obtaining estimators, and Bayes estimation
* Real-world examples that illuminate presented concepts
* Exercises at the end of each section
Providing a straightforward, contemporary approach to modern-day statistical applications, Probability and Statistical Inference, Second Edition is an ideal text for advanced undergraduate- and graduate-level courses in probability and statistical inference. It also serves as a valuable reference for practitioners in any discipline who wish to gain further insight into the latest statistical tools.
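The percentile bootstrap confidence interval mentioned among the new features follows a simple recipe: resample with replacement, recompute the statistic, and read off percentiles. A minimal sketch (in Python rather than the book's supporting R code, with made-up data):

```python
import random
from statistics import mean

def bootstrap_ci(data, stat=mean, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

data = [4.1, 5.3, 3.8, 6.0, 4.7, 5.5, 4.9, 5.1]
lo, hi = bootstrap_ci(data)
print(lo, hi)  # an interval bracketing the sample mean (4.925)
```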
This book presents an introduction to linear univariate and multivariate time series analysis, providing brief theoretical insights into each topic, and from the beginning illustrating the theory with software examples. As such, it quickly introduces readers to the peculiarities of each subject from both theoretical and practical points of view. It also includes numerous examples and real-world applications that demonstrate how to handle different types of time series data. The associated software package, SSMMATLAB, is written in MATLAB and also runs on the free OCTAVE platform. The book focuses on linear time series models using a state space approach, with the Kalman filter and smoother as the main tools for model estimation, prediction and signal extraction. A chapter on state space models describes these tools and provides examples of their use with general state space models. Other topics discussed in the book include ARIMA, transfer function, and structural models; signal extraction using the canonical decomposition in the univariate case; and VAR, VARMA, cointegrated VARMA, VARX, VARMAX, and multivariate structural models in the multivariate case. It also addresses spectral analysis, the use of fixed filters in a model-based approach, and automatic model identification procedures for ARIMA and transfer function models in the presence of outliers, interventions, complex seasonal patterns and other effects like Easter, trading day, etc. This book is intended for both students and researchers in various fields dealing with time series. The software provides numerous automatic procedures to handle common practical situations, but at the same time, readers with programming skills can write their own programs to deal with specific problems.
Although the theoretical introduction to each topic is kept to a minimum, readers can consult the companion book 'Multivariate Time Series With Linear State Space Structure', by the same author, if they require more details.
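The Kalman filter at the heart of the state space approach reduces, in the scalar local level model, to a few recursions. A toy sketch (independent of SSMMATLAB; the noise variances and data are illustrative assumptions):

```python
def local_level_filter(ys, q=0.1, r=1.0, a0=0.0, p0=1e6):
    """Kalman filter for the local level model:
       y_t = a_t + eps_t (var r),  a_{t+1} = a_t + eta_t (var q).
    Returns filtered state means and variances."""
    a, p = a0, p0
    means, variances = [], []
    for y in ys:
        p = p + q                # predict: state follows a random walk
        k = p / (p + r)          # Kalman gain
        a = a + k * (y - a)      # update mean toward the observation
        p = (1 - k) * p          # posterior variance shrinks
        means.append(a)
        variances.append(p)
    return means, variances

ys = [1.2, 0.9, 1.1, 1.4, 1.0]
means, variances = local_level_filter(ys)
print(round(means[-1], 3), round(variances[-1], 3))
```

With a diffuse prior (large p0), the first update essentially adopts the first observation; thereafter the posterior variance settles quickly to its steady-state value.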
This book shows how to decompose high-dimensional microarrays into small subspaces (Small Matryoshkas, SMs), statistically analyze them, and perform cancer gene diagnosis. The information is useful for genetic experts, anyone who analyzes genetic data, and students to use as a practical textbook. Discriminant analysis is the best approach for microarrays consisting of normal and cancer classes. Microarrays are linearly separable data (LSD, Fact 3). However, because most linear discriminant functions (LDFs) cannot discriminate LSD theoretically and error rates are high, no one had discovered Fact 3 until now. Hard-margin SVM (H-SVM) and Revised IP-OLDF (RIP) can find Fact 3 easily. LSD has the Matryoshka structure and is easily decomposed into many SMs (Fact 4). Because all SMs are small samples and LSD, statistical methods analyze SMs easily. However, useful results cannot be obtained. On the other hand, H-SVM and RIP can discriminate the two classes in each SM entirely. RatioSV is the ratio of the SV distance to the discriminant range. The maximum RatioSVs of six microarrays are over 11.67%. This fact shows that SV separates the two classes by a window width of 11.67%. Such easy discrimination has been unresolved since 1970. The reason is revealed by the facts presented here, so this book can be read and enjoyed like a mystery novel. Many studies point out that it is difficult to separate signal and noise in a high-dimensional gene space. However, the definition of the signal is not clear. Convincing evidence is presented that LSD is a signal. Statistical analysis of the genes contained in the SM cannot provide useful information, but it shows that the discriminant score (DS) discriminated by RIP or H-SVM is easily LSD. For example, the Alon microarray has 2,000 genes, which can be divided into 66 SMs. If 66 DSs are used as variables, the result is 66-dimensional data. These signal data can be analyzed to find malignancy indicators by principal component analysis and cluster analysis.
This handy supplement shows students how to come to the answers shown in the back of the text. It includes solutions to all of the odd-numbered exercises.
The book contains a detailed treatment of thermodynamic formalism on general compact metrizable spaces. Topological pressure, topological entropy, the variational principle, and equilibrium states are presented in detail. Abstract ergodic theory is also given significant attention. Ergodic theorems, ergodicity, and Kolmogorov-Sinai metric entropy are fully explored. Furthermore, the book gives the reader an opportunity to find a rigorous presentation of thermodynamic formalism for distance expanding maps and, in particular, subshifts of finite type over a finite alphabet. It also provides a fairly complete treatment of subshifts of finite type over a countable alphabet. Transfer operators, Gibbs states and equilibrium states are, in this context, introduced and dealt with. Their relations are explored. All of this is applied to fractal geometry centered around various versions of Bowen's formula in the context of expanding conformal repellors, limit sets of conformal iterated function systems and conformal graph directed Markov systems. A unique introduction to iteration of rational functions is given with emphasis on various phenomena caused by rationally indifferent periodic points. Also, a fairly full account of the classical theory of Shub's expanding endomorphisms is given; it has not previously had a book presentation in the English-language mathematical literature.
This volume describes how to develop Bayesian thinking, modelling and computation from philosophical, methodological and applied points of view. It further describes parametric and nonparametric Bayesian methods for modelling and how to use modern computational methods to summarize inferences using simulation. The book covers a wide range of topics, including objective and subjective Bayesian inference, with a variety of applications in modelling categorical, survival, spatial, spatiotemporal, epidemiological, software reliability, small area and microarray data. The book concludes with a chapter on how to teach Bayesian thinking to nonstatisticians.
This book provides a unique and balanced approach to probability, statistics, and stochastic processes. Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area. The Second Edition features new coverage of analysis of variance (ANOVA), consistency and efficiency of estimators, asymptotic theory for maximum likelihood estimators, empirical distribution function and the Kolmogorov-Smirnov test, general linear models, multiple comparisons, Markov chain Monte Carlo (MCMC), Brownian motion, martingales, and renewal theory. Many new introductory problems and exercises have also been added. This book combines a rigorous, calculus-based development of theory with a more intuitive approach that appeals to readers' sense of reason and logic, an approach developed through the author's many years of classroom experience. The book begins with three chapters that develop probability theory and introduce the axioms of probability, random variables, and joint distributions. The next two chapters introduce limit theorems and simulation. Also included is a chapter on statistical inference with a focus on Bayesian statistics, which is an important, though often neglected, topic for undergraduate-level texts. Markov chains in discrete and continuous time are also discussed within the book. More than 400 examples are interspersed throughout to help illustrate concepts and theory and to assist readers in developing an intuitive sense of the subject. Readers will find many of the examples to be both entertaining and thought provoking. This is also true for the carefully selected problems that appear at the end of each chapter.
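Among the stochastic-process topics listed above, one standard Markov chain computation is finding the stationary distribution, which can be approximated by repeatedly applying the transition matrix (a generic two-state example, not one of the book's own):

```python
def stationary(P, iters=200):
    """Approximate the stationary distribution of a transition
    matrix P by repeatedly applying pi <- pi P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Two-state chain: leave state 0 w.p. 0.1, leave state 1 w.p. 0.5.
# Detailed balance gives pi = (5/6, 1/6).
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)
print([round(x, 4) for x in pi])  # [0.8333, 0.1667]
```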
A volume in Quantitative Methods in Education and the Behavioral Sciences: Issues, Research, and Teaching (Series Editor Ron Serlin, University of Wisconsin; sponsored by the Educational Statisticians SIG). Multilevel Modeling of Educational Data, co-edited by Ann A. O'Connell, Ed.D., and D. Betsy McCoach, Ph.D., is the next volume in the series Quantitative Methods in Education and the Behavioral Sciences: Issues, Research and Teaching (Information Age Publishing), sponsored by the Educational Statisticians' Special Interest Group (Ed-Stat SIG) of the American Educational Research Association. The use of multilevel analyses to examine effects of groups or contexts on individual outcomes has burgeoned over the past few decades. Multilevel modeling techniques allow educational researchers to more appropriately model data that occur within multiple hierarchies (i.e., the classroom, the school, and/or the district). Examples of multilevel research problems involving schools include establishing trajectories of academic achievement for children within diverse classrooms or schools, or studying the effects of school-level characteristics on the incidence of bullying. Multilevel models provide an improvement over traditional single-level approaches to working with clustered or hierarchical data; however, multilevel data present complex and interesting methodological challenges for the applied education research community. In keeping with the pedagogical focus for this book series, the papers in this volume emphasize applications of multilevel models using educational data, with chapter topics ranging from basic to advanced. This book represents a comprehensive and instructional resource text on multilevel modeling for quantitative researchers who plan to use multilevel techniques in their work, as well as for professors and students of quantitative methods courses focusing on multilevel analysis.
Through the contributions of experienced researchers and teachers of multilevel modeling, this volume provides an accessible and practical treatment of methods appropriate for use in a first and/or second course in multilevel analysis. A supporting website links chapter examples to actual data, creating an opportunity for readers to reinforce their knowledge through hands-on data analysis. This book serves as a guide for designing multilevel studies and applying multilevel modeling techniques in educational and behavioral research, thus contributing to a better understanding of and solution for the challenges posed by multilevel systems and data.
This is an introductory statistics book designed to provide scientists with the practical information needed to apply the most common statistical tests to laboratory research data. The book is designed to be practical and applicable, so only minimal information is devoted to theory or equations. Emphasis is placed on the underlying principles for effective data analysis and on surveying the statistical tests. It is of special value for scientists who have access to Minitab software. Examples are provided for all the statistical tests, and the interpretation of these results is explained using Minitab output (similar to the results of any common software package). The book is specifically designed to contribute to the AAPS series on advances in the pharmaceutical sciences. It benefits professional scientists or graduate students who have not had a formal statistics class, who had bad experiences in such classes, or who just fear/don't understand statistics. Chapter 1 focuses on terminology and essential elements of statistical testing. Statistics is often complicated by synonyms, and this chapter establishes the terms used in the book and how the rudiments interact to create statistical tests. Chapter 2 discusses descriptive statistics that are used to organize and summarize sample results. Chapter 3 discusses basic assumptions of probability, characteristics of a normal distribution, and alternative approaches for non-normal distributions, and introduces the topic of making inferences about a larger population based on a small sample from that population. Chapter 4 discusses hypothesis testing, where computer output is interpreted and decisions are made regarding statistical significance. This chapter also deals with the determination of appropriate sample sizes. The next three chapters focus on tests that make decisions about a population based on a small subset of information. Chapter 5 looks at statistical tests that evaluate whether a significant difference exists. In Chapter 6 the tests try to determine the extent and importance of relationships. In contrast to the fifth chapter, Chapter 7 presents tests that evaluate equivalence, not difference, between the levels being tested. The last chapter deals with potential outlier or aberrant values and how to statistically determine if they should be removed from the sample data. Each statistical test presented includes an example problem with the resultant software output and an explanation of how to interpret the results. Minimal time is spent on the mathematical calculations or theory. For those interested in the associated equations, supplemental figures are presented for each test with the respective formulas. In addition, Appendix D presents the equations and proof for every output result for the various examples. Examples and results from the appropriate statistical tests are displayed using Minitab 18. In addition to the results, the required steps to analyze data using Minitab are presented with the examples for those having access to this software. Numerous other software packages are available, including data analysis with Excel.
This undergraduate text distils the wisdom of an experienced
teacher and yields, to the mutual advantage of students and their
instructors, a sound and stimulating introduction to probability
theory. The accent is on its essential role in statistical theory
and practice, built on the use of illustrative examples and the
solution of problems from typical examination papers.
Mathematically-friendly for first and second year undergraduate
students, the book is also a reference source for workers in a wide
range of disciplines who are aware that even the simpler aspects of
probability theory are not simple.
Providing a practical introduction to state space methods as applied to unobserved components time series models, also known as structural time series models, this book introduces time series analysis using state space methodology to readers who are familiar neither with time series analysis nor with state space methods. The only background required in order to understand the material presented in the book is a basic knowledge of classical linear regression models, of which a brief review is provided to refresh the reader's knowledge. A few sections also assume familiarity with matrix algebra; however, these sections may be skipped without losing the flow of the exposition.
This book is specially designed to refresh and elevate the level of understanding of the foundational background in probability and distributional theory required to be successful in a graduate-level statistics program. Advanced undergraduate students and introductory graduate students from a variety of quantitative backgrounds will benefit from the transitional bridge that this volume offers, from a more generalized study of undergraduate mathematics and statistics to the career-focused, applied education at the graduate level. In particular, it focuses on growing fields that will be of potential interest to future M.S. and Ph.D. students, as well as advanced undergraduates heading directly into the workplace: data analytics, statistics and biostatistics, and related areas.
New Edition of a Classic Guide to Statistical Applications in the Biomedical Sciences. In the last decade, there have been significant changes in the way statistics is incorporated into biostatistical, medical, and public health research. Addressing the need for a modernized treatment of these statistical applications, Basic Statistics, Fourth Edition presents relevant, up-to-date coverage of research methodology using careful explanations of basic statistics and how they are used to address practical problems that arise in the medical and public health settings. Through concise and easy-to-follow presentations, readers will learn to interpret and examine data by applying common statistical tools, such as sampling, random assignment, and survival analysis. Continuing the tradition of its predecessor, this new edition outlines a thorough discussion of different kinds of studies and guides readers through the important, related decision-making processes such as determining what information is needed and planning the collection process. The book equips readers with the knowledge to carry out these practices by explaining the various types of studies that are commonly conducted in the fields of medicine and public health, and how the level of evidence varies depending on the area of research. Data screening and data entry into statistical programs are explained and accompanied by illustrations of statistical analyses and graphs.
Additional features of the Fourth Edition include:
* A new chapter on data collection that outlines the initial steps in planning biomedical and public health studies
* A new chapter on nonparametric statistics that includes a discussion and application of the Sign test, the Wilcoxon Signed Rank test, and the Wilcoxon Rank Sum test and its relationship to the Mann-Whitney U test
* An updated introduction to survival analysis that includes the Kaplan-Meier method for graphing the survival function and a brief introduction to tests for comparing survival functions
* Incorporation of modern statistical software, such as SAS, Stata, SPSS, and Minitab, into the presented discussion of data analysis
* Updated references at the end of each chapter
Basic Statistics, Fourth Edition is an ideal book for courses on biostatistics, medicine, and public health at the upper-undergraduate and graduate levels. It is also appropriate as a reference for researchers and practitioners who would like to refresh their fundamental understanding of statistical techniques.
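The Sign test covered in the nonparametric chapter reduces to a binomial calculation on the signs of the paired differences. A brief sketch (the example counts are invented here, not taken from the book):

```python
from math import comb

def sign_test_p(pos, neg):
    """Two-sided Sign test p-value: under H0 the signs are
    Binomial(n, 0.5); ties are discarded before calling this."""
    n = pos + neg
    k = min(pos, neg)
    # Probability of a result at least this extreme in one tail, doubled.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)

# 9 of 10 paired differences positive, 1 negative:
print(sign_test_p(9, 1))  # 0.021484375
```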
Starting with the basic linear model where the design and covariance matrices are of full rank, this book demonstrates how the same statistical ideas can be used to explore the more general linear model with rank-deficient design and/or covariance matrices. The unified treatment presented here provides a clearer understanding of the general linear model from a statistical perspective, thus avoiding the complex matrix-algebraic arguments that are often used in the rank-deficient case. Elegant geometric arguments are used as needed. The book has a very broad coverage, from illustrative practical examples in Regression and Analysis of Variance alongside their implementation using R, to providing comprehensive theory of the general linear model with 181 worked-out examples, 227 exercises with solutions, 152 exercises without solutions (so that they may be used as assignments in a course), and 320 up-to-date references. This completely updated and new edition of Linear Models: An Integrated Approach includes the following features: