The contributions in this book demonstrate the complementary, rather than competitive, relationship between probability theory and fuzzy set theory, and show how suitable combinations of the two can solve real-life problems.
In recent years portfolio optimization and construction methodologies have become an increasingly critical ingredient of asset and fund management, while at the same time portfolio risk assessment has become essential to risk management. This trend will only accelerate in the coming years. This practical handbook fills the gap between current university instruction and current industry practice. It provides a comprehensive, computationally oriented treatment of modern portfolio optimization and construction methods using the powerful NUOPT for S-PLUS optimizer.
Recent advances in the understanding of star formation and evolution have been impressive, and aspects of that knowledge are explored in this volume. The black hole stellar endpoints are studied and geodesic motion is explored. The emission of gravitational waves is featured due to their very recent experimental discovery. The second aspect of the text is space exploration, which began 62 years ago with the Sputnik Earth satellite and was followed by the landing on the Moon just 50 years ago. Since then Mars has been explored remotely, the outer planets have been visited by flybys, and probes have escaped the solar system. The text explores many aspects of rocket travel. Finally, possibilities for interstellar travel are discussed. All these topics are treated in a unified way using the Matlab App to combine text, figures, formulae, and numeric input and output. In this way the reader may vary parameters and see the results in real time. That experience aids in building up an intuitive feel for the many specific problems given in this text.
An authoritative, up-to-date graduate textbook on machine learning that highlights its historical context and societal impacts. Patterns, Predictions, and Actions introduces graduate students to the essentials of machine learning while offering invaluable perspective on its history and social implications. Beginning with the foundations of decision making, Moritz Hardt and Benjamin Recht explain how representation, optimization, and generalization are the constituents of supervised learning. They go on to provide self-contained discussions of causality, the practice of causal inference, sequential decision making, and reinforcement learning, equipping readers with the concepts and tools they need to assess the consequences that may arise from acting on statistical decisions.
- Provides a modern introduction to machine learning, showing how data patterns support predictions and consequential actions
- Pays special attention to societal impacts and fairness in decision making
- Traces the development of machine learning from its origins to today
- Features a novel chapter on machine learning benchmarks and datasets
- Invites readers from all backgrounds, requiring some experience with probability, calculus, and linear algebra
- An essential textbook for students and a guide for researchers
While theoretical statistics relies primarily on mathematics and hypothetical situations, statistical practice is a translation of a question formulated by a researcher into a series of variables linked by a statistical tool. As with written material, there are almost always differences between the meaning of the original text and translated text. Additionally, many versions can be suggested, each with their advantages and disadvantages. Analysis of Questionnaire Data with R translates certain classic research questions into statistical formulations. As indicated in the title, the syntax of these statistical formulations is based on the well-known R language, chosen for its popularity, simplicity, and power of its structure. Although syntax is vital, understanding the semantics is the real challenge of any good translation. In this book, the semantics of theoretical-to-practical translation emerges progressively from examples and experience, and occasionally from mathematical considerations. Sometimes the interpretation of a result is not clear, and there is no statistical tool really suited to the question at hand. Sometimes data sets contain errors, inconsistencies between answers, or missing data. More often, available statistical tools are not formally appropriate for the given situation, making it difficult to assess to what extent this slight inadequacy affects the interpretation of results. Analysis of Questionnaire Data with R tackles these and other common challenges in the practice of statistics.
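To make the idea of "translation" concrete, here is a minimal sketch in R of turning a research question into a statistical formulation. The data frame quest, its variables, and the question itself are hypothetical and are not taken from the book.

```r
# Hypothetical questionnaire data: 200 respondents, two categorical answers.
set.seed(1)
quest <- data.frame(
  sex    = sample(c("female", "male"), 200, replace = TRUE),
  smoker = sample(c("no", "yes"), 200, replace = TRUE, prob = c(0.7, 0.3))
)

# "Is reported smoking associated with sex?" translated into a cross-tabulation and a chi-square test.
tab <- table(quest$sex, quest$smoker)
tab
chisq.test(tab)

# The same question translated into a logistic regression instead.
fit <- glm(I(smoker == "yes") ~ sex, data = quest, family = binomial)
exp(coef(fit))  # exponentiated coefficients; the 'sexmale' entry is the male-vs-female odds ratio
```

Either formulation answers the same question, but with different semantics (a p-value for association versus an odds ratio with a baseline), which is exactly the kind of trade-off the book discusses.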
Practical Statistical Methods: A SAS Programming Approach presents a broad spectrum of statistical methods useful for researchers without an extensive statistical background. In addition to nonparametric methods, it covers methods for discrete and continuous data. Omitting mathematical details and complicated formulae, the text provides SAS programs to carry out the necessary analyses and draw appropriate inferences for common statistical problems. After introducing fundamental statistical concepts, the author describes methods used for quantitative data and continuous data following normal and nonnormal distributions. She then focuses on regression methodology, highlighting simple linear regression, logistic regression, and the proportional hazards model. The final chapter briefly discusses such miscellaneous topics as propensity scores, misclassification errors, interim analysis, conditional power, bootstrap, and jackknife. With SAS code and output integrated throughout, this book shows how to interpret data using SAS and illustrates the many statistical methods available for tackling problems in a range of fields, including the pharmaceutical industry and the social sciences.
This book reviews some of today's more complex problems, and reflects some of the important research directions in the field. Twenty-nine authors, largely from Montreal's GERAD Multi-University Research Center and working in theoretical statistics, applied statistics, probability theory, and stochastic processes, present survey chapters on various theoretical and applied problems of importance and interest to researchers and students across a number of academic domains.
This book provides quick access to computational tools for algebraic geometry, the mathematical discipline which handles solution sets of polynomial equations. Originating from a number of intense one-week schools taught by the authors, the text is designed to provide a step-by-step introduction which enables the reader to get started with his own computational experiments right away. The authors present the basic concepts and ideas in a compact way.
This open access book provides insights into the novel Locally Refined B-spline (LR B-spline) surface format, which is suited for representing terrain and seabed data in a compact way. It provides an alternative to the well-known raster and triangulated surface representations. An LR B-spline surface has an overall smooth behavior and allows the modeling of local details with only a limited growth in data volume. In regions where many data points belong to the same smooth area, LR B-splines allow a very lean representation of the shape by locally adapting the resolution of the spline space to the size and local shape variations of the region. The iterative method can be modified to improve the accuracy in particular domains of a point cloud. The use of statistical information criteria can help determine the optimal threshold, the number of iterations to perform, as well as some parameters of the underlying mathematical functions (degree of the splines, parameter representation). The resulting surfaces are well suited for analysis and for computing secondary information such as contour curves and minimum and maximum points. Deformation analysis is another potential application of fitting point clouds with LR B-splines.
Clustering is one of the most fundamental and essential data analysis techniques. Clustering can be used as an independent data mining task to discern intrinsic characteristics of data, or as a preprocessing step with the clustering results then used for classification, correlation analysis, or anomaly detection. Kogan and his co-editors have put together recent advances in clustering large and high-dimensional data. Their volume addresses new topics and methods which are central to modern data analysis, with particular emphasis on linear algebra tools, optimization methods and statistical techniques. The contributions, written by leading researchers from both academia and industry, cover theoretical basics as well as application and evaluation of algorithms, and thus provide an excellent state-of-the-art overview. The level of detail, the breadth of coverage, and the comprehensive bibliography make this book a perfect fit for researchers and graduate students in data mining and in many other important related application areas.
This innovative textbook presents material for a course on modern statistics that incorporates Python as a pedagogical and practical resource. Drawing on many years of teaching and conducting research in various applied and industrial settings, the authors have carefully tailored the text to provide an ideal balance of theory and practical applications. Numerous examples and case studies are incorporated throughout, and comprehensive Python applications are illustrated in detail. A custom Python package is available for download, allowing students to reproduce these examples and explore others. The first chapters of the text focus on analyzing variability, probability models, and distribution functions. Next, the authors introduce statistical inference and bootstrapping, variability in several dimensions, and regression models. The text then goes on to cover sampling for estimation of finite population quantities and time series analysis and prediction, concluding with two chapters on modern data analytic methods. Each chapter includes exercises, data sets, and applications to supplement learning. Modern Statistics: A Computer-Based Approach with Python is intended for a one- or two-semester advanced undergraduate or graduate course. Because of the foundational nature of the text, it can be combined with any program requiring data analysis in its curriculum, such as courses on data science, industrial statistics, physical and social sciences, and engineering. Researchers, practitioners, and data scientists will also find it to be a useful resource with the numerous applications and case studies that are included. A second, closely related textbook is titled Industrial Statistics: A Computer-Based Approach with Python. It covers topics such as statistical process control (including multivariate methods), the design of experiments (including computer experiments), and reliability methods (including Bayesian reliability). These texts can be used independently or for consecutive courses. The mistat Python package can be accessed at https://gedeck.github.io/mistat-code-solutions/ModernStatistics/ "In this book on Modern Statistics, the last two chapters on modern analytic methods contain what is very popular at the moment, especially in Machine Learning, such as classifiers, clustering methods and text analytics. But I also appreciate the previous chapters since I believe that people using machine learning methods should be aware that they rely heavily on statistical ones. I very much appreciate the many worked out cases, based on the longstanding experience of the authors. They are very useful to better understand, and then apply, the methods presented in the book. The use of Python corresponds to the best programming experience nowadays. For all these reasons, I think the book has also a brilliant and impactful future and I commend the authors for that." Professor Fabrizio Ruggeri, Research Director at the National Research Council, Italy; President of the International Society for Business and Industrial Statistics (ISBIS); Editor-in-Chief of Applied Stochastic Models in Business and Industry (ASMBI)
Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. The book starts with OLS regression and generalized linear models, building to two-parameter maximum likelihood models for both pooled and panel models. It then covers a random effects model estimated using the EM algorithm and concludes with a Bayesian Poisson model using Metropolis-Hastings sampling. The book's coverage is innovative in several ways. First, the authors use executable computer code to present and connect the theoretical content. Therefore, code is written for clarity of exposition rather than stability or speed of execution. Second, the book focuses on the performance of statistical estimation and downplays algebraic niceties. In both senses, this book is written for people who wish to fit statistical models and understand them.
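The blurb above mentions maximum likelihood and iteratively reweighted least squares (IRLS). The sketch below is not the book's code; it is a minimal, generic IRLS loop for a Poisson log-linear model in base R, with glm() used only as a check. The simulated data and coefficient values are illustrative assumptions.

```r
# Minimal IRLS for a Poisson log-linear model (illustrative, not the book's implementation).
set.seed(2)
n <- 500
x <- rnorm(n)
y <- rpois(n, lambda = exp(0.5 + 0.8 * x))
X <- cbind(1, x)

beta <- rep(0, ncol(X))                          # starting values
for (i in 1:25) {
  eta <- as.vector(X %*% beta)                   # linear predictor
  mu  <- exp(eta)                                # inverse of the log link
  W   <- mu                                      # Poisson working weights
  z   <- eta + (y - mu) / mu                     # working response
  beta <- solve(t(X) %*% (W * X), t(X) %*% (W * z))   # weighted least squares step
}

# Compare the hand-rolled estimates with R's built-in fitter.
cbind(irls = as.vector(beta), glm = coef(glm(y ~ x, family = poisson)))
```

The two columns should agree to several decimal places, which is the kind of "code as exposition" check the book's approach emphasises.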
In the history of mathematics there are many situations in which calculations were performed incorrectly for important practical applications. Let us look at some examples. The history of computing the number π began in Egypt and Babylon about 2000 years BC; since then many mathematicians have calculated π (e.g., Archimedes, Ptolemy, Viète, etc.). The first formula for computing decimal digits of π was discovered by J. Machin (in 1706), who was the first to correctly compute 100 digits of π. Then many people used his method; e.g., W. Shanks calculated π to 707 digits (over 15 years), although due to mistakes only the first 527 were correct. As further examples, we can mention the history of computing the fine-structure constant α (first discovered by A. Sommerfeld), and the mathematical tables, exact solutions, and formulas published in many mathematical textbooks that were not verified rigorously [25]. These errors could have a large effect on results obtained by engineers. But sometimes the solution of such problems required technology that was not available at the time. In modern mathematics there exist computers that can perform various mathematical operations which humans cannot. Therefore computers can be used to verify the results obtained by humans, to discover new results, and to improve the results that a human can obtain without any technology. With respect to our example of computing π, we can mention that recently (in 2002) Y. Kanada, Y. Ushiro, H. Kuroda, and M.
State space models have gained tremendous popularity in recent years in fields as disparate as engineering, economics, genetics and ecology. After a detailed introduction to general state space models, this book focuses on dynamic linear models, emphasizing their Bayesian analysis. Whenever possible it is shown how to compute estimates and forecasts in closed form; for more complex models, simulation techniques are used. A final chapter covers modern sequential Monte Carlo algorithms. The book illustrates all the fundamental steps needed to use dynamic linear models in practice, using R. Many detailed examples based on real data sets are provided to show how to set up a specific model, estimate its parameters, and use it for forecasting. All the code used in the book is available online. No prior knowledge of Bayesian statistics or time series analysis is required, although familiarity with basic statistics and R is assumed.
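As a flavour of the closed-form recursions mentioned above, here is a hand-written Kalman filter for the simplest dynamic linear model, a local level model, in base R. The model, the variances, and the simulated series are illustrative assumptions and are not taken from the book.

```r
# Local level model: y[t] = theta[t] + v,  theta[t] = theta[t-1] + w.
set.seed(3)
n <- 100
theta <- cumsum(rnorm(n, sd = 0.5))          # latent level (random walk)
y <- theta + rnorm(n, sd = 1)                # noisy observations

V <- 1; W <- 0.25                            # observation and state variances (assumed known)
m <- numeric(n); C <- numeric(n)
m_prev <- 0; C_prev <- 100                   # vague prior on the initial level

for (t in 1:n) {
  R <- C_prev + W                            # prior variance of the level at time t
  Q <- R + V                                 # one-step forecast variance
  K <- R / Q                                 # Kalman gain
  m[t] <- m_prev + K * (y[t] - m_prev)       # filtered mean
  C[t] <- (1 - K) * R                        # filtered variance
  m_prev <- m[t]; C_prev <- C[t]
}

plot(y, col = "grey", main = "Local level model: filtered mean")
lines(m, col = "blue", lwd = 2)
```

With unknown variances one would estimate V and W (e.g. by maximum likelihood) or simulate them, which is where the Bayesian and Monte Carlo machinery described in the blurb comes in.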
This book provides hands-on guidance for researchers and practitioners in criminal justice and criminology to perform statistical analyses and data visualization in the free and open-source software R. It offers a step-by-step guide for beginners to become familiar with the RStudio platform and tidyverse set of packages. This volume will help users master the fundamentals of the R programming language, providing tutorials in each chapter that lay out research questions and hypotheses centering around a real criminal justice dataset, such as data from the National Survey on Drug Use and Health, National Crime Victimization Survey, Youth Risk Behavior Surveillance System, The Monitoring the Future Study, and The National Youth Survey. Users will also learn how to manipulate common sources of agency data, such as calls-for-service (CFS) data. The end of each chapter includes exercises that reinforce the R tutorial examples, designed to help master the software as well as to provide practice on statistical concepts, data analysis, and interpretation of results. The text can be used as a stand-alone guide to learning R or it can be used as a companion guide to an introductory statistics textbook, such as Basic Statistics in Criminal Justice (2020).
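As a taste of the tidyverse workflow the book teaches, here is a short sketch on a made-up calls-for-service (CFS) table; the column names and values are hypothetical and do not come from any of the datasets listed above.

```r
library(dplyr)

# Hypothetical calls-for-service (CFS) records.
cfs <- tibble::tibble(
  call_type = c("noise", "burglary", "assault", "noise", "burglary", "noise"),
  district  = c("North", "North", "South", "South", "South", "North"),
  minutes_to_dispatch = c(12, 5, 3, 20, 7, 15)
)

# Typical tidyverse steps: filter, group, summarise, sort.
cfs %>%
  filter(call_type != "noise") %>%
  group_by(district) %>%
  summarise(
    calls = n(),
    mean_dispatch = mean(minutes_to_dispatch),
    .groups = "drop"
  ) %>%
  arrange(desc(calls))
```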
The information contained in this book has served as the basis for a graduate-level biostatistics class at the University of North Carolina at Chapel Hill. The book focuses on General Linear Model (GLM) theory, stated in matrix terms, which provides a more compact, clear, and unified presentation of regression and ANOVA than do traditional sums of squares and scalar equations. The book contains a balanced treatment of regression and ANOVA yet is very compact. Reflecting current computational practice, most sums of squares formulas and associated theory, especially in ANOVA, are not included. The text contains almost no proofs, despite the presence of a large number of basic theoretical results. Many numerical examples are provided, and include both the SAS code and the equivalent mathematical representation needed to produce the outputs that are presented. All exercises involve only “real” data, collected in the course of scientific research. The book is divided into sections covering the following topics:
R for College Mathematics and Statistics encourages the use of R in mathematics and statistics courses. Instructors are no longer limited to "nice" functions in calculus classes. They can require reports and homework with graphs. They can do simulations and experiments. R can be useful for student projects, for creating graphics for teaching, as well as for scholarly work. This book presents ways R, which is freely available, can enhance the teaching of mathematics and statistics. R has the potential to help students learn mathematics due to the need for precision, understanding of symbols and functions, and the logical nature of code. Moreover, the text provides students the opportunity for experimenting with concepts in any mathematics course. Features:
- Does not require previous experience with R
- Promotes the use of R in typical mathematics and statistics course work
- Organized by mathematics topics
- Utilizes an example-based approach
- Chapters are largely independent of each other
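As a small illustration of the classroom uses described above (graphing a function that is not "nice" and running a quick simulation), here is a base-R sketch; it is an assumed example in the spirit of the book, not an excerpt from it.

```r
# Graph a function that would be awkward to sketch by hand.
curve(sin(1 / x), from = 0.01, to = 1, n = 2000,
      main = expression(sin(1 / x)), ylab = "y")

# A quick Monte Carlo experiment: estimate P(sum of two dice == 7).
set.seed(4)
rolls <- replicate(10000, sum(sample(1:6, 2, replace = TRUE)))
mean(rolls == 7)        # simulated probability, close to 1/6
```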
The R language provides a rich environment for working with data, especially data to be used for statistical modeling or graphics. Coupled with the large variety of easily available packages, it allows access to both well-established and experimental statistical techniques. However, techniques that might make sense in other languages are often very inefficient in R, although, due to R's flexibility, it is often possible to implement them anyway. Generally, the problem with such techniques is that they do not scale properly; that is, as the problem size grows, the methods slow down at a rate that might be unexpected. The goal of this book is to present a wide variety of data manipulation techniques implemented in R to take advantage of the way that R works, rather than directly resembling methods used in other languages. Since this requires a basic notion of how R stores data, the first chapter of the book is devoted to the fundamentals of data in R. The material in this chapter is a prerequisite for understanding the ideas introduced in later chapters. Since one of the first tasks in any project involving data and R is getting the data into R in a way that it will be usable, Chapter 2 covers reading data from a variety of sources (text files, spreadsheets, files from other programs, etc.), as well as saving R objects both in native form and in formats that other programs will be able to work with.
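To make the scaling point concrete, here is a brief comparison (not taken from the book) of a loop that grows a vector, which is natural in many other languages, against the equivalent vectorized computation. Exact timings depend on the machine, but the loop slows disproportionately as the input grows.

```r
# Growing a vector inside a loop: the "natural" approach in many languages.
slow_cumsum <- function(x) {
  out <- numeric(0)
  for (i in seq_along(x)) out <- c(out, sum(x[1:i]))   # repeated copying and re-summing
  out
}

x <- rnorm(1e4)
system.time(s1 <- slow_cumsum(x))   # noticeably slow, and increasingly so as x grows
system.time(s2 <- cumsum(x))        # vectorized built-in: effectively instant
all.equal(s1, s2)                   # same answer, vastly different cost
```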
The R Companion to Elementary Applied Statistics includes traditional applications covered in elementary statistics courses as well as some additional methods that address questions that might arise during or after the application of commonly used methods. Beginning with basic tasks and computations with R, readers are then guided through ways to bring data into R, manipulate the data as needed, perform common statistical computations and elementary exploratory data analysis tasks, prepare customized graphics, and take advantage of R for a wide range of methods that find use in many elementary applications of statistics. Features:
- Requires no familiarity with R or programming to begin using this book.
- Can be used as a resource for a project-based elementary applied statistics course, or for researchers and professionals who wish to delve more deeply into R.
- Contains an extensive array of examples that illustrate ideas on various ways to use pre-packaged routines, as well as on developing individualized code.
- Presents quite a few methods that may be considered non-traditional, or advanced.
- Includes accompanying carefully documented script files that contain code for all examples presented, and more.
R is a powerful and free product that is gaining popularity across the scientific community in both the professional and academic arenas. Statistical methods discussed in this book are used to introduce the fundamentals of using R functions and provide ideas for developing further skills in writing R code. These ideas are illustrated through an extensive collection of examples. About the Author: Christopher Hay-Jahans received his Doctor of Arts in mathematics from Idaho State University in 1999. After spending three years at University of South Dakota, he moved to Juneau, Alaska, in 2002, where he has taught a wide range of undergraduate courses at University of Alaska Southeast.
In biological research, the amount of data available to researchers has increased so much over recent years that it is becoming increasingly difficult to understand the current state of the art without some experience and understanding of data analytics and bioinformatics. An Introduction to Bioinformatics with R: A Practical Guide for Biologists leads the reader through the basics of computational analysis of data encountered in modern biological research. With no previous experience with statistics or programming required, readers will develop the ability to plan suitable analyses of biological datasets, and to use the R programming environment to perform these analyses. This is achieved through a series of case studies using R to answer research questions using molecular biology datasets. Broadly applicable statistical methods are explained, including linear and rank-based correlation, distance metrics and hierarchical clustering, hypothesis testing using linear regression, proportional hazards regression for survival data, and principal component analysis. These methods are then applied as appropriate throughout the case studies, illustrating how they can be used to answer research questions. Key Features:
- Provides a practical course in computational data analysis suitable for students or researchers with no previous exposure to computer programming.
- Describes in detail the theoretical basis for statistical analysis techniques used throughout the textbook, from basic principles.
- Presents walk-throughs of data analysis tasks using R and example datasets. All R commands are presented and explained in order to enable the reader to carry out these tasks themselves.
- Uses outputs from a large range of molecular biology platforms including DNA methylation and genotyping microarrays; RNA-seq, genome sequencing, ChIP-seq and bisulphite sequencing; and high-throughput phenotypic screens.
- Gives worked-out examples geared towards problems encountered in cancer research, which can also be applied across many areas of molecular biology and medical research.
This book has been developed over years of training biological scientists and clinicians to analyse the large datasets available in their cancer research projects. It is appropriate for use as a textbook or as a practical book for biological scientists looking to gain bioinformatics skills.
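Several of the methods named above (distance metrics, hierarchical clustering, principal component analysis) are available in base R. The sketch below applies them to a simulated expression-like matrix; it is an illustration only, not one of the book's case studies, and the data are made up.

```r
# Simulated "expression" matrix: 20 samples x 50 features, two sample groups.
set.seed(5)
grp  <- rep(c("A", "B"), each = 10)
expr <- matrix(rnorm(20 * 50), nrow = 20)
expr[grp == "B", 1:10] <- expr[grp == "B", 1:10] + 2   # group effect on 10 features
rownames(expr) <- paste0(grp, 1:10)

# Hierarchical clustering on Euclidean distances between samples.
hc <- hclust(dist(expr), method = "average")
plot(hc, main = "Hierarchical clustering of samples")

# Principal component analysis; the first components should separate the groups.
pc <- prcomp(expr, scale. = TRUE)
plot(pc$x[, 1], pc$x[, 2], col = factor(grp), pch = 19,
     xlab = "PC1", ylab = "PC2")
```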
Praise for the first edition: "One of my biggest complaints when I teach introductory statistics classes is that it takes me most of the semester to get to the good stuff: inferential statistics. The author manages to do this very quickly... if one were looking for a book that efficiently covers basic statistical methodology and also introduces statistical software [this text] fits the bill." -The American Statistician
Applied Statistical Inference with MINITAB, Second Edition distinguishes itself from other introductory statistics textbooks by focusing on the applications of statistics without compromising mathematical rigor. It presents the material in a seamless step-by-step approach so that readers are first introduced to a topic, given the details of the underlying mathematical foundations along with a detailed description of how to interpret the findings, and are shown how to use the statistical software program Minitab to perform the same analysis.
- Gives readers a solid foundation in how to apply many different statistical methods.
- MINITAB is fully integrated throughout the text.
- Includes fully worked out examples so students can easily follow the calculations.
- Presents many new topics such as one- and two-sample variances, one- and two-sample Poisson rates, and more nonparametric statistics.
- Features mostly new exercises as well as the addition of Best Practices sections that describe some common pitfalls and provide some practical advice on statistical inference.
This book is written to be user-friendly for students and practitioners who are not experts in statistics, but who want to gain a solid understanding of basic statistical inference. This book is oriented towards the practical use of statistics. The examples, discussions, and exercises are based on data and scenarios that are common to students in their everyday lives.
The OMDoc (Open Mathematical Documents) format is a content markup scheme for collections of mathematical documents, including articles, textbooks, interactive books, and courses. OMDoc also serves as the content language for agent communication of mathematical services and a mathematical software bus. This documentation describes version 1.2 of the OMDoc system, the final and mature release of OMDoc 1. The system features a modularized language design, uses OpenMath and MathML for the representation of mathematical objects, and has been employed and validated in various applications. Besides a complete and rigorous specification of the OMDoc document format, this book presents an OMDoc primer with paradigmatic examples for many kinds of mathematical documents. Furthermore, various applications, projects, and tool support for OMDoc are discussed. The book will become essential reading for all working mathematicians and mathematics students aspiring to take part in the new worlds of shared mathematical knowledge.
This book constitutes the refereed proceedings of the Second International Congress on Mathematical Software, ICMS 2006, held in Castro Urdiales, Spain in September 2006. The 45 revised full papers presented were carefully reviewed and selected for presentation. The papers are organized in topical sections on new developments on computer algebra packages, interfacing computer algebra on mathematical visualization, software for algebraic geometry and related topics, number-theoretical software, methods in computational number theory, free software for computer algebra, software for optimization on geometric computation, methods and software for computing mathematical functions, access to mathematics on the Web, and general issues.
International Association for Statistical Computing: The International Association for Statistical Computing (IASC) is a Section of the International Statistical Institute. The objectives of the Association are to foster world-wide interest in effective statistical computing and to exchange technical knowledge through international contacts and meetings between statisticians, computing professionals, organizations, institutions, governments and the general public. The IASC organises its own conferences, the IASC World Conferences, and COMPSTAT in Europe. The 17th Conference of ERS-IASC, the biennial meeting of the European Regional Section of the IASC, was held in Rome August 28 - September 1, 2006. This conference took place in Rome exactly 20 years after the 7th COMPSTAT symposium, which was held in Rome in 1986. Previous COMPSTAT conferences were held in: Vienna (Austria, 1974); West Berlin (Germany, 1976); Leiden (The Netherlands, 1978); Edinburgh (UK, 1980); Toulouse (France, 1982); Prague (Czechoslovakia, 1984); Rome (Italy, 1986); Copenhagen (Denmark, 1988); Dubrovnik (Yugoslavia, 1990); Neuchâtel (Switzerland, 1992); Vienna (Austria, 1994); Barcelona (Spain, 1996); Bristol (UK, 1998); Utrecht (The Netherlands, 2000); Berlin (Germany, 2002); Prague (Czech Republic, 2004).
This third edition of Braun and Murdoch's bestselling textbook now includes discussion of the use and design principles of the tidyverse packages in R, including expanded coverage of ggplot2 and R Markdown. The expanded simulation chapter introduces the Box-Muller and Metropolis-Hastings algorithms. New examples and exercises have been added throughout. This is the only introduction you'll need to start programming in R, the computing standard for analyzing data. This book comes with real R code that teaches the standards of the language. Unlike other introductory books on the R system, this book emphasizes portable programming skills that apply to most computing languages and techniques used to develop more complex projects. Solutions, datasets, and any errata are available from www.statprogr.science. Worked examples from real applications, hundreds of exercises, and downloadable code, datasets, and solutions make a complete package for anyone working in or learning practical data science.
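The blurb mentions the Box-Muller algorithm from the expanded simulation chapter. Below is a minimal, generic base-R implementation (not the book's code), checked informally against summary statistics and a normal Q-Q plot.

```r
# Box-Muller: turn pairs of Uniform(0,1) draws into independent standard normals.
box_muller <- function(n) {
  m  <- ceiling(n / 2)
  u1 <- runif(m); u2 <- runif(m)
  r  <- sqrt(-2 * log(u1))
  z  <- c(r * cos(2 * pi * u2), r * sin(2 * pi * u2))
  z[1:n]
}

set.seed(6)
z <- box_muller(10000)
c(mean = mean(z), sd = sd(z))          # approximately 0 and 1
qqnorm(z); qqline(z)                   # visual check against the normal distribution
```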