Mathematical & statistical software
Chunyan Li is a course instructor with many years of experience in teaching time series analysis. His book is essential for students and researchers in oceanography and other Earth science disciplines who are looking for complete coverage of the theory and practice of time series data analysis using MATLAB. This textbook covers the topic's core theory in depth and provides numerous instructional examples, many drawn directly from the author's own teaching experience, using data files, examples, and exercises. The book explores many concepts, including time; distance on Earth; wind, current, and wave data formats; finding a subset of ship-based data along planned or random transects; error propagation; Taylor series expansion for error estimates; the least squares method; base functions and linear independence of base functions; tidal harmonic analysis; Fourier series and the generalized Fourier transform; filtering techniques; sampling theorems; finite sampling effects; wavelet analysis; and EOF analysis.
The most crucial ability for machine learning and data science is the mathematical reasoning needed to grasp their essence, rather than reliance on accumulated knowledge or experience. This textbook addresses the fundamentals of kernel methods for machine learning by considering relevant math problems and building R programs. The book's main features are as follows: the content is written in an easy-to-follow and self-contained style; the book includes 100 exercises, which have been carefully selected and refined, and since their solutions are provided in the main text, readers can solve all of the exercises by reading the book; the mathematical premises of kernels are proven and the correct conclusions are provided, helping readers to understand the nature of kernels; and source programs and running examples are presented to help readers acquire a deeper understanding of the mathematics used. Once readers have a basic understanding of the functional analysis topics covered in Chapter 2, the applications are discussed in the subsequent chapters; beyond that, no prior knowledge of mathematics is assumed. The book considers both the kernel of a reproducing kernel Hilbert space (RKHS) and the kernel of a Gaussian process, and a clear distinction is made between the two.
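As a minimal sketch of the kernel idea described above, the following R code fits a kernel ridge regression with a Gaussian (RBF) kernel on toy data; the data, the bandwidth, and the helper name rbf_kernel are illustrative assumptions, not code from the book.

```r
## Kernel ridge regression with a Gaussian (RBF) kernel on toy data
## (illustrative sketch; not a program taken from the book).
set.seed(1)
x <- seq(0, 1, length.out = 50)
y <- sin(2 * pi * x) + rnorm(50, sd = 0.2)

rbf_kernel <- function(a, b, sigma = 0.1) {
  exp(-outer(a, b, function(u, v) (u - v)^2) / (2 * sigma^2))
}

K      <- rbf_kernel(x, x)                        # Gram matrix
lambda <- 0.01                                    # ridge penalty
alpha  <- solve(K + lambda * diag(length(x)), y)  # dual coefficients
y_hat  <- K %*% alpha                             # fitted values in the RKHS
```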
This book discusses the development of the Rosenbrock-Wanner methods from the origins of the idea to current research, with the stable and efficient numerical solution of stiff ordinary differential equations and differential-algebraic systems of equations remaining the focus. The reader gets a comprehensive insight into the classical methods as well as into the development and properties of novel W-methods and two-step and exponential Rosenbrock methods. In addition, descriptive applications from the fields of water and hydrogen network simulation and visual computing are presented.
R, an open-source software environment, has become the de facto standard for statistical computing. It has an excellent collection of data manipulation and graphics capabilities. It is extensible and comes with a large number of packages that allow statistical analysis at all levels - from simple to advanced - and in numerous fields including Medicine, Genetics, Biology, Environmental Sciences, Geology, Social Sciences and much more. The software is maintained and developed by academics and professionals and, as such, is continuously evolving and up to date. "Statistics and Data with R" presents an accessible guide to data manipulation, statistical analysis and graphics using R. Assuming no previous knowledge of statistics or R, the book includes: a comprehensive introduction to the R language; an integrated approach to importing and preparing data for analysis, exploring and analyzing the data, and presenting results; over 300 examples, including detailed explanations of the R scripts used throughout; over 100 moderately large data sets from disciplines ranging from Biology, Ecology and Environmental Science to Medicine, Law, Military and Social Sciences; a parallel discussion of analyses with the normal density, proportions (binomial), counts (Poisson) and bootstrap methods; and two extensive indexes that reference every R function (and its arguments) and package used in the book, as well as every concept introduced. An accompanying wiki website, http://turtle.gis.umn.edu, includes all the scripts and data used in the book. The website also features a solutions manual, providing answers to all of the exercises presented in the book. Visitors are invited to download/upload data and scripts and share comments, suggestions and questions with other visitors. Students, researchers and practitioners will find this to be both a valuable learning resource in statistics and R and an excellent reference book.
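By way of illustration of the bootstrap analyses mentioned above, here is a minimal base-R sketch of a percentile bootstrap confidence interval for a mean; the simulated data are an assumption and not a script from the book or its website.

```r
## Percentile bootstrap confidence interval for a mean in base R
## (illustrative data; not a script from the book or its website).
set.seed(1)
x <- rnorm(50, mean = 10, sd = 2)                      # toy sample
boot_means <- replicate(2000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))                  # 95% percentile interval
```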
This book presents the state of the art on numerical semigroups and related subjects, offering different perspectives on research in the field and including results and examples that are very difficult to find in a structured exposition elsewhere. The contents comprise the proceedings of the 2018 INdAM "International Meeting on Numerical Semigroups", held in Cortona, Italy. Talks at the meeting centered not only on traditional types of numerical semigroups, such as Arf or symmetric, and their usual properties, but also on related types of semigroups, such as affine, Puiseux, Weierstrass, and primary, and their applications in other branches of algebra, including semigroup rings, coding theory, star operations, and Hilbert functions. The papers in the book reflect the variety of the talks and derive from research areas including Semigroup Theory, Factorization Theory, Algebraic Geometry, Combinatorics, Commutative Algebra, Coding Theory, and Number Theory. The book is intended for researchers and students who want to learn about recent developments in the theory of numerical semigroups and its connections with other research fields.
This book covers the whole range of numerical mathematics--from linear equations to ordinary differential equations--and details the calculus of errors and partial differential equations. In attempting to give a unified approach to theory, algorithms, applications, and the use of software, the book contains many helpful examples and applications. Topics include linear optimization, numerical integration, initial value problems, and nonlinear equations. The book is appearing simultaneously with the problem-solving environment PAN, a system that contains an enlarged hypertext version of the text together with all of the programs described in the book, help systems, and utility tools. (PAN is licensed as public domain software.) The text is ideally suited as an introduction to numerical methods and programming for undergraduates in computer science, engineering, and mathematics. It will also be useful to software engineers using NAG libraries and numerical algorithms.
Genstat 5 Release 3 is a version of the statistical system developed by practising statisticians at Rothamsted Experimental Station. It provides statistical summary, analysis, data handling, and graphics for interactive or batch users, and includes a customizable menu-based interface. Genstat is used worldwide on personal computers, workstations, and mainframe computers by statisticians, research workers, and students in all fields of application of statistics. Release 3 contains many new facilities: the analysis of ordered categorical data; generalized additive models; combination of information in multi-stratum experimental designs; extensions to the REML (residual maximum likelihood) algorithm for testing fixed effects and for catering for correlation structures between random effects; estimation of parameters of statistical distributions; further probability functions; simplified data input; and many more extensions in high-resolution graphics, calculations, and manipulation. The manual has been rewritten for this release, including new chapters on Basic Statistics and REML, with extensive examples and illustrations. The text is suitable for users of Genstat 5.
This book explores missing data techniques and provides a detailed and easy-to-read introduction to multiple imputation, covering the theoretical aspects of the topic and offering hands-on help with the implementation. It discusses the pros and cons of various techniques and concepts, including multiple imputation quality diagnostics, an important topic for practitioners. It also presents current research and new, practically relevant developments in the field, and demonstrates the use of recent multiple imputation techniques designed for situations where distributional assumptions of the classical multiple imputation solutions are violated. In addition, the book features numerous practical tutorials for widely used R software packages to generate multiple imputations (norm, pan and mice). The provided R code and data sets allow readers to reproduce all the examples and enhance their understanding of the procedures. This book is intended for social and health scientists and other quantitative researchers who analyze incompletely observed data sets, as well as master's and PhD students with a sound basic knowledge of statistics.
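As a minimal sketch of the kind of workflow the mice package supports, the following R code generates, analyses, and pools multiple imputations using the nhanes data bundled with mice; the settings (m, the pmm method, the seed) are illustrative defaults rather than one of the book's tutorials.

```r
## Multiple imputation with the mice package, using its bundled nhanes data
## (illustrative sketch; not one of the book's tutorials).
library(mice)
imp <- mice(nhanes, m = 5, method = "pmm", seed = 123)  # create 5 imputed data sets
fit <- with(imp, lm(bmi ~ age + chl))                   # analyse each completed data set
summary(pool(fit))                                      # pool the results (Rubin's rules)
```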
Cohesively incorporates statistical theory with R implementation. Since the publication of the popular first edition of this comprehensive textbook, the contributed R packages on CRAN have increased from around 1,000 to over 6,000. Designed for an intermediate undergraduate course, Probability and Statistics with R, Second Edition explores how some of these new packages make analysis easier and more intuitive, as well as create more visually pleasing graphs. New to the second edition: improvements to existing examples, problems, concepts, data, and functions; new examples and exercises that use the most modern functions; coverage of the coverage probability of a confidence interval and of model validation; and highlighted R code for calculations and graph creation. Keeping pace with today's statistical landscape, this textbook brings students up to date on practical statistical topics and expands their knowledge of the practice of statistics. It effectively links statistical concepts with R procedures, empowering students to solve a vast array of real statistical problems with R. Web resources: a supplementary website offers solutions to odd exercises and templates for homework assignments, while the data sets and R functions are available on CRAN.
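One of the topics listed as new to the second edition is the coverage probability of a confidence interval; the short base-R simulation below illustrates the idea under illustrative settings that are not taken from the book.

```r
## Simulating the coverage probability of a 95% t confidence interval
## (illustrative settings; not an exercise taken from the book).
set.seed(42)
covered <- replicate(5000, {
  x  <- rnorm(30, mean = 5, sd = 2)
  ci <- t.test(x)$conf.int
  ci[1] <= 5 && 5 <= ci[2]         # does the interval capture the true mean?
})
mean(covered)                      # should be close to the nominal 0.95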
This book presents an introduction to structural equation modeling (SEM) and facilitates the access of students and researchers in various scientific fields to this powerful statistical tool. It offers a didactic initiation to SEM as well as to the open-source software lavaan and the rich and comprehensive technical features it provides. Structural Equation Modeling with lavaan thus helps the reader to gain autonomy in the use of SEM to test path models and dyadic models, perform confirmatory factor analyses and estimate more complex models such as general structural models with latent variables and latent growth models. SEM is approached both from the point of view of its process (i.e. the different stages of its use) and from the point of view of its product (i.e. the results it generates and how to read them).
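As a minimal sketch of a confirmatory factor analysis in lavaan, the code below fits the package's standard three-factor example on the HolzingerSwineford1939 data that ships with lavaan; it mirrors the package documentation rather than any specific model from the book.

```r
## Confirmatory factor analysis with lavaan on its bundled
## HolzingerSwineford1939 data (the package's standard example,
## not a model taken from the book).
library(lavaan)
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
fit <- cfa(model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```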
As explored in this open access book, higher education in STEM fields is influenced by many factors, including education research, government and school policies, financial considerations, technology limitations, and acceptance of innovations by faculty and students. In 2018, Drs. Ryoo and Winkelmann explored the opportunities, challenges, and future research initiatives of innovative learning environments (ILEs) in higher education STEM disciplines in their pioneering project: eXploring the Future of Innovative Learning Environments (X-FILEs). Workshop participants evaluated four main ILE categories: personalized and adaptive learning, multimodal learning formats, cross/extended reality (XR), and artificial intelligence (AI) and machine learning (ML). This open access book gathers the perspectives expressed during the X-FILEs workshop and its follow-up activities. It is designed to help inform education policy makers, researchers, developers, and practitioners about the adoption and implementation of ILEs in higher education.
This text examines the goals of data analysis with respect to enhancing knowledge, and identifies data summarization and correlation analysis as the core issues. Data summarization, both quantitative and categorical, is treated within the encoder-decoder paradigm, bringing forward a number of mathematically supported insights into the methods and the relations between them. Two chapters describe methods for categorical summarization (partitioning, divisive clustering, and separate cluster finding), and another explains methods for quantitative summarization (Principal Component Analysis and PageRank). Features: * an in-depth presentation of K-means partitioning, including a corresponding Pythagorean decomposition of the data scatter; * advice regarding issues such as clustering of categorical and mixed-scale data, similarity and network data, interpretation aids, anomalous clusters, the number of clusters, etc.; * thorough attention to data-driven modelling, including a number of mathematically stated relations between statistical and geometrical concepts, among them those between goodness-of-fit criteria for decision trees and data standardization, similarity and consensus clustering, and modularity clustering and uniform partitioning. New edition highlights: * inclusion of ranking issues such as Google PageRank, linear stratification, tied rankings median, consensus clustering, semi-average clustering, and one-cluster clustering; * restructuring to make the logic more straightforward and the sections self-contained. Core Data Analysis: Summarization, Correlation and Visualization is aimed at those who are eager to participate in developing the field, while also appealing to novices and practitioners.
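The Pythagorean decomposition of the data scatter mentioned above can be seen directly in a K-means fit, where the total scatter splits into within-cluster and between-cluster parts; the small base-R sketch below uses the built-in iris data and base R's kmeans() as a stand-in for the author's own methods.

```r
## K-means partitioning and the within/between decomposition of the data
## scatter (iris and base R's kmeans() are stand-ins, not the author's code).
set.seed(2)
km <- kmeans(scale(iris[, 1:4]), centers = 3, nstart = 25)
table(cluster = km$cluster, species = iris$Species)        # clusters vs. known labels
c(total = km$totss, within = km$tot.withinss, between = km$betweenss)
all.equal(km$totss, km$tot.withinss + km$betweenss)        # scatter decomposition holds
```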
Recent data shows that 87% of Artificial Intelligence/Big Data projects don't make it into production (VB Staff, 2019), meaning that most projects are never deployed. This book addresses five common pitfalls that prevent projects from reaching deployment and provides tools and methods to avoid those pitfalls. Along the way, stories from actual experience in building and deploying data science projects are shared to illustrate the methods and tools. While the book is primarily for data science practitioners, information for managers of data science practitioners is included in the Tips for Managers sections.
This book provides an introduction to quantitative marketing with Python. The book presents a hands-on approach to using Python for real marketing questions, organized by key topic areas. Following the Python scientific computing movement toward reproducible research, the book presents all analyses in Colab notebooks, which integrate code, figures, tables, and annotation in a single file. The code notebooks for each chapter may be copied, adapted, and reused in one's own analyses. The book also introduces the usage of machine learning predictive models using the Python sklearn package in the context of marketing research. This book is designed for three groups of readers: experienced marketing researchers who wish to learn to program in Python, coming from tools and languages such as R, SAS, or SPSS; analysts or students who already program in Python and wish to learn about marketing applications; and undergraduate or graduate marketing students with little or no programming background. It presumes only an introductory level of familiarity with formal statistics and contains a minimum of mathematics.
This book provides a general introduction to Sequential Monte Carlo (SMC) methods, also known as particle filters. These methods have become a staple for the sequential analysis of data in such diverse fields as signal processing, epidemiology, machine learning, population ecology, quantitative finance, and robotics. The coverage is comprehensive, ranging from the underlying theory to computational implementation, methodology, and diverse applications in various areas of science. This is achieved by describing SMC algorithms as particular cases of a general framework, which involves concepts such as Feynman-Kac distributions, and tools such as importance sampling and resampling. This general framework is used consistently throughout the book. Extensive coverage is provided on sequential learning (filtering, smoothing) of state-space (hidden Markov) models, as this remains an important application of SMC methods. More recent applications, such as parameter estimation of these models (through e.g. particle Markov chain Monte Carlo techniques) and the simulation of challenging probability distributions (in e.g. Bayesian inference or rare-event problems), are also discussed. The book may be used either as a graduate text on Sequential Monte Carlo methods and state-space modeling, or as a general reference work on the area. Each chapter includes a set of exercises for self-study, a comprehensive bibliography, and a "Python corner," which discusses the practical implementation of the methods covered. In addition, the book comes with an open source Python library, which implements all the algorithms described in the book, and contains all the programs that were used to perform the numerical experiments.
The second edition of a bestselling textbook, Using R for Introductory Statistics guides students through the basics of R, helping them overcome the sometimes steep learning curve. The author does this by breaking the material down into small, task-oriented steps. The second edition maintains the features that made the first edition so popular, while updating data and examples and reflecting changes to R in line with the current version. New in the second edition: increased emphasis on more idiomatic R provides a grounding in the functionality of base R; discussion of the use of RStudio helps new R users avoid as many pitfalls as possible; use of the knitr package makes code easier to read and therefore easier to reason about; additional information on computer-intensive approaches motivates the traditional approach; and updated examples and data make the information current and topical. The book has an accompanying package, UsingR, available from CRAN, R's repository of user-contributed packages. The package contains the data sets mentioned in the text (data(package="UsingR")), answers to selected problems (answers()), a few demonstrations (demo()), the errata (errata()), and sample code from the text. The topics of this text line up closely with the traditional teaching progression; however, the book also highlights computer-intensive approaches to motivate the more traditional approach. The author emphasizes realistic data and examples and relies on visualization techniques to gather insight. Statistics and R are introduced seamlessly, giving students the tools they need to use R and the information they need to navigate the sometimes complex world of statistical computing.
This book presents an introduction to linear univariate and multivariate time series analysis, providing brief theoretical insights into each topic and illustrating the theory with software examples from the outset. As such, it quickly introduces readers to the peculiarities of each subject from both the theoretical and the practical points of view. It also includes numerous examples and real-world applications that demonstrate how to handle different types of time series data. The associated software package, SSMMATLAB, is written in MATLAB and also runs on the free OCTAVE platform. The book focuses on linear time series models using a state space approach, with the Kalman filter and smoother as the main tools for model estimation, prediction and signal extraction. A chapter on state space models describes these tools and provides examples of their use with general state space models. Other topics discussed in the book include ARIMA, transfer function and structural models, as well as signal extraction using the canonical decomposition, in the univariate case, and VAR, VARMA, cointegrated VARMA, VARX, VARMAX and multivariate structural models in the multivariate case. It also addresses spectral analysis, the use of fixed filters in a model-based approach, and automatic model identification procedures for ARIMA and transfer function models in the presence of outliers, interventions, complex seasonal patterns and other effects such as Easter and trading day. This book is intended for both students and researchers in various fields dealing with time series. The software provides numerous automatic procedures to handle common practical situations, but at the same time, readers with programming skills can write their own programs to deal with specific problems. Although the theoretical introduction to each topic is kept to a minimum, readers can consult the companion book 'Multivariate Time Series With Linear State Space Structure', by the same author, if they require more details.
This book introduces readers to various signal processing models that have been used in analyzing periodic data, and discusses the statistical and computational methods involved. Signal processing can broadly be considered to be the recovery of information from physical observations. The received signals are usually disturbed by thermal, electrical, atmospheric or intentional interferences, and due to their random nature, statistical techniques play an important role in their analysis. Statistics is also used in the formulation of appropriate models to describe the behavior of systems, the development of appropriate techniques for estimation of model parameters and the assessment of the model performances. Analyzing different real-world data sets to illustrate how different models can be used in practice, and highlighting open problems for future research, the book is a valuable resource for senior undergraduate and graduate students specializing in mathematics or statistics.
Testing for economic convergence across countries has been a central issue in the literature of economic growth and development. This book introduces a modern framework to study the cross-country convergence dynamics in labor productivity and its proximate sources: capital accumulation and aggregate efficiency. In particular, recent convergence dynamics of developed as well as developing countries are evaluated through the lens of a non-linear dynamic factor model and a clustering algorithm for panel data. This framework allows us to examine key economic phenomena such as technological heterogeneity and multiple equilibria. In this context, the book provides a succinct review of the recent club convergence literature, a comparative view of developed and developing countries, and a tutorial on how to implement the club convergence framework in the statistical software Stata.
This Springer brief addresses the challenges encountered in the study of the optimization of time-nonhomogeneous Markov chains. It develops new insights and new methodologies for systems in which concepts such as stationarity, ergodicity, periodicity and connectivity do not apply. This brief introduces the novel concept of confluencity and applies a relative optimization approach. It develops a comprehensive theory for optimization of the long-run average of time-nonhomogeneous Markov chains. The book shows that confluencity is the most fundamental concept in optimization, and that relative optimization is more suitable for treating the systems under consideration than standard ideas of dynamic programming. Using confluencity and relative optimization, the author classifies states as confluent or branching and shows how the under-selectivity issue of the long-run average can be easily addressed, multi-class optimization implemented, and Nth biases and Blackwell optimality conditions derived. These results are presented in a book for the first time and so may enhance the understanding of optimization and motivate new research ideas in the area.
Nonlinear Parameter Optimization Using R. John C. Nash, Telfer School of Management, University of Ottawa, Canada. A systematic and comprehensive treatment of optimization software using R. In recent decades, optimization techniques have been streamlined by computational and artificial intelligence methods to analyze more variables, especially under non-linear, multivariable conditions, more quickly than ever before. Optimization is an important tool for decision science and for the analysis of physical systems used in engineering. Nonlinear Parameter Optimization with R explores the principal tools available in R for function minimization, optimization, and nonlinear parameter determination, and features numerous examples throughout. The book: provides a comprehensive treatment of optimization techniques; examines optimization problems that arise in statistics and how to solve them using R; enables researchers and practitioners to solve parameter determination problems; presents traditional methods as well as recent developments in R; and is supported by an accompanying website featuring R code, examples and datasets. Researchers and practitioners who need to solve parameter determination problems, and who are users of R but novices in the field of optimization or function minimization, will benefit from this book. It will also be useful for scientists building and estimating nonlinear models in various fields such as hydrology, sports forecasting, ecology, chemical engineering, pharmacokinetics, agriculture, economics and statistics.
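A minimal example of function minimization in R is base R's optim() applied to the classic Rosenbrock test function; this is the stock example from the optim() documentation, not necessarily one drawn from the book.

```r
## Nonlinear function minimization with base R's optim() on the Rosenbrock
## test function (the stock optim() example, not necessarily from the book).
rosenbrock <- function(p) 100 * (p[2] - p[1]^2)^2 + (1 - p[1])^2
res <- optim(par = c(-1.2, 1), fn = rosenbrock, method = "BFGS")
res$par          # should be close to the minimiser c(1, 1)
res$convergence  # 0 indicates successful convergence
```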
The objective of this text is to introduce RStudio to practitioners and students and enable them to use R in their everyday work. It is not a statistics textbook; rather, its purpose is to convey the joy of analyzing data with RStudio. Practitioners and students learn how RStudio can be installed and used, and they learn to import data, write scripts and save working results. Furthermore, they learn to employ descriptive statistics and create graphics with RStudio. Additionally, it is shown how RStudio can be used to test hypotheses and run analyses of variance and regressions. To deepen the learned content, tasks are included, with the solutions provided at the end of the textbook. This textbook has been recommended and developed for university courses in Germany, Austria and Switzerland.
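For a flavour of the analyses mentioned above, the compact sketch below runs descriptive statistics, a hypothesis test, an analysis of variance and a regression on the built-in mtcars data; the data set and the specific models are illustrative stand-ins for the book's own tasks.

```r
## Descriptive statistics, a t-test, a one-way ANOVA and a regression,
## run on the built-in mtcars data (a stand-in for the book's own data sets).
summary(mtcars$mpg)                               # descriptive statistics
t.test(mpg ~ am, data = mtcars)                   # hypothesis test: automatic vs. manual
summary(aov(mpg ~ factor(cyl), data = mtcars))    # one-way analysis of variance
summary(lm(mpg ~ wt + hp, data = mtcars))         # multiple linear regression
```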
Now in its second edition, Text Analysis with R provides a practical introduction to computational text analysis using the open source programming language R. R is an extremely popular programming language, used throughout the sciences; due to its accessibility, it is now used increasingly in other research areas. In this volume, readers immediately begin working with text, and each chapter examines a new technique or process, allowing readers to obtain broad exposure to core R procedures and a fundamental understanding of the possibilities of computational text analysis at both the micro and the macro scale. Each chapter builds on its predecessor as readers move from small-scale "microanalysis" of single texts to large-scale "macroanalysis" of text corpora, and each concludes with a set of practice exercises that reinforce and expand upon the chapter lessons. The book's focus is on making the technical palatable, useful, and immediately gratifying. Text Analysis with R is written with students and scholars of literature in mind but will be applicable to other humanists and social scientists wishing to extend their methodological toolkit to include quantitative and computational approaches to the study of text. Computation provides access to information in text that readers simply cannot gather using traditional qualitative methods of close reading and human synthesis. This new edition features two new chapters: one that introduces dplyr and tidyr in the context of parsing and analyzing dramatic texts to extract speaker and receiver data, and one on sentiment analysis using the syuzhet package. It is also filled with updated material in every chapter to integrate new developments in the field, current practices in R style, and the use of more efficient algorithms.
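As a minimal sketch of the sentiment analysis covered in the new syuzhet chapter, the code below scores a couple of toy sentences; the sentences are illustrative and not drawn from the book's corpora.

```r
## Sentence-level sentiment scoring with the syuzhet package
## (toy sentences; not an excerpt from the book's corpora).
library(syuzhet)
sentences <- c("Now is the winter of our discontent.",
               "O brave new world, that has such people in it!")
get_sentiment(sentences, method = "syuzhet")   # one numeric score per sentence
```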
Program for data analysis using R and learn practical skills to make your work more efficient. This revised book explores how to automate running code and the creation of reports to share your results, as well as writing functions and packages. It includes key R 4 features such as a new color palette for charts, an enhanced reference counting system, and normalization of matrix and array types, where matrix objects now formally inherit from the array class, eliminating inconsistencies. Advanced R 4 Data Programming and the Cloud is not designed to teach advanced R programming nor to teach the theory behind statistical procedures. Rather, it is designed to be a practical guide moving beyond merely using R; it shows you how to program in R to automate tasks. This book will teach you how to manipulate data in modern R structures and includes connecting R to databases such as PostgreSQL, cloud services such as Amazon Web Services (AWS), and digital dashboards such as Shiny. Each chapter also includes a detailed bibliography with references to research articles and other resources that cover relevant conceptual and theoretical topics. What you will learn: write and document R functions using R 4; make an R package and share it via GitHub or privately; add tests to R code to ensure it works as intended; use R to talk directly to databases and do complex data management; run R in the Amazon cloud; deploy a Shiny digital dashboard; and generate presentation-ready tables and reports using R. Who this book is for: working professionals, researchers, and students who are familiar with R and basic statistical techniques such as linear regression and who want to learn how to take their R coding and programming to the next level.
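As a small sketch of the "write and document R functions" step, the example below defines a function with roxygen2-style documentation comments; the function itself is a made-up illustration rather than code from the book.

```r
## Writing and documenting an R function with roxygen2-style comments
## (a made-up illustration; not code taken from the book).

#' Trimmed mean with missing values removed
#'
#' @param x    a numeric vector
#' @param trim fraction (0 to < 0.5) of observations dropped from each tail
#' @return the trimmed mean of `x`
trimmed_mean <- function(x, trim = 0.1) {
  stopifnot(is.numeric(x), trim >= 0, trim < 0.5)
  mean(x, trim = trim, na.rm = TRUE)
}

trimmed_mean(c(1, 2, 3, 100, NA))
```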
This textbook will familiarize students in economics and business, as well as practitioners, with the basic principles, techniques, and applications of applied statistics, statistical testing, and multivariate data analysis. Drawing on practical examples from the business world, it demonstrates the methods of univariate, bivariate, and multivariate statistical analysis. The textbook covers a range of topics, from data collection and scaling to the presentation and simple univariate analysis of quantitative data, while also providing advanced analytical procedures for assessing multivariate relationships. Accordingly, it addresses all topics typically covered in university courses on statistics and advanced applied data analysis. In addition, it does not limit itself to presenting applied methods, but also discusses the related use of Excel, SPSS, and Stata.