This book offers a systematic and rigorous treatment of continuous-time Markov decision processes, covering both theory and possible applications to queueing systems, epidemiology, finance, and other fields. Unlike most books on the subject, much attention is paid to problems with functional constraints and the realizability of strategies. Three major methods of investigation are presented, based on dynamic programming, linear programming, and reduction to discrete-time problems. Although the main focus is on models with total (discounted or undiscounted) cost criteria, models with average cost criteria and with impulsive controls are also discussed in depth. The book is self-contained. A separate chapter is devoted to Markov pure jump processes and the appendices collect the requisite background on real analysis and applied probability. All the statements in the main text are proved in detail. Researchers and graduate students in applied probability, operational research, statistics and engineering will find this monograph interesting, useful and valuable.
Text Mining with MATLAB® provides a comprehensive introduction to text mining using MATLAB. It is designed to help text mining practitioners, as well as those with little-to-no experience with text mining in general, familiarize themselves with MATLAB and its text mining applications. The book is structured in three main parts: The first part, Fundamentals, introduces basic procedures and methods for manipulating and operating with text within the MATLAB programming environment. The second part, Mathematical Models, is devoted to motivating, introducing, and explaining the two main paradigms of mathematical models most commonly used for representing text data: the statistical and the geometrical approach. Finally, the third part, Techniques and Applications, addresses general problems in text mining and natural language processing applications such as document categorization, document search, content analysis, summarization, question answering, and conversational systems. This second edition includes updates in line with the recently released "Text Analytics Toolbox" within the MATLAB product and introduces three new chapters and six new sections in existing ones. All descriptions presented are supported with practical examples that are fully reproducible. Suggestions for further reading, as well as additional exercises and projects, are offered at the end of each chapter for readers interested in conducting further experimentation.
Chunyan Li is a course instructor with many years of experience in teaching time series analysis. His book is essential for students and researchers in oceanography and other Earth science subjects who are looking for complete coverage of the theory and practice of time series data analysis using MATLAB. This textbook covers the topic's core theory in depth and provides numerous instructional examples, many drawn directly from the author's own teaching experience, using data files, examples, and exercises. The book explores many concepts, including time; distance on Earth; wind, current, and wave data formats; finding a subset of ship-based data along planned or random transects; error propagation; Taylor series expansion for error estimates; the least squares method; base functions and linear independence of base functions; tidal harmonic analysis; Fourier series and the generalized Fourier transform; filtering techniques; sampling theorems; finite sampling effects; wavelet analysis; and EOF analysis.
This book introduces the basic methodologies for successful data analytics. Matrix optimization and approximation are explained in detail and extensively applied to dimensionality reduction by principal component analysis and multidimensional scaling. Diffusion maps and spectral clustering are derived as powerful tools. The methodological overlap between data science and machine learning is emphasized by demonstrating how data science is used for classification as well as supervised and unsupervised learning.
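The dimensionality reduction by principal component analysis mentioned above can be sketched in a few lines. This is not code from the book; it is a minimal illustration (with hypothetical function name and toy data) of projecting centered data onto its leading singular vectors:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components.

    Centers the columns, takes the SVD of the centered matrix, and
    keeps the first k right singular vectors as the projection basis.
    """
    Xc = X - X.mean(axis=0)                        # center each column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # n x k component scores

# Toy data lying almost on a line in 3-D: one component should
# capture nearly all of the variance.
rng = np.random.default_rng(0)
t = rng.normal(size=200)
X = np.column_stack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(200, 3))
Z = pca_reduce(X, k=1)
print(Z.shape)  # (200, 1)
```

The same scores could be obtained from an eigendecomposition of the covariance matrix; the SVD route is the numerically standard choice.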
An SPSS Companion for the Third Edition of The Fundamentals of Political Science Research offers students a chance to delve into the world of SPSS using real political science data sets and statistical analysis techniques directly from Paul M. Kellstedt and Guy D. Whitten's best-selling textbook. Built in parallel with the main text, this workbook teaches students to apply the techniques they learn in each chapter by reproducing the analyses and results from each lesson using SPSS. Students will also learn to create all of the tables and figures found in the textbook, leading to an even greater mastery of the core material. This accessible, informative, and engaging companion walks through the use of SPSS step-by-step, using command lines and screenshots to demonstrate proper use of the software. With the help of these guides, students will become comfortable creating, editing, and using data sets in SPSS to produce original statistical analyses for evaluating causal claims. End-of-chapter exercises encourage this innovation by asking students to formulate and evaluate their own hypotheses.
This book introduces readers to various signal processing models that have been used in analyzing periodic data, and discusses the statistical and computational methods involved. Signal processing can broadly be considered to be the recovery of information from physical observations. The received signals are usually disturbed by thermal, electrical, atmospheric or intentional interferences, and due to their random nature, statistical techniques play an important role in their analysis. Statistics is also used in the formulation of appropriate models to describe the behavior of systems, the development of appropriate techniques for estimation of model parameters and the assessment of the model performances. Analyzing different real-world data sets to illustrate how different models can be used in practice, and highlighting open problems for future research, the book is a valuable resource for senior undergraduate and graduate students specializing in mathematics or statistics.
This open access book presents a set of basic techniques for estimating the benefit of IT development projects and portfolios. It also offers methods for monitoring how much of that estimated benefit is being achieved during projects. Readers can then use these benefit estimates together with cost estimates to create a benefit/cost index to help them decide which functionalities to send into construction and in what order. This allows them to focus on constructing the functionality that offers the best value for money at an early stage. Although benefits management involves a wide range of activities in addition to estimation and monitoring, the techniques in this book provide a clear guide to achieving what has always been the goal of project and portfolio stakeholders: developing systems that produce as much usefulness and value as possible for the money invested. The techniques can also help deal with vicarious motives and obstacles that prevent this from happening. The book equips readers to recognize when a project budget should not be spent in full and resources be allocated elsewhere in a portfolio instead. It also provides development managers and upper management with common ground as a basis for making informed decisions.
The goal of this book is to gather in a single work the most relevant concepts related to modern optimization methods, showing how such theories and methods can be addressed using the open source, multi-platform R tool. Modern optimization methods, also known as metaheuristics, are particularly useful for solving complex problems for which no specialized optimization algorithm has been developed. These methods often yield high quality solutions with a more reasonable use of computational resources (e.g. memory and processing effort). Examples of popular modern methods discussed in this book are: simulated annealing; tabu search; genetic algorithms; differential evolution; and particle swarm optimization. This book is suitable for undergraduate and graduate students in computer science, information technology, and related areas, as well as data analysts interested in exploring modern optimization methods using R. This new edition integrates the latest R packages through text and code examples. It also discusses new topics, such as: the impact of artificial intelligence and business analytics in modern optimization tasks; the creation of interactive Web applications; usage of parallel computing; and more modern optimization algorithms (e.g., iterated racing, ant colony optimization, grammatical evolution).
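Simulated annealing, the first metaheuristic listed, is easy to sketch. The snippet below is not from the book (whose examples are in R); it is a minimal Python illustration, with hypothetical parameter values, of the core idea of accepting worse moves with a probability that decays as the temperature cools:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=2.0, cooling=0.99, iters=2000, seed=42):
    """Minimize f over the reals with a basic simulated annealing loop."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)      # random neighbour of x
        fc = f(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp((fx - fc) / t), which shrinks as t cools.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                             # geometric cooling schedule
    return best, fbest

# Minimize (x - 3)^2 starting far from the optimum; the search
# should end up near x = 3.
x, fx = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-5.0)
print(x, fx)
```

The same accept/cool skeleton carries over to combinatorial problems by swapping the real-valued neighbour move for a discrete one.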
This advanced textbook explores small area estimation techniques, covers the underlying mathematical and statistical theory and offers hands-on support with their implementation. It presents the theory in a rigorous way and compares and contrasts various statistical methodologies, helping readers understand how to develop new methodologies for small area estimation. It also includes numerous sample applications of small area estimation techniques. The underlying R code is provided in the text and applied to four datasets that mimic data from labor markets and living conditions surveys, where the socioeconomic indicators include the small area estimation of total unemployment, unemployment rates, average annual household incomes and poverty indicators. Given its scope, the book will be useful for master and PhD students, and for official and other applied statisticians.
R, an open source software environment, has become the "de facto" statistical computing environment. It has an excellent collection of data manipulation and graphics capabilities. It is extensible and comes with a large number of packages that allow statistical analysis at all levels - from simple to advanced - and in numerous fields including Medicine, Genetics, Biology, Environmental Sciences, Geology, Social Sciences and much more. The software is maintained and developed by academics and professionals and, as such, is continuously evolving and up to date. "Statistics and Data with R" presents an accessible guide to data manipulation, statistical analysis and graphics using R. Assuming no previous knowledge of statistics or R, the book includes: A comprehensive introduction to the R language. An integrated approach to importing and preparing data for analysis, exploring and analyzing the data, and presenting results. Over 300 examples, including detailed explanations of the R scripts used throughout. Over 100 moderately large data sets from disciplines ranging from Biology, Ecology and Environmental Science to Medicine, Law, Military and Social Sciences. A parallel discussion of analyses with the normal density, proportions (binomial), counts (Poisson) and bootstrap methods. Two extensive indexes that include references to every R function (and its arguments) and package used in the book, and to every introduced concept. An accompanying Wiki website, http://turtle.gis.umn.edu, includes all the scripts and data used in the book. The website also features a solutions manual, providing answers to all of the exercises presented in the book. Visitors are invited to download/upload data and scripts and share comments, suggestions and questions with other visitors. Students, researchers and practitioners will find this to be both a valuable learning resource in statistics and R and an excellent reference book.
This book discusses the development of Rosenbrock-Wanner methods from the origins of the idea to current research, with the stable and efficient numerical solution of differential-algebraic systems of equations still in focus. The reader gets a comprehensive insight into the classical methods as well as into the development and properties of novel W-methods, two-step and exponential Rosenbrock methods. In addition, descriptive applications from the fields of water and hydrogen network simulation and visual computing are presented.
This book covers the whole range of numerical mathematics--from linear equations to ordinary differential equations--and details the calculus of errors and partial differential equations. In attempting to give a unified approach to theory, algorithms, applications, and use of software, the book contains many helpful examples and applications. Topics include linear optimization, numerical integration, initial value problems, and nonlinear equations. The book is appearing simultaneously with the problem-solving environment PAN, a system that contains an enlarged hypertext version of the text together with all of the programs described in the book, help systems, and utility tools. (PAN is licensed public domain software.) The text is ideally suited as an introduction to numerical methods and programming for undergraduates in computer science, engineering, and mathematics. It will also be useful to software engineers using NAG libraries and numerical algorithms.
This book presents the state of the art on numerical semigroups and related subjects, offering different perspectives on research in the field and including results and examples that are very difficult to find in a structured exposition elsewhere. The contents comprise the proceedings of the 2018 INdAM "International Meeting on Numerical Semigroups", held in Cortona, Italy. Talks at the meeting centered not only on traditional types of numerical semigroups, such as Arf or symmetric, and their usual properties, but also on related types of semigroups, such as affine, Puiseux, Weierstrass, and primary, and their applications in other branches of algebra, including semigroup rings, coding theory, star operations, and Hilbert functions. The papers in the book reflect the variety of the talks and derive from research areas including Semigroup Theory, Factorization Theory, Algebraic Geometry, Combinatorics, Commutative Algebra, Coding Theory, and Number Theory. The book is intended for researchers and students who want to learn about recent developments in the theory of numerical semigroups and its connections with other research fields.
Genstat 5 Release 3 is a version of the statistical system developed by practising statisticians at Rothamsted Experimental Station. It provides statistical summary, analysis, data-handling, and graphics for interactive or batch users, and includes a customizable menu-based interface. Genstat is used worldwide on personal computers, workstations, and mainframe computers by statisticians, research workers, and students in all fields of application of statistics. Release 3 contains many new facilities: the analysis of ordered categorical data; generalized additive models; combination of information in multi-stratum experimental designs; extensions to the REML (residual maximum-likelihood) algorithm for testing fixed effects and to cater for correlation structures between random effects; estimation of parameters of statistical distributions; further probability functions; simplified data input; and many more extensions, in high-resolution graphics, for calculations, and for manipulation. The manual has been rewritten for this release, including new chapters on Basic Statistics and REML, with extensive examples and illustrations. The text is suitable for users of Genstat 5, i.e. statisticians, research workers, and students of statistics.
Cohesively incorporating statistical theory with R implementation, this comprehensive textbook has grown alongside CRAN: since the publication of the popular first edition, the contributed R packages on CRAN have increased from around 1,000 to over 6,000. Designed for an intermediate undergraduate course, Probability and Statistics with R, Second Edition explores how some of these new packages make analysis easier and more intuitive, as well as create more visually pleasing graphs. New to the second edition: improvements to existing examples, problems, concepts, data, and functions; new examples and exercises that use the most modern functions; coverage probability of a confidence interval and model validation; and highlighted R code for calculations and graph creation. Keeping pace with today's statistical landscape, this textbook expands students' knowledge of the practice of statistics. It effectively links statistical concepts with R procedures, empowering students to solve a vast array of real statistical problems with R. A supplementary website offers solutions to odd exercises and templates for homework assignments, while the data sets and R functions are available on CRAN.
This book explores missing data techniques and provides a detailed and easy-to-read introduction to multiple imputation, covering the theoretical aspects of the topic and offering hands-on help with the implementation. It discusses the pros and cons of various techniques and concepts, including multiple imputation quality diagnostics, an important topic for practitioners. It also presents current research and new, practically relevant developments in the field, and demonstrates the use of recent multiple imputation techniques designed for situations where distributional assumptions of the classical multiple imputation solutions are violated. In addition, the book features numerous practical tutorials for widely used R software packages to generate multiple imputations (norm, pan and mice). The provided R code and data sets allow readers to reproduce all the examples and enhance their understanding of the procedures. This book is intended for social and health scientists and other quantitative researchers who analyze incompletely observed data sets, as well as master's and PhD students with a sound basic knowledge of statistics.
This text examines the goals of data analysis with respect to enhancing knowledge, and identifies data summarization and correlation analysis as the core issues. Data summarization, both quantitative and categorical, is treated within the encoder-decoder paradigm, bringing forward a number of mathematically supported insights into the methods and the relations between them. Two chapters describe methods for categorical summarization (partitioning, divisive clustering, and separate cluster finding), and another explains the methods for quantitative summarization, Principal Component Analysis and PageRank. Features:
* An in-depth presentation of K-means partitioning, including a corresponding Pythagorean decomposition of the data scatter.
* Advice regarding such issues as clustering of categorical and mixed-scale data, similarity and network data, interpretation aids, anomalous clusters, the number of clusters, etc.
* Thorough attention to data-driven modelling, including a number of mathematically stated relations between statistical and geometrical concepts, such as those between goodness-of-fit criteria for decision trees and data standardization, similarity and consensus clustering, and modularity clustering and uniform partitioning.
New edition highlights:
* Inclusion of ranking issues such as Google PageRank, linear stratification, tied rankings median, consensus clustering, semi-average clustering, and one-cluster clustering.
* Restructured to make the logic more straightforward and the sections self-contained.
Core Data Analysis: Summarization, Correlation and Visualization is aimed at those who are eager to participate in developing the field, as well as appealing to novices and practitioners.
This book provides a general introduction to Sequential Monte Carlo (SMC) methods, also known as particle filters. These methods have become a staple for the sequential analysis of data in such diverse fields as signal processing, epidemiology, machine learning, population ecology, quantitative finance, and robotics. The coverage is comprehensive, ranging from the underlying theory to computational implementation, methodology, and diverse applications in various areas of science. This is achieved by describing SMC algorithms as particular cases of a general framework, which involves concepts such as Feynman-Kac distributions, and tools such as importance sampling and resampling. This general framework is used consistently throughout the book. Extensive coverage is provided on sequential learning (filtering, smoothing) of state-space (hidden Markov) models, as this remains an important application of SMC methods. More recent applications, such as parameter estimation of these models (through e.g. particle Markov chain Monte Carlo techniques) and the simulation of challenging probability distributions (in e.g. Bayesian inference or rare-event problems), are also discussed. The book may be used either as a graduate text on Sequential Monte Carlo methods and state-space modeling, or as a general reference work on the area. Each chapter includes a set of exercises for self-study, a comprehensive bibliography, and a "Python corner," which discusses the practical implementation of the methods covered. In addition, the book comes with an open source Python library, which implements all the algorithms described in the book, and contains all the programs that were used to perform the numerical experiments.
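Resampling, one of the core tools mentioned above, is simple to illustrate. The snippet below is independent of the book's accompanying library; it is a minimal sketch of multinomial resampling, the step in a particle filter that duplicates high-weight particles and discards low-weight ones:

```python
import random
from collections import Counter

def multinomial_resample(particles, weights, n=None, seed=0):
    """Draw n particles (default: len(particles)) with replacement,
    each chosen with probability proportional to its weight."""
    rng = random.Random(seed)
    n = len(particles) if n is None else n
    return rng.choices(particles, weights=weights, k=n)

# Three particles; the first carries 90% of the weight, so after
# 1000 resampling draws it should make up roughly 90% of the output.
counts = Counter(multinomial_resample(["a", "b", "c"], [0.9, 0.05, 0.05], n=1000))
print(counts)
```

In practice, lower-variance schemes such as systematic or residual resampling are often preferred over plain multinomial draws; the interface stays the same.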
As explored in this open access book, higher education in STEM fields is influenced by many factors, including education research, government and school policies, financial considerations, technology limitations, and acceptance of innovations by faculty and students. In 2018, Drs. Ryoo and Winkelmann explored the opportunities, challenges, and future research initiatives of innovative learning environments (ILEs) in higher education STEM disciplines in their pioneering project: eXploring the Future of Innovative Learning Environments (X-FILEs). Workshop participants evaluated four main ILE categories: personalized and adaptive learning, multimodal learning formats, cross/extended reality (XR), and artificial intelligence (AI) and machine learning (ML). This open access book gathers the perspectives expressed during the X-FILEs workshop and its follow-up activities. It is designed to help inform education policy makers, researchers, developers, and practitioners about the adoption and implementation of ILEs in higher education.
Recent data shows that 87% of Artificial Intelligence/Big Data projects don't make it into production (VB Staff, 2019), meaning that most projects are never deployed. This book addresses five common pitfalls that prevent projects from reaching deployment and provides tools and methods to avoid those pitfalls. Along the way, stories from actual experience in building and deploying data science projects are shared to illustrate the methods and tools. While the book is primarily for data science practitioners, information for managers of data science practitioners is included in the Tips for Managers sections.
The second edition of a bestselling textbook, Using R for Introductory Statistics guides students through the basics of R, helping them overcome the sometimes steep learning curve. The author does this by breaking the material down into small, task-oriented steps. The second edition maintains the features that made the first edition so popular, while updating data and examples and reflecting changes to R in line with the current version. See What's New in the Second Edition: Increased emphasis on more idiomatic R provides a grounding in the functionality of base R. Discussions of the use of RStudio help new R users avoid as many pitfalls as possible. Use of the knitr package makes code easier to read and therefore easier to reason about. Additional information on computer-intensive approaches motivates the traditional approach. Updated examples and data make the information current and topical. The book has an accompanying package, UsingR, available from CRAN, R's repository of user-contributed packages. The package contains the data sets mentioned in the text (data(package="UsingR")), answers to selected problems (answers()), a few demonstrations (demo()), the errata (errata()), and sample code from the text. The topics of this text line up closely with traditional teaching progression; however, the book also highlights computer-intensive approaches to motivate the more traditional approach. The author emphasizes realistic data and examples and relies on visualization techniques to gather insight. The book introduces statistics and R seamlessly, giving students the tools they need to use R and the information they need to navigate the sometimes complex world of statistical computing.
This book presents an introduction to linear univariate and multivariate time series analysis, providing brief theoretical insights into each topic, and from the beginning illustrating the theory with software examples. As such, it quickly introduces readers to the peculiarities of each subject from both the theoretical and practical points of view. It also includes numerous examples and real-world applications that demonstrate how to handle different types of time series data. The associated software package, SSMMATLAB, is written in MATLAB and also runs on the free OCTAVE platform. The book focuses on linear time series models using a state space approach, with the Kalman filter and smoother as the main tools for model estimation, prediction and signal extraction. A chapter on state space models describes these tools and provides examples of their use with general state space models. Other topics discussed in the book include ARIMA, transfer function, and structural models, as well as signal extraction using the canonical decomposition in the univariate case, and VAR, VARMA, cointegrated VARMA, VARX, VARMAX, and multivariate structural models in the multivariate case. It also addresses spectral analysis, the use of fixed filters in a model-based approach, and automatic model identification procedures for ARIMA and transfer function models in the presence of outliers, interventions, complex seasonal patterns and other effects like Easter, trading day, etc. This book is intended for both students and researchers in various fields dealing with time series. The software provides numerous automatic procedures to handle common practical situations, but at the same time, readers with programming skills can write their own programs to deal with specific problems.
Although the theoretical introduction to each topic is kept to a minimum, readers can consult the companion book 'Multivariate Time Series With Linear State Space Structure', by the same author, if they require more details.
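The Kalman filter at the heart of the state space approach described above can be sketched compactly. This is not SSMMATLAB code; it is a minimal univariate Python illustration, with hypothetical noise variances, of the predict/update recursion:

```python
def kalman_filter_1d(ys, a=1.0, q=0.1, h=1.0, r=1.0, m0=0.0, p0=1.0):
    """Minimal univariate Kalman filter for the state space model
    x_t = a * x_{t-1} + state noise (variance q),
    y_t = h * x_t     + observation noise (variance r).
    Returns the filtered state means."""
    m, p = m0, p0
    means = []
    for y in ys:
        # Predict: propagate mean and variance through the state equation.
        m_pred = a * m
        p_pred = a * p * a + q
        # Update: blend the prediction with the new observation.
        k = p_pred * h / (h * p_pred * h + r)    # Kalman gain
        m = m_pred + k * (y - h * m_pred)
        p = (1 - k * h) * p_pred
        means.append(m)
    return means

# Noisy observations of a roughly constant level around 1.
means = kalman_filter_1d([1.0, 1.2, 0.9, 1.1])
print([round(m, 3) for m in means])
```

The multivariate case replaces the scalars with matrices but keeps the same two-step recursion, which is what state space software implements in full generality.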
Testing for economic convergence across countries has been a central issue in the literature of economic growth and development. This book introduces a modern framework to study the cross-country convergence dynamics in labor productivity and its proximate sources: capital accumulation and aggregate efficiency. In particular, recent convergence dynamics of developed as well as developing countries are evaluated through the lens of a non-linear dynamic factor model and a clustering algorithm for panel data. This framework allows us to examine key economic phenomena such as technological heterogeneity and multiple equilibria. In this context, the book provides a succinct review of the recent club convergence literature, a comparative view of developed and developing countries, and a tutorial on how to implement the club convergence framework in the statistical software Stata.
The book deals with functions of many variables: differentiation and integration, and extrema, with a number of digressions to related subjects such as curves, surfaces, and Morse theory. The background needed for understanding the examples, and how to compute in Mathematica®, is also discussed.
This Springer brief addresses the challenges encountered in the study of the optimization of time-nonhomogeneous Markov chains. It develops new insights and new methodologies for systems in which concepts such as stationarity, ergodicity, periodicity and connectivity do not apply. This brief introduces the novel concept of confluencity and applies a relative optimization approach. It develops a comprehensive theory for optimization of the long-run average of time-nonhomogeneous Markov chains. The book shows that confluencity is the most fundamental concept in optimization, and that relative optimization is more suitable for treating the systems under consideration than standard ideas of dynamic programming. Using confluencity and relative optimization, the author classifies states as confluent or branching and shows how the under-selectivity issue of the long-run average can be easily addressed, multi-class optimization implemented, and Nth biases and Blackwell optimality conditions derived. These results are presented in a book for the first time and so may enhance the understanding of optimization and motivate new research ideas in the area.
You may like...
* Jump into JMP Scripting, Second Edition… (Wendy Murphrey, Rosemary Lucas), Hardcover, R1,649
* Spatial Regression Analysis Using… (Daniel A. Griffith, Yongwan Chun, …), Paperback, R3,120
* SAS for Mixed Models - Introduction and… (Walter W. Stroup, George A. Milliken, …), Hardcover, R3,218
* Portfolio and Investment Analysis with… (John B. Guerard, Ziwei Wang, …), Hardcover, R2,423
* An Introduction to Creating Standardized… (Todd Case, Yuting Tian), Hardcover, R1,643
* SAS Text Analytics for Business… (Teresa Jade, Biljana Belamaric-Wilsey, …), Hardcover, R2,704