This book is dedicated to the systematization and development of models, methods, and algorithms for queuing systems with correlated arrivals. After first setting up the basic tools needed for the study of queuing theory, the authors concentrate on complicated systems: multi-server systems with phase type distribution of service time or single-server queues with arbitrary distribution of service time or semi-Markovian service. They pay special attention to practically important retrial queues, tandem queues, and queues with unreliable servers. Mathematical models of networks and queuing systems are widely used for the study and optimization of various technical, physical, economic, industrial, and administrative systems, and this book will be valuable for researchers, graduate students, and practitioners in these domains.
This open access book presents a set of basic techniques for estimating the benefit of IT development projects and portfolios. It also offers methods for monitoring how much of that estimated benefit is being achieved during projects. Readers can then use these benefit estimates together with cost estimates to create a benefit/cost index to help them decide which functionalities to send into construction and in what order. This allows them to focus on constructing the functionality that offers the best value for money at an early stage. Although benefits management involves a wide range of activities in addition to estimation and monitoring, the techniques in this book provide a clear guide to achieving what has always been the goal of project and portfolio stakeholders: developing systems that produce as much usefulness and value as possible for the money invested. The techniques can also help deal with vicarious motives and obstacles that prevent this from happening. The book equips readers to recognize when a project budget should not be spent in full and the resources allocated elsewhere in the portfolio instead. It also provides development managers and upper management with common ground as a basis for making informed decisions.
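As a concrete illustration of the benefit/cost index idea (the figures and feature names below are invented for the example and are not taken from the book), candidate functionalities can be ranked by estimated benefit per unit of cost and built in that order:

```r
# Hypothetical data: estimated benefit and construction cost per functionality.
features <- data.frame(
  name    = c("search", "reporting", "single_sign_on", "audit_log"),
  benefit = c(120, 60, 90, 20),   # estimated benefit, in a common unit
  cost    = c(30, 40, 25, 15)     # estimated construction cost
)
features$index <- features$benefit / features$cost    # benefit/cost index
features[order(features$index, decreasing = TRUE), ]  # suggested build order
```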
This book collects peer-reviewed contributions on modern statistical methods and topics, stemming from the third workshop on Analytical Methods in Statistics, AMISTAT 2019, held in Liberec, Czech Republic, on September 16-19, 2019. Real-life problems demand statistical solutions, which in turn require new and profound mathematical methods. As such, the book is not only a collection of solved problems but also a source of new methods and their practical extensions. The authoritative contributions focus on analytical methods in statistics, asymptotics, estimation and Fisher information, robustness, stochastic models and inequalities, and other related fields; further, they address e.g. average autoregression quantiles, neural networks, weighted empirical minimum distance estimators, implied volatility surface estimation, the Grenander estimator, non-Gaussian component analysis, meta learning, and high-dimensional errors-in-variables models.
This book provides a general introduction to Sequential Monte Carlo (SMC) methods, also known as particle filters. These methods have become a staple for the sequential analysis of data in such diverse fields as signal processing, epidemiology, machine learning, population ecology, quantitative finance, and robotics. The coverage is comprehensive, ranging from the underlying theory to computational implementation, methodology, and diverse applications in various areas of science. This is achieved by describing SMC algorithms as particular cases of a general framework, which involves concepts such as Feynman-Kac distributions, and tools such as importance sampling and resampling. This general framework is used consistently throughout the book. Extensive coverage is provided on sequential learning (filtering, smoothing) of state-space (hidden Markov) models, as this remains an important application of SMC methods. More recent applications, such as parameter estimation of these models (through e.g. particle Markov chain Monte Carlo techniques) and the simulation of challenging probability distributions (in e.g. Bayesian inference or rare-event problems), are also discussed. The book may be used either as a graduate text on Sequential Monte Carlo methods and state-space modeling, or as a general reference work on the area. Each chapter includes a set of exercises for self-study, a comprehensive bibliography, and a "Python corner," which discusses the practical implementation of the methods covered. In addition, the book comes with an open source Python library, which implements all the algorithms described in the book, and contains all the programs that were used to perform the numerical experiments.
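To give a flavour of the importance sampling and resampling steps mentioned above, the following is a minimal bootstrap particle filter in base R for a toy linear-Gaussian state-space model; the model, parameter values and particle count are illustrative assumptions of this sketch, not the book's own code (the library accompanying the book is written in Python):

```r
# Toy state-space model: x_t = 0.9*x_{t-1} + N(0,1), y_t = x_t + N(0,1)
set.seed(1)
n_obs <- 100; n_part <- 1000
x <- numeric(n_obs); y <- numeric(n_obs)
x[1] <- rnorm(1); y[1] <- x[1] + rnorm(1)
for (t in 2:n_obs) { x[t] <- 0.9 * x[t - 1] + rnorm(1); y[t] <- x[t] + rnorm(1) }

particles <- rnorm(n_part)            # particles for x_1 drawn from its prior
filt_mean <- numeric(n_obs)
for (t in 1:n_obs) {
  if (t > 1) particles <- 0.9 * particles + rnorm(n_part)   # propagate state equation
  w <- dnorm(y[t], mean = particles, sd = 1)                # importance weights given y_t
  w <- w / sum(w)
  filt_mean[t] <- sum(w * particles)                        # estimate of E[x_t | y_1:t]
  particles <- sample(particles, n_part, replace = TRUE, prob = w)  # resampling
}
plot(x, type = "l"); lines(filt_mean, col = "red")          # true state vs. filtered mean
```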
This book introduces the basic methodologies for successful data analytics. Matrix optimization and approximation are explained in detail and extensively applied to dimensionality reduction by principal component analysis and multidimensional scaling. Diffusion maps and spectral clustering are derived as powerful tools. The methodological overlap between data science and machine learning is emphasized by demonstrating how data science is used for classification as well as supervised and unsupervised learning.
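As a small, self-contained illustration of dimensionality reduction by principal component analysis via the singular value decomposition (a toy example on a built-in data set, not material from the book):

```r
X <- scale(iris[, 1:4], center = TRUE, scale = FALSE)  # centred 150 x 4 data matrix
s <- svd(X)
scores_2d <- s$u[, 1:2] %*% diag(s$d[1:2])   # coordinates on the first two PCs
var_explained <- s$d^2 / sum(s$d^2)          # proportion of variance per component
round(var_explained, 3)
plot(scores_2d, col = iris$Species, xlab = "PC1", ylab = "PC2")  # matches prcomp()
```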
The goal of this book is to gather in a single work the most relevant concepts related to optimization methods, showing how such theories and methods can be addressed using the open source, multi-platform R tool. Modern optimization methods, also known as metaheuristics, are particularly useful for solving complex problems for which no specialized optimization algorithm has been developed. These methods often yield high quality solutions with a more reasonable use of computational resources (e.g. memory and processing effort). Examples of popular modern methods discussed in this book are: simulated annealing; tabu search; genetic algorithms; differential evolution; and particle swarm optimization. This book is suitable for undergraduate and graduate students in computer science, information technology, and related areas, as well as data analysts interested in exploring modern optimization methods using R. This new edition integrates the latest R packages through text and code examples. It also discusses new topics, such as: the impact of artificial intelligence and business analytics in modern optimization tasks; the creation of interactive Web applications; usage of parallel computing; and more modern optimization algorithms (e.g., iterated racing, ant colony optimization, grammatical evolution).
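Purely as a taste of one of the metaheuristics listed above, simulated annealing is available in base R through optim(); the test function, starting point and tuning values below are assumptions made for the example, not the book's implementations:

```r
# 2-D Rastrigin test function: global minimum 0 at (0, 0), many local minima.
rastrigin <- function(x) 10 * length(x) + sum(x^2 - 10 * cos(2 * pi * x))
set.seed(42)
res <- optim(par = c(3, -2), fn = rastrigin, method = "SANN",
             control = list(maxit = 20000, temp = 10))
res$par     # should land near the global optimum at (0, 0)
res$value
```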
This book presents the latest research on the statistical analysis of functional, high-dimensional and other complex data, addressing methodological and computational aspects, as well as real-world applications. It covers topics like classification, confidence bands, density estimation, depth, diagnostic tests, dimension reduction, estimation on manifolds, high- and infinite-dimensional statistics, inference on functional data, networks, operatorial statistics, prediction, regression, robustness, sequential learning, small-ball probability, smoothing, spatial data, testing, and topological object data analysis, and includes applications in automobile engineering, criminology, drawing recognition, economics, environmetrics, medicine, mobile phone data, spectrometrics and urban environments. The book gathers selected, refereed contributions presented at the Fifth International Workshop on Functional and Operatorial Statistics (IWFOS) in Brno, Czech Republic. The workshop was originally to be held on June 24-26, 2020, but had to be postponed as a consequence of the COVID-19 pandemic. Initiated by the Working Group on Functional and Operatorial Statistics at the University of Toulouse in 2008, the IWFOS workshops provide a forum to discuss the latest trends and advances in functional statistics and related fields, and foster the exchange of ideas and international collaboration in the field.
This book discusses the development of the Rosenbrock-Wanner methods from the origins of the idea to current research, with the stable and efficient numerical solution of ordinary differential equations and differential-algebraic systems of equations still in focus. The reader gets a comprehensive insight into the classical methods as well as into the development and properties of novel W-methods, two-step and exponential Rosenbrock methods. In addition, descriptive applications from the fields of water and hydrogen network simulation and visual computing are presented.
This book introduces readers to various signal processing models that have been used in analyzing periodic data, and discusses the statistical and computational methods involved. Signal processing can broadly be considered to be the recovery of information from physical observations. The received signals are usually disturbed by thermal, electrical, atmospheric or intentional interferences, and due to their random nature, statistical techniques play an important role in their analysis. Statistics is also used in the formulation of appropriate models to describe the behavior of systems, the development of appropriate techniques for estimation of model parameters and the assessment of the model performances. Analyzing different real-world data sets to illustrate how different models can be used in practice, and highlighting open problems for future research, the book is a valuable resource for senior undergraduate and graduate students specializing in mathematics or statistics.
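A minimal example of the kind of periodic-data problem described above (simulated data and parameter values chosen for illustration, not the book's data sets): estimating the frequency of a noisy sinusoid from the peak of its periodogram.

```r
set.seed(7)
n <- 512; true_freq <- 0.1                      # frequency in cycles per sample
t_idx <- 0:(n - 1)
y <- 2 * cos(2 * pi * true_freq * t_idx) + rnorm(n, sd = 1.5)
pgram <- Mod(fft(y))^2 / n                      # raw periodogram via the FFT
freqs <- (0:(n - 1)) / n
half  <- 2:(n %/% 2)                            # positive frequencies, excluding 0
freqs[half][which.max(pgram[half])]             # estimated frequency, close to 0.1
```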
The contributions gathered in this book focus on modern methods for statistical learning and modeling in data analysis and present a series of engaging real-world applications. The book covers numerous research topics, ranging from statistical inference and modeling to clustering and factorial methods, from directional data analysis to time series analysis and small area estimation. The applications reflect new analyses in a variety of fields, including medicine, finance, engineering, marketing and cyber risk. The book gathers selected and peer-reviewed contributions presented at the 12th Scientific Meeting of the Classification and Data Analysis Group of the Italian Statistical Society (CLADAG 2019), held in Cassino, Italy, on September 11-13, 2019. CLADAG promotes advanced methodological research in multivariate statistics with a special focus on data analysis and classification, and supports the exchange and dissemination of ideas, methodological concepts, numerical methods, algorithms, and computational and applied results. This book, true to CLADAG's goals, is intended for researchers and practitioners who are interested in the latest developments and applications in the field of data analysis and classification.
This book presents the state of the art on numerical semigroups and related subjects, offering different perspectives on research in the field and including results and examples that are very difficult to find in a structured exposition elsewhere. The contents comprise the proceedings of the 2018 INdAM "International Meeting on Numerical Semigroups", held in Cortona, Italy. Talks at the meeting centered not only on traditional types of numerical semigroups, such as Arf or symmetric, and their usual properties, but also on related types of semigroups, such as affine, Puiseux, Weierstrass, and primary, and their applications in other branches of algebra, including semigroup rings, coding theory, star operations, and Hilbert functions. The papers in the book reflect the variety of the talks and derive from research areas including Semigroup Theory, Factorization Theory, Algebraic Geometry, Combinatorics, Commutative Algebra, Coding Theory, and Number Theory. The book is intended for researchers and students who want to learn about recent developments in the theory of numerical semigroups and its connections with other research fields.
All the Essentials to Start Using Adaptive Designs in No Time
Compared to traditional clinical trial designs, adaptive designs often lead to increased success rates in drug development at reduced costs and time. Introductory Adaptive Trial Designs: A Practical Guide with R motivates newcomers to quickly and easily grasp the essence of adaptive designs as well as the foundations of adaptive design methods. The book reduces the mathematics to a minimum and makes the material as practical as possible. Instead of providing general, black-box commercial software packages, the author includes open-source R functions that enable readers to better understand the algorithms and customize the designs to meet their needs. Readers can run the simulations for all the examples and change the input parameters to see how each input parameter affects the simulation outcomes or design operating characteristics. Taking a learning-by-doing approach, this tutorial-style book guides readers on planning and executing various types of adaptive designs. It helps them develop the skills to begin using the designs immediately.
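In the spirit of the simulation-based approach described above (a generic sketch under assumed design parameters, not one of the author's R functions), the operating characteristics of a simple two-stage design with futility stopping can be estimated like this:

```r
# Two-arm, two-stage design: stop for futility at the interim if the one-sided
# p-value exceeds 'futility_p'; otherwise continue and test at 'final_alpha'.
simulate_two_stage <- function(delta, n1 = 50, n2 = 50, futility_p = 0.5,
                               final_alpha = 0.025, n_sims = 2000) {
  results <- replicate(n_sims, {
    x1 <- rnorm(n1, 0); y1 <- rnorm(n1, delta)                  # stage-1 data
    p1 <- t.test(y1, x1, alternative = "greater")$p.value
    if (p1 > futility_p) {
      c(stopped = 1, success = 0)                               # stopped early
    } else {
      x2 <- c(x1, rnorm(n2, 0)); y2 <- c(y1, rnorm(n2, delta))  # stage-2 data
      p2 <- t.test(y2, x2, alternative = "greater")$p.value
      c(stopped = 0, success = as.numeric(p2 < final_alpha))
    }
  })
  rowMeans(results)  # proportion stopped early and empirical success rate
}
simulate_two_stage(delta = 0)    # behaviour under the null (rough type I error)
simulate_two_stage(delta = 0.4)  # empirical power under a moderate effect
```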
This book features selected papers presented at the 2nd International Conference on Advanced Computing Technologies and Applications, held at SVKM's Dwarkadas J. Sanghvi College of Engineering, Mumbai, India, from 28 to 29 February 2020. Covering recent advances in next-generation computing, the book focuses on recent developments in intelligent computing, such as linguistic computing, statistical computing, data computing and ambient applications.
Master the syntax for working with R's plotting functions in graphics and stats in this easy reference to formatting plots. The approach in Visualizing Data in R 4 toward the application of formatting in ggplot() follows the structure of the formatting used by the plotting functions in graphics and stats. This book takes advantage of the new features added to R 4 where appropriate, including a refreshed color palette for charts, Cairo graphics with more fonts/symbols, and improved performance from grid graphics, including ggplot2 rendering speed. Visualizing Data in R 4 starts with an introduction and is then split into two parts and six appendices. Part I covers the function plot() and the ancillary functions you can use with plot(). You'll also see the functions par() and layout(), which provide for multiple plots on a page. Part II goes over the basics of using the functions qplot() and ggplot() in the package ggplot2. The functions qplot() and ggplot() produce more sophisticated-looking default plots than plot() does and are easier to use, but plot() is more flexible. Both plot() and ggplot() allow for many layers to a plot. The six appendices cover plots for contingency tables, plots for continuous variables, plots for data with a limited number of values, functions that generate multiple plots, plots for time series analysis, and some miscellaneous plots. Among the functions covered in the appendices are those that generate histograms, bar charts, pie charts, box plots, and heatmaps.
What You Will Learn
- Use R to create informative graphics
- Master plot(), qplot(), and ggplot()
- Discover the canned graphics functions in stats and graphics
- Format plots generated by plot() and ggplot()
Who This Book Is For
Those in data science who use R. Some prior experience with R or data science is recommended.
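As a minimal side-by-side sketch of the two approaches the book compares (a toy example on a built-in data set, not taken from the book), the same scatter plot can be drawn with base plot() and with ggplot():

```r
library(ggplot2)

# Base graphics version
plot(mtcars$wt, mtcars$mpg, pch = 19, col = "steelblue",
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon",
     main = "Fuel economy vs. weight")

# ggplot2 version of the same plot
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point(colour = "steelblue") +
  labs(x = "Weight (1000 lbs)", y = "Miles per gallon",
       title = "Fuel economy vs. weight")
```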
Fulfilling the need for a practical user's guide, Statistics in MATLAB: A Primer provides an accessible introduction to the latest version of MATLAB (R) and its extensive functionality for statistics. Assuming a basic knowledge of statistics and probability as well as a fundamental understanding of linear algebra concepts, this book:
- Covers capabilities in the main MATLAB package, the Statistics Toolbox, and the student version of MATLAB
- Presents examples of how MATLAB can be used to analyze data
- Offers access to a companion website with data sets and additional examples
- Contains figures and visual aids to assist in application of the software
- Explains how to determine what method should be used for analysis
Statistics in MATLAB: A Primer is an ideal reference for undergraduate and graduate students in engineering, mathematics, statistics, economics, biostatistics, and computer science. It is also appropriate for a diverse professional market, making it a valuable addition to the libraries of researchers in statistics, computer science, data mining, machine learning, image analysis, signal processing, and engineering.
This book explores missing data techniques and provides a detailed and easy-to-read introduction to multiple imputation, covering the theoretical aspects of the topic and offering hands-on help with the implementation. It discusses the pros and cons of various techniques and concepts, including multiple imputation quality diagnostics, an important topic for practitioners. It also presents current research and new, practically relevant developments in the field, and demonstrates the use of recent multiple imputation techniques designed for situations where distributional assumptions of the classical multiple imputation solutions are violated. In addition, the book features numerous practical tutorials for widely used R software packages to generate multiple imputations (norm, pan and mice). The provided R code and data sets allow readers to reproduce all the examples and enhance their understanding of the procedures. This book is intended for social and health scientists and other quantitative researchers who analyze incompletely observed data sets, as well as master's and PhD students with a sound basic knowledge of statistics.
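For orientation, a minimal multiple-imputation run with the mice package might look like the following; the choice of m, the bundled nhanes example data and the analysis model are illustrative, not recommendations from the book:

```r
library(mice)
imp <- mice(nhanes, m = 5, seed = 123, printFlag = FALSE)  # 5 imputed data sets
fit <- with(imp, lm(chl ~ age + bmi))                      # analyse each completed set
summary(pool(fit))                                         # pool results (Rubin's rules)
```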
This book discusses enterprise hierarchies, which view a target system with varying degrees of abstraction. These requirement refinement hierarchies can be represented by goal models. It is important to verify that such hierarchies capture the same set of rationales and intentions and are in mutual agreement with the requirements of the system being designed. The book also explores how hierarchies manifest themselves in the real world by undertaking a data mining exercise and observing the interactions within an enterprise. The inherent sequence-agnostic property of goal models prevents requirement analysts from performing compliance checks in this phase as compliance rules are generally embedded with temporal information. The studies discussed here seek to extract finite state models corresponding to goal models with the help of model transformation. The i*ToNuSMV tool implements one such algorithm to perform model checking on i* models. In turn, the AFSR framework provides a new goal model nomenclature that associates semantics with individual goals. It also provides a reconciliation machinery that detects entailment or consistency conflicts within goal models and suggests corrective measures to resolve such conflicts. The authors also discuss how the goal maintenance problem can be mapped to the state-space search problem, and how A* search can be used to identify an optimal goal model configuration that is free from all conflicts. In conclusion, the authors discuss how the proposed research frameworks can be extended and applied in new research directions. The GRL2APK framework presents an initiative to develop mobile applications from goal models using reusable code component repositories.
This is the first textbook that allows readers who may be unfamiliar with matrices to understand a variety of multivariate analysis procedures in matrix forms. By explaining which models underlie particular procedures and what objective function is optimized to fit the model to the data, it enables readers to rapidly comprehend multivariate data analysis. Arranged so that readers can intuitively grasp the purposes for which multivariate analysis procedures are used, the book also offers clear explanations of those purposes, with numerical examples preceding the mathematical descriptions. Supporting the modern matrix formulations by highlighting singular value decomposition among theorems in matrix algebra, this book is useful for undergraduate students who have already learned introductory statistics, as well as for graduate students and researchers who are not familiar with matrix-intensive formulations of multivariate data analysis. The book begins by explaining fundamental matrix operations and the matrix expressions of elementary statistics. Then, it offers an introduction to popular multivariate procedures, with each chapter featuring increasingly advanced levels of matrix algebra. Further, the book includes six chapters on advanced procedures, covering advanced matrix operations and recently proposed multivariate procedures, such as sparse estimation, together with a clear explication of the differences between principal component and factor analysis solutions. In a nutshell, this book allows readers to gain an understanding of the latest developments in multivariate data science.
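The point that principal component and factor analysis solutions differ can be seen quickly in R (a toy comparison on a built-in data set, not the book's worked example):

```r
X <- scale(swiss)                    # six socio-economic indicators, standardised
pca <- prcomp(X)
fa  <- factanal(X, factors = 2)      # maximum-likelihood factor analysis
round(pca$rotation[, 1:2], 2)        # loadings of the first two principal components
print(fa$loadings, cutoff = 0)       # factor loadings: related but not identical
```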
This book presents the best papers from the 1st International Conference on Mathematical Research for Blockchain Economy (MARBLE) 2019, held in Santorini, Greece. While most blockchain conferences and forums are dedicated to business applications, product development or Initial Coin Offering (ICO) launches, this conference focused on the mathematics behind blockchain to bridge the gap between practice and theory. Every year, thousands of blockchain projects are launched and circulated in the market, and there is a tremendous wealth of blockchain applications, from finance to healthcare, education, media, logistics and more. However, due to theoretical and technical barriers, most of these applications are impractical for use in a real-world business context. The papers in this book reveal the challenges and limitations, such as scalability, latency, privacy and security, and showcase solutions and developments to overcome them.
This book covers applications of R to the general discipline of radiation dosimetry and to the specific areas of luminescence dosimetry, luminescence dating, and radiation protection dosimetry. It features more than 90 detailed worked examples of R code fully integrated into the text, with extensive annotations. The book shows how researchers can use available R packages to analyze their experimental data, and how to extract the various parameters that mathematically describe the luminescence signals. In each chapter, the theory behind the subject is summarized, and references are given from the literature, so that researchers can look up the details of the theory and the relevant experiments. Several chapters are dedicated to Monte Carlo methods, which are used to simulate the luminescence processes during the irradiation, heating, and optical stimulation of solids, for a wide variety of materials. This book will be useful to those who use the tools of luminescence dosimetry, including physicists, geologists, and archaeologists, as well as to all researchers who use radiation in their research.
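As a generic stand-in for the kind of parameter extraction described above (simulated data and a simple decay model chosen for illustration, not one of the book's worked examples), a luminescence-like signal can be fitted with nls():

```r
set.seed(3)
t_s    <- seq(0, 50, by = 0.5)                                 # stimulation time (s)
signal <- 100 * exp(-0.15 * t_s) + rnorm(length(t_s), sd = 2)  # noisy decay curve
fit <- nls(signal ~ A * exp(-lambda * t_s),
           start = list(A = 80, lambda = 0.1))                 # initial guesses
coef(fit)                                        # estimates near A = 100, lambda = 0.15
plot(t_s, signal, pch = 20); lines(t_s, fitted(fit), col = "red")
```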
This book is a short, focused introduction to Mathematica, the comprehensive software system for doing mathematics. Written for the novice, this engaging book contains an explanation of essential Mathematica commands, as well as the rich Mathematica interface for preparing polished technical documents. Mathematica can be used to graph functions, solve equations, perform statistical tests, and much more. In addition, it incorporates word processing and desktop publishing features for combining mathematical computations with text and graphics, and producing polished, integrated, interactive documents. You can even use it to create documents and graphics for the Web. This book explains everything you need to know to begin using Mathematica to do all these things and more. Written for Mathematica version 3, this book can also be used with earlier versions of the software. Intermediate and advanced users may even find useful information here, especially if they are making the switch to version 3 from an earlier version.
This volume presents the latest advances in statistics and data science, including theoretical, methodological and computational developments and practical applications related to classification and clustering, data gathering, exploratory and multivariate data analysis, statistical modeling, and knowledge discovery and seeking. It includes contributions on analyzing and interpreting large, complex and aggregated datasets, and highlights numerous applications in economics, finance, computer science, political science and education. It gathers a selection of peer-reviewed contributions presented at the 16th Conference of the International Federation of Classification Societies (IFCS 2019), which was organized by the Greek Society of Data Analysis and held in Thessaloniki, Greece, on August 26-29, 2019.
Want to use the power of R sooner rather than later? Don't have time to plow through wordy texts and online manuals? Use this book for quick, simple code to get your projects up and running. It includes code and examples applicable to many disciplines. Written in everyday language with a minimum of complexity, each chapter provides the building blocks you need to fit R's astounding capabilities to your analytics, reporting, and visualization needs. CRAN Recipes recognizes how needless jargon and complexity get in your way. Busy professionals need simple examples and intuitive descriptions; side trips and meandering philosophical discussions are left for other books. Here R scripts are condensed, to the extent possible, to copy-paste-run format. Chapters and examples are structured by purpose rather than by particular functions (e.g., "dirty data cleanup" rather than the R package name "janitor"). Everyday language eliminates the need to know functions/packages in advance.
What You Will Learn
- Carry out input/output, visualizations, data munging, manipulations at the group level, and quick data exploration
- Handle forecasting (multivariate, time series, logistic regression, Facebook's Prophet, and others)
- Use text analytics, sampling, financial analysis, and advanced pattern matching (regex)
- Manipulate data using dplyr: filter, sort, summarize, add new fields to datasets, and apply powerful IF functions
- Create combinations or subsets of files using joins
- Write efficient code using pipes to eliminate intermediate steps (magrittr)
- Work with string/character manipulation of all types (stringr)
- Discover counts, patterns, and how to locate whole words
- Do wild-card matching, extraction, and invert-match
- Work with dates using lubridate
- Fix dirty data, apply attractive formatting, and avoid bad habits
Who This Book Is For
Programmers/data scientists with at least some prior exposure to R.
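A tiny pipeline on the built-in mtcars data (an example of this kind of workflow, not code from the book) shows the filter/group/summarise verbs and the pipe listed above:

```r
library(dplyr)

mtcars %>%
  filter(hp > 100) %>%                  # keep cars with more than 100 hp
  mutate(wt_kg = wt * 453.6) %>%        # add a new field (weight in kg)
  group_by(cyl) %>%                     # work at the group level
  summarise(n = n(),
            mean_mpg = mean(mpg),
            mean_wt_kg = mean(wt_kg)) %>%
  arrange(desc(mean_mpg))               # sort the summary
```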
Now in its second edition, Text Analysis with R provides a practical introduction to computational text analysis using the open source programming language R. R is an extremely popular programming language, used throughout the sciences; due to its accessibility, R is now used increasingly in other research areas. In this volume, readers immediately begin working with text, and each chapter examines a new technique or process, allowing readers to obtain a broad exposure to core R procedures and a fundamental understanding of the possibilities of computational text analysis at both the micro and the macro scale. Each chapter builds on its predecessor as readers move from small scale "microanalysis" of single texts to large scale "macroanalysis" of text corpora, and each concludes with a set of practice exercises that reinforce and expand upon the chapter lessons. The book's focus is on making the technical palatable and making the technical useful and immediately gratifying. Text Analysis with R is written with students and scholars of literature in mind but will be applicable to other humanists and social scientists wishing to extend their methodological toolkit to include quantitative and computational approaches to the study of text. Computation provides access to information in text that readers simply cannot gather using traditional qualitative methods of close reading and human synthesis. This new edition features two new chapters: one that introduces dplyr and tidyr in the context of parsing and analyzing dramatic texts to extract speaker and receiver data, and one on sentiment analysis using the syuzhet package. It is also filled with updated material in every chapter to integrate new developments in the field, current practices in R style, and the use of more efficient algorithms.
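At the micro scale the book starts from, a first step such as ranking word frequencies needs only a few lines of base R (a toy snippet for illustration, not code from the book):

```r
text  <- "To be, or not to be, that is the question"
words <- unlist(strsplit(tolower(text), "[^a-z']+"))  # lower-case and tokenise
words <- words[words != ""]                           # drop empty tokens
freq  <- sort(table(words), decreasing = TRUE)
head(freq)                                            # "to" and "be" top the list
```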
The peer-reviewed contributions gathered in this book address methods, software and applications of statistics and data science in the social sciences. The data revolution in social science research has not only produced new business models, but has also provided policymakers with better decision-making support tools. In this volume, statisticians, computer scientists and experts on social research discuss the opportunities and challenges of the social data revolution in order to pave the way for addressing new research problems. The respective contributions focus on complex social systems and current methodological advances in extracting social knowledge from large data sets, as well as modern social research on human behavior and society using large data sets. Moreover, they analyze integrated systems designed to take advantage of new social data sources, and discuss quality-related issues. The papers were originally presented at the 2nd International Conference on Data Science and Social Research, held in Milan, Italy, on February 4-5, 2019.