This volume collects the latest methodological and applied contributions on functional, high-dimensional, and other complex data, on related statistical models and tools, and on operator-based statistics. It contains selected and refereed contributions presented at the Fourth International Workshop on Functional and Operatorial Statistics (IWFOS 2017), held in A Coruña, Spain, from 15 to 17 June 2017. The series of IWFOS workshops was initiated by the Working Group on Functional and Operatorial Statistics at the University of Toulouse in 2008. Since then, many of the major advances in functional statistics and related fields have been periodically presented and discussed at the IWFOS workshops.
The book describes the process of health economic evaluation and modelling for cost-effectiveness analysis, particularly from the perspective of a Bayesian statistical approach. Relevant theory and introductory concepts are presented using practical examples and two running case studies. The book also describes in detail how to perform health economic evaluations using the R package BCEA (Bayesian Cost-Effectiveness Analysis). BCEA can be used to post-process the results of a Bayesian cost-effectiveness model and to perform advanced analyses, producing standardised and highly customisable outputs. The book presents all the features of the package, including its many functions and their practical application, as well as its user-friendly web interface. It is a valuable resource for statisticians and practitioners working in health economics who want to simplify and standardise their workflow, for example in the preparation of dossiers in support of marketing authorisation, or of academic and scientific publications.
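BCEA itself is an R package, so the following is only a language-neutral illustration of the kind of quantities such a package post-processes: a minimal Python sketch, with entirely hypothetical posterior draws, of the incremental cost-effectiveness ratio (ICER) and a cost-effectiveness acceptability curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for two interventions: effectiveness (QALYs)
# and cost per patient. In practice these come from a fitted Bayesian model,
# not from a random number generator as here.
e_new, c_new = rng.normal(0.58, 0.05, 5000), rng.normal(12000, 900, 5000)
e_old, c_old = rng.normal(0.55, 0.05, 5000), rng.normal(10500, 800, 5000)

delta_e, delta_c = e_new - e_old, c_new - c_old

# ICER: ratio of expected incremental cost to expected incremental effect.
icer = delta_c.mean() / delta_e.mean()
print(f"ICER: {icer:,.0f} per QALY gained")

# Acceptability curve: probability the new intervention is cost-effective
# at each willingness-to-pay threshold k, i.e. P(k * delta_e - delta_c > 0).
for k in (20000, 30000, 50000):
    prob = np.mean(k * delta_e - delta_c > 0)
    print(f"P(cost-effective at k={k:,}): {prob:.2f}")
```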
This book offers a concise and gentle introduction to finite element programming in Python based on the popular FEniCS software library. Using a series of examples, including the Poisson equation, the equations of linear elasticity, the incompressible Navier-Stokes equations, and systems of nonlinear advection-diffusion-reaction equations, it guides readers through the essential steps of quickly solving a PDE in FEniCS: how to define a finite element variational problem, how to set boundary conditions, how to solve linear and nonlinear systems, and how to visualize solutions and structure finite element Python programs. This book is open access under a CC BY license.
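As a flavor of the workflow the book teaches, here is a condensed sketch of its canonical first example, the Poisson equation; the mesh size and boundary data are arbitrary choices, and exact API details vary somewhat across FEniCS versions.

```python
# Minimal sketch of solving -Laplace(u) = f on the unit square in FEniCS.
from fenics import *

mesh = UnitSquareMesh(8, 8)                  # discretize the unit square
V = FunctionSpace(mesh, "P", 1)              # linear Lagrange elements

u_D = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)
bc = DirichletBC(V, u_D, "on_boundary")      # Dirichlet boundary condition

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(-6.0)
a = dot(grad(u), grad(v)) * dx               # bilinear form of the problem
L = f * v * dx                               # linear form (right-hand side)

u = Function(V)
solve(a == L, u, bc)                         # assemble and solve the system
```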
The purpose of this handbook is to allow users to learn and master the mathematics software package MATLAB®, as well as to serve as a quick reference to some of the most frequently used instructions in the package. A unique feature of this handbook is that it can be used by novice and experienced users alike. For experienced users, it has four chapters with examples and applications in engineering, finance, physics, and optimization. Exercises are included, along with solutions available on the book's web page for interested readers who wish to gain a deeper understanding of MATLAB.

Features:
- Covers both MATLAB and an introduction to Simulink
- Covers the use of GUIs in MATLAB and Simulink
- Offers downloadable examples and programs from the handbook's website
- Provides an introduction to object-oriented programming using MATLAB
- Includes applications from many areas
- Includes the creation of executable files for MATLAB programs and Simulink models
This book constitutes the refereed proceedings of the 19th International Conference on Distributed Computer and Communication Networks, DCCN 2016, held in Moscow, Russia, in November 2016. The 50 revised full papers and 6 revised short papers presented were carefully reviewed and selected from 141 submissions. The papers cover the following topics: computer and communication network architecture optimization; control in computer and communication networks; performance and QoS/QoE evaluation in wireless networks; analytical modeling and simulation of next-generation communications systems; applications of queuing theory and reliability theory in computer networks; wireless 4G/5G networks and cm- and mm-wave radio technologies; RFID technology and its application in intelligent transportation networks; the internet of things, wearables, and applications of distributed information systems; probabilistic and statistical models in information systems; mathematical modeling of high-tech systems; mathematical modeling and control problems; and distributed and cloud computing systems and big data analytics.
The R Companion to Elementary Applied Statistics covers traditional applications treated in elementary statistics courses, as well as some additional methods that address questions arising during or after the application of commonly used methods. Beginning with basic tasks and computations with R, readers are then guided through bringing data into R, manipulating the data as needed, performing common statistical computations and elementary exploratory data analysis tasks, preparing customized graphics, and taking advantage of R for a wide range of methods that find use in many elementary applications of statistics.

Features:
- Requires no familiarity with R or programming to begin using this book.
- Can be used as a resource for a project-based elementary applied statistics course, or by researchers and professionals who wish to delve more deeply into R.
- Contains an extensive array of examples illustrating both the use of pre-packaged routines and the development of individualized code.
- Presents quite a few methods that may be considered non-traditional or advanced.
- Includes accompanying, carefully documented script files containing code for all examples presented, and more.

R is a powerful and free product that is gaining popularity across the scientific community in both the professional and academic arenas. The statistical methods discussed in this book are used to introduce the fundamentals of using R functions and to provide ideas for developing further skills in writing R code, illustrated through an extensive collection of examples.

About the Author: Christopher Hay-Jahans received his Doctor of Arts in mathematics from Idaho State University in 1999. After spending three years at the University of South Dakota, he moved to Juneau, Alaska, in 2002, where he has taught a wide range of undergraduate courses at the University of Alaska Southeast.
This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, Minitab, and SAS. Every chapter engages the reader with data on individual stocks, stock indices, options, and futures. Statistics is studied and used to learn how to examine, analyze, and understand a data set of particular interest. Among the more popular statistical programs developed for analyzing data sets are SAS, SPSS, and Minitab; of those, this textbook uses Minitab and SAS. One of the main reasons to use Minitab is that it is the easiest to use among the popular statistical programs. SAS is covered because it is the leading statistical package used in industry. The much less costly and ubiquitous Microsoft Excel is also used for statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities extend to about 90 percent of the statistical analysis done in the business world. Much of the statistical analysis is demonstrated using Excel, with the analysis and outcomes double-checked using Minitab and SAS, which are also helpful for analytical methods that are not possible or practical to carry out in Excel.
Intuitive Probability and Random Processes using MATLAB® is an introduction to probability and random processes that merges theory with practice. Based on the author's belief that only "hands-on" experience with the material can promote intuitive understanding, the approach is to motivate the need for theory using MATLAB examples, followed by theory and analysis, and finally descriptions of "real-world" examples to acquaint the reader with a wide variety of applications. The latter is intended to answer the usual question "Why do we have to study this?" Other salient features are:
- heavy reliance on computer simulation for illustration and student exercises
- the incorporation of MATLAB programs and code segments
- discussion of discrete random variables followed by continuous random variables to minimize confusion
- summary sections at the beginning of each chapter
- in-line equation explanations
- warnings on common errors and pitfalls
- over 750 problems designed to help the reader assimilate and extend the concepts

The book is intended for undergraduate and first-year graduate students in engineering. Practicing engineers, as well as others with the appropriate mathematical background, will also benefit from it.

About the Author: Steven M. Kay is a Professor of Electrical Engineering at the University of Rhode Island and a leading expert in signal processing. He has received the Education Award "for outstanding contributions in education and in writing scholarly books and texts..." from the IEEE Signal Processing Society and has been listed among the 250 most cited researchers in the world in engineering.
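The book's examples are written in MATLAB; purely as an illustration of the simulation-first approach it advocates, here is an analogous check of a simple probability in Python (a made-up example, not one from the book).

```python
import numpy as np

# Hands-on style: verify a theoretical result by Monte Carlo simulation.
# For independent X, Y ~ Uniform(0, 1), P(X + Y > 1) = 1/2 by symmetry;
# the simulated frequency should agree with the theory.
rng = np.random.default_rng(1)
n = 100_000
x, y = rng.uniform(size=n), rng.uniform(size=n)

estimate = np.mean(x + y > 1)
print(f"simulated: {estimate:.4f}  theoretical: 0.5")
```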
This book discusses the problem of model choice when the statistical models are separate, also called nonnested. Chapter 1 provides an introduction, motivating examples and a general overview of the problem. Chapter 2 presents the classical or frequentist approach to the problem as well as several alternative procedures and their properties. Chapter 3 explores the Bayesian approach, the limitations of the classical Bayes factors and the proposed alternative Bayes factors to overcome these limitations. It also discusses a significance Bayesian procedure. Lastly, Chapter 4 examines the pure likelihood approach. Various real-data examples and computer simulations are provided throughout the text.
A unique feature of this book is its low threshold: it is textually simple and at the same time full of self-assessment opportunities. Other unique features are the succinctness of the chapters (3 to 6 pages each), the inclusion of complete command texts for the statistical methodologies reviewed, and the omission of dull scientific passages that impose an unnecessary burden on busy and jaded professionals. For readers requesting more background, theoretical, and mathematical information, each chapter includes a note section with references. The first edition in 2010 was the first publication of a complete overview of SPSS methodologies for medical and health statistics; well over 100,000 copies of various chapters were sold within the first year of publication. There were four reasons for a rewrite. First, many important comments from readers urged one. Second, SPSS has produced many updates and upgrades with relevant novel and improved methodologies. Third, the authors felt that the chapter texts needed some improvements for better readability: chapters are now classified according to the outcome data, helpful for choosing your analysis rapidly, and a schematic overview of data and explanatory graphs have been added. Fourth, current data are increasingly complex, and many important methods for their analysis were missing from the first edition. For that latter purpose, some more advanced methods seemed unavoidable, such as hierarchical loglinear methods, gamma and Tweedie regressions, and random intercept analyses. So that the contents of the book remain covered by the title, the authors renamed the book SPSS for Starters and 2nd Levelers. Special care was, nonetheless, taken to keep things as simple as possible, and simple menu commands are given. The arithmetic is still at no more than a high-school level. Step-by-step analyses of different statistical methodologies are given with the help of 60 SPSS data files available through the internet. Mindful of the lack of time of this busy group of readers, the authors have made every effort to produce a text as succinct as possible.
The objective of Kai Zhang's research is to assess existing process monitoring and fault detection (PM-FD) methods. His aim is to provide suggestions and guidance for choosing appropriate PM-FD methods, since the performance assessment of PM-FD methods has become an area of interest in both academia and industry. The author first compares basic FD statistics, and then assesses different PM-FD methods for monitoring the key performance indicators of static processes, steady-state dynamic processes, and general dynamic processes including transient states. He validates the theoretical developments using both benchmark and real industrial processes.
This book provides new insights into the study of global environmental changes using ecoinformatics tools and the adaptive-evolutionary technology of geoinformation monitoring. Its main advantage is that it gathers and presents extensive interdisciplinary expertise in the parameterization of global biogeochemical cycles and other environmental processes in the context of globalization and sustainable development. In this regard, the crucial global problems concerning the dynamics of the nature-society system are considered, and the key problems of ensuring the system's sustainable development are studied. A new approach to the numerical modeling of the nature-society system is proposed, and results are provided on modeling the dynamics of the system's characteristics under scenarios of anthropogenic impacts on biogeochemical cycles, land ecosystems, and oceans. The main purpose of this book is to develop a universal guide to information-modeling technologies for assessing the function of environmental subsystems under various climatic and anthropogenic conditions.
Contingency tables arise in diverse fields, including the life sciences, education, and the social and political sciences, notably market research and opinion surveys. Their analysis plays an essential role in gaining insight into the structure of the quantities under consideration and in supporting decision making. Combining theory and applications, this book presents models and methods for the analysis of two- and multi-dimensional contingency tables. An excellent reference for advanced undergraduates, graduate students, and practitioners in statistics as well as the biosciences, social sciences, education, and economics, the work may also be used as a textbook for a course on categorical data analysis. Prerequisites are a basic background in statistical inference and knowledge of statistical software packages.
This book introduces advanced undergraduates, graduate students, and practitioners to statistical methods for ranking data. An important branch of nonparametric statistics is oriented towards ranking data. Rank correlation is defined through the notion of distance functions, and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, the EM algorithm, and factor analysis. The book deals with the statistical methods used for analyzing such data and provides a novel and unifying approach to hypothesis testing. The techniques described are illustrated with examples, and the statistical software is provided on the authors' website.
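As a small illustration of defining rank correlation through a distance function (with hypothetical rankings, not the book's code): Kendall's tau can be recovered from the Kendall distance D, the number of item pairs the two rankings order differently, via tau = 1 - 4D/(n(n-1)).

```python
from itertools import combinations
from scipy.stats import kendalltau

# Two judges rank the same five items (ranks 1..5); hypothetical data.
r1 = [1, 2, 3, 4, 5]
r2 = [2, 1, 3, 5, 4]

# Kendall distance: count the discordant pairs.
D = sum((r1[i] - r1[j]) * (r2[i] - r2[j]) < 0
        for i, j in combinations(range(len(r1)), 2))

n = len(r1)
tau_from_distance = 1 - 4 * D / (n * (n - 1))
tau_reference, _ = kendalltau(r1, r2)
print(tau_from_distance, tau_reference)  # both 0.6
```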
Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares problems, and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at the advanced undergraduate and graduate levels. A large bibliography is provided, which includes historical and review papers as well as recent research papers, making the book useful as a reference and a guide to further study and research.
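As a toy illustration of the two method families the book unifies (an assumed example, not code from the book), the following sketch solves the same sparse symmetric positive definite system with a direct factorization and with an iterative Krylov method.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, spsolve

# Toy 1-D Poisson matrix: tridiagonal, symmetric positive definite.
n = 1000
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = spsolve(A, b)        # direct method: sparse LU factorization
x_iter, info = cg(A, b)         # iterative method: conjugate gradient

# info == 0 signals convergence; the two solutions should agree closely.
print(info, np.linalg.norm(x_direct - x_iter))
```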
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance, as such choices allow experimenters to extract maximum information about the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups, such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, and treatment control designs, and to discuss the nature and availability of optimal covariate designs. In some situations, optimal estimates of both the ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the designs. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for the construction of optimum designs using Hadamard matrices, the Kronecker product, the Rao-Khatri product, and mixed orthogonal arrays, to name a few.
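As a minimal taste of one of the combinatorial ingredients mentioned (an illustration only, far removed from the actual design constructions), Sylvester's Kronecker-product recursion builds larger Hadamard matrices from smaller ones.

```python
import numpy as np

# Sylvester's construction: if H is a Hadamard matrix of order n,
# then kron(H2, H) is a Hadamard matrix of order 2n.
H2 = np.array([[1, 1],
               [1, -1]])

H = H2
for _ in range(2):               # build an order-8 Hadamard matrix
    H = np.kron(H2, H)

# Verify the defining property H H^T = n I.
n = H.shape[0]
assert np.array_equal(H @ H.T, n * np.eye(n))
print(H.shape)  # (8, 8)
```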
Describing novel mathematical concepts for recommendation engines, Realtime Data Mining: Self-Learning Techniques for Recommendation Engines features a sound mathematical framework unifying approaches based on control and learning theories, tensor factorization, and hierarchical methods. Furthermore, it presents promising results of numerous experiments on real-world data. The area of realtime data mining is currently developing at an exceptionally dynamic pace, and realtime data mining systems are the counterpart of today's "classic" data mining systems. Whereas the latter learn from historical data and then use them to deduce necessary actions, realtime analytics systems learn and act continuously and autonomously. In the vanguard of these new analytics systems are recommendation engines. They are principally found on the Internet, where all information is available in realtime and immediate feedback is guaranteed. This monograph appeals to computer scientists and specialists in machine learning, especially from the area of recommender systems, because it conveys a new way of realtime thinking by considering recommendation tasks as control-theoretic problems. Realtime Data Mining: Self-Learning Techniques for Recommendation Engines will also interest application-oriented mathematicians because it consistently combines some of the most promising mathematical areas, namely control theory, multilevel approximation, and tensor factorization.
This volume compiles the major results of participants in the Third International Conference in Network Analysis, held at the Higher School of Economics, Nizhny Novgorod, in May 2013, with the aim of initiating further joint research among different groups. The contributions in this book cover a broad range of topics relevant to the theory and practice of network analysis, including the reliability of complex networks, software, theory, methodology, and applications. Network analysis has become a major research topic over the last several years. The broad range of applications that can be described and analyzed by means of a network has brought together researchers and practitioners from numerous fields such as operations research, computer science, transportation, energy, biomedicine, computational neuroscience, and the social sciences. In addition, new approaches and computer environments such as parallel computing, grid computing, cloud computing, and quantum computing have helped to solve large-scale network optimization problems.
This pioneering book teaches readers to use R within four core analytical areas applicable to the Humanities: networks, text, geospatial data, and images. This book is also designed to be a bridge: between quantitative and qualitative methods, individual and collaborative work, and the humanities and social sciences. Humanities Data with R does not presuppose background programming experience. Early chapters take readers from R set-up to exploratory data analysis (continuous and categorical data, multivariate analysis, and advanced graphics with emphasis on aesthetics and facility). Following this, networks, geospatial data, image data, natural language processing and text analysis each have a dedicated chapter. Each chapter is grounded in examples to move readers beyond the intimidation of adding new tools to their research. Everything is hands-on: networks are explained using U.S. Supreme Court opinions, and low-level NLP methods are applied to short stories by Sir Arthur Conan Doyle. After working through these examples with the provided data, code and book website, readers are prepared to apply new methods to their own work. The open source R programming language, with its myriad packages and popularity within the sciences and social sciences, is particularly well-suited to working with humanities data. R packages are also highlighted in an appendix. This book uses an expanded conception of the forms data may take and the information it represents. The methodology will have wide application in classrooms and self-study for the humanities, but also for use in linguistics, anthropology, and political science. Outside the classroom, this intersection of humanities and computing is particularly relevant for research and new modes of dissemination across archives, museums and libraries.
Visualizing data is an essential part of any data analysis. Modern computing developments have led to big improvements in graphic capabilities, and there are many new possibilities for data displays. This book gives an overview of modern data visualization methods, both in theory and practice. It details modern graphical tools such as mosaic plots, parallel coordinate plots, and linked views. Coverage also examines graphical methodology for particular areas of statistics, for example Bayesian analysis, genomic data, and cluster analysis, as well as software for graphics.
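The book is about graphical methodology rather than any one tool, but as a quick illustration of one plot type it covers, here is a parallel coordinate plot in Python with pandas and matplotlib, on a few hypothetical rows.

```python
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import parallel_coordinates

# Hypothetical measurements; one line per observation, colored by class.
df = pd.DataFrame({
    "sepal_len": [5.1, 4.9, 6.3, 6.5, 5.8],
    "sepal_wid": [3.5, 3.0, 3.3, 3.0, 2.7],
    "petal_len": [1.4, 1.4, 6.0, 5.8, 5.1],
    "species":   ["setosa", "setosa", "virginica", "virginica", "virginica"],
})

parallel_coordinates(df, "species")
plt.show()
```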
The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the 'big data' era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful. Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems to inadvertently also achieve the main goal of modeling, i.e., trying to describe how Y depends on X. Hence, as prediction can be treated as a by-product of model-fitting, key estimation problems can be addressed as a by-product of being able to perform prediction. In other words, a practitioner can use Model-Free Prediction ideas in order to additionally obtain point estimates and confidence intervals for relevant parameters leading to an alternative, transformation-based approach to statistical inference.
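The following sketch is not the book's Model-Free algorithm; it merely illustrates, on assumed toy data, the contrast the passage draws: step (a) fits a model, and a residual bootstrap then yields a frequentist prediction interval without a normality assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression data (hypothetical); predict Y at a new regressor value x0.
x = rng.uniform(0, 10, 80)
y = 2.0 + 0.7 * x + rng.normal(0, 1.0, 80)
x0 = 5.0

# (a) Model-based step: fit a simple linear model.
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)

# (b) Resampling step: residual bootstrap for a prediction interval at x0.
# This is the generic residual bootstrap, not the Model-Free Bootstrap.
preds = []
for _ in range(2000):
    y_star = b0 + b1 * x + rng.choice(resid, size=x.size, replace=True)
    b1s, b0s = np.polyfit(x, y_star, 1)
    preds.append(b0s + b1s * x0 + rng.choice(resid))

lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"95% bootstrap prediction interval at x0={x0}: [{lo:.2f}, {hi:.2f}]")
```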
This textbook on statistical modeling and statistical inference will assist advanced undergraduate and graduate students. Statistical Modeling and Computation provides a unique introduction to modern statistics from both classical and Bayesian perspectives. It also offers an integrated treatment of mathematical statistics and modern statistical computation, emphasizing statistical modeling, computational techniques, and applications. Each of the three parts covers topics essential to university courses. Part I covers the fundamentals of probability theory. In Part II, the authors introduce a wide variety of classical models, including linear regression and ANOVA models. In Part III, the authors address the statistical analysis and computation of various advanced models, such as generalized linear, state-space, and Gaussian models. Particular attention is paid to fast Monte Carlo techniques for Bayesian inference on these models. Throughout the book the authors include a large number of illustrative examples and solved problems. The book also features a section with solutions, an appendix that serves as a MATLAB primer, and a mathematical supplement.
This book highlights recent advances in natural computing, including biology and its theory, bio-inspired computing, computational aesthetics, computational models and theories, computing with natural media, the philosophy of natural computing, and educational technology. It presents extended versions of the best papers selected from the 7th International Workshop on Natural Computing (IWNC7), held in Tokyo, Japan, in 2013. The target audience is not limited to researchers working in natural computing; the book will also interest those active in biological engineering, fine/media art design, aesthetics, and philosophy.
Economists can use computer algebra systems to manipulate symbolic models, derive numerical computations, and analyze empirical relationships among variables. Maxima is an open-source, multi-platform computer algebra system that rivals proprietary software. Maxima's symbolic and computational capabilities enable economists and financial analysts to develop a deeper understanding of models by allowing them to explore the implications of differences in parameter values, by providing numerical solutions to problems that would otherwise be intractable, and by providing graphical representations that can guide analysis. This book provides a step-by-step tutorial for using this program to examine the economic relationships that form the core of microeconomics in a way that complements traditional modeling techniques. Readers learn how to phrase the relevant analysis and how symbolic expressions, numerical computations, and graphical representations can be used to learn from microeconomic models. In particular, comparative statics analysis is facilitated. Little has been published on Maxima and its applications in economics and finance, and this volume will appeal to advanced undergraduates, graduate students studying microeconomics, academic researchers in economics and finance, economists, and financial analysts.
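The book's tool is Maxima; as an analogous illustration of symbolic comparative statics in Python with SymPy (using hypothetical linear demand and supply curves), one can derive an equilibrium symbolically and differentiate it with respect to a parameter.

```python
import sympy as sp

# Linear demand and supply with symbolic parameters (all hypothetical).
p, a, b, c, d = sp.symbols("p a b c d", positive=True)

demand = a - b * p
supply = c + d * p

# Equilibrium price: solve demand = supply for p.
p_star = sp.solve(sp.Eq(demand, supply), p)[0]   # (a - c)/(b + d)

# Comparative statics: response of the equilibrium price to a shift
# in the demand intercept a.
print(sp.diff(p_star, a))   # 1/(b + d), which is positive
```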
The book opens with a short introduction to Indian music, in particular classical Hindustani music, followed by a chapter on the role of statistics in computational musicology. The authors then show how to analyze musical structure using Rubato, the music software package for statistical analysis, in particular addressing modeling, melodic similarity and lengths, and entropy analysis; they then show how to analyze musical performance. Finally, they explain how the concept of seminatural composition can help a music composer to obtain the opening line of a raga-based song using Monte Carlo simulation. The book will be of interest to musicians and musicologists, particularly those engaged with Indian music.
You may like...
- The Theory of Queuing Systems with… (Alexander N. Dudin, Valentina I. Klimenok, …), Hardcover: R2,932 (Discovery Miles 29,320)
- SAS for Mixed Models - Introduction and… (Walter W. Stroup, George A. Milliken, …), Hardcover: R3,147 (Discovery Miles 31,470)
- Mathematical Modeling for Smart… (Debabrata Samanta, Debabrata Singh), Hardcover: R12,404 (Discovery Miles 124,040)
- SAS Text Analytics for Business… (Teresa Jade, Biljana Belamaric-Wilsey, …), Hardcover: R2,644 (Discovery Miles 26,440)