This book brings together selected peer-reviewed contributions from various research fields in statistics, and highlights the diverse approaches and analyses related to real-life phenomena. Major topics covered in this volume include, but are not limited to, Bayesian inference, the likelihood approach, pseudo-likelihoods, regression, time series, and data analysis, as well as applications in the life and social sciences. The software packages used in the papers are made available by the authors. This book is a result of the 47th Scientific Meeting of the Italian Statistical Society, held at the University of Cagliari, Italy, in 2014.
This book presents a detailed description of the development of statistical theory. In the mid-twentieth century, the development of mathematical statistics underwent an enduring change, due to the advent of more refined mathematical tools. New concepts such as sufficiency, superefficiency, and adaptivity motivated scholars to reflect upon the interpretation of mathematical concepts in terms of their real-world relevance. Questions concerning the optimality of estimators, for instance, had remained unanswered for decades, because a meaningful concept of optimality (based on the regularity of the estimators, the representation of their limit distribution and assertions about their concentration by means of Anderson's Theorem) was not yet available. The rapidly developing asymptotic theory provided approximate answers to questions for which non-asymptotic theory had found no satisfying solutions. In four engaging essays, the book describes in detail how the use of mathematical methods stimulated the development of statistical theory. Primarily focused on methodology, questionable proofs and neglected questions of priority, the book offers an intriguing resource for researchers in theoretical statistics, and can also serve as a textbook for advanced courses in statistics.
The subject of this book stands at the crossroads of ergodic theory and measurable dynamics. With an emphasis on irreversible systems, the text presents a framework of multi-resolutions tailored for the study of endomorphisms, beginning with a systematic look at the latter. This entails a whole new set of tools, often quite different from those used for the "easier" and well-documented case of automorphisms. Among them is the construction of a family of positive operators (transfer operators), arising naturally as a dual picture to that of endomorphisms. The setting (close to one initiated by S. Karlin in the context of stochastic processes) is motivated by a number of recent applications, including wavelets, multi-resolution analyses, dissipative dynamical systems, and quantum theory. The automorphism-endomorphism relationship has parallels in operator theory, where the distinction is between unitary operators in Hilbert space and more general classes of operators such as contractions. There is also a non-commutative version: While the study of automorphisms of von Neumann algebras dates back to von Neumann, the systematic study of their endomorphisms is more recent; together with the results in the main text, the book includes a review of recent related research papers, some by the co-authors and their collaborators.
The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for the statistical analysis of data to advanced undergraduate and graduate students in statistics, data science, and disciplines that involve analyzing data. The book begins with an introduction that goes beyond the basics of SAS, illustrated with non-trivial, real-world worked examples. It proceeds through SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, and analysis of variance with random and mixed effects models, and concludes by taking the discussion beyond regression and analysis of variance. Pedagogically, the authors introduce the theory and methodological basis topic by topic, present a problem as an application, follow it with a SAS analysis of the data provided, and close with a discussion of the results. The text focuses on applied statistical problems and methods. Key features include end-of-chapter exercises, downloadable SAS code and data sets, and advanced material suitable for a second course in applied statistics, with every method explained using a SAS analysis of a real-world problem. New to this edition:
* Covers SAS v9.2 and incorporates new commands
* Uses SAS ODS (Output Delivery System) for reproduction of tables and graphics output
* Presents new commands needed to produce ODS output
* All chapters rewritten for clarity
* New and updated examples throughout
* All SAS outputs are new and updated, including graphics
* More exercises and problems
* Completely new chapter on analysis of nonlinear and generalized linear models
* Completely new appendix
Mervyn G. Marasinghe, PhD, is Associate Professor Emeritus of Statistics at Iowa State University, where he has taught courses in statistical methods and statistical computing. Kenneth J. Koehler, PhD, is University Professor of Statistics at Iowa State University, where he teaches courses in statistical methodology at both the graduate and undergraduate levels and primarily uses SAS to supplement his teaching.
This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution under the weakest possible constraints or assumptions, in a form suitable for practical implementation. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The authors review the fundamental results and methods in the field, develop them into the latest techniques, and emphasize the links and interplay between ostensibly diverse approaches.
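As a minimal sketch of the book's core theme, generating independent samples from an arbitrary density, here is a generic rejection sampler in Python. The toy target f(x) = 2x on [0, 1] and the envelope constant are invented for illustration and are not taken from the book:

```python
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, m, n):
    """Draw n independent samples from target_pdf by rejection sampling.

    m must satisfy target_pdf(x) <= m * proposal_pdf(x) for all x.
    """
    out = []
    while len(out) < n:
        x = proposal_sample()              # propose from the easy distribution
        u = random.random()
        if u * m * proposal_pdf(x) <= target_pdf(x):
            out.append(x)                  # accept with probability f(x)/(m*g(x))
    return out

random.seed(42)
# Toy target: f(x) = 2x on [0, 1]; uniform proposal, envelope constant m = 2.
samples = rejection_sample(lambda x: 2 * x, random.random, lambda x: 1.0, 2.0, 20000)
mean = sum(samples) / len(samples)  # the true mean of f is 2/3
```

The acceptance rate is 1/m, so a tight envelope matters in practice; much of the book's material concerns constructing good proposals automatically.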
The quantity, diversity and availability of transport data is increasing rapidly, requiring new skills in the management and interrogation of data and databases. Recent years have seen a new wave of 'big data', 'Data Science', and 'smart cities' changing the world, with the Harvard Business Review describing Data Science as the "sexiest job of the 21st century". Transportation professionals and researchers need to be able to use data and databases in order to establish quantitative, empirical facts, and to validate and challenge their mathematical models, whose axioms have traditionally often been assumed rather than rigorously tested against data. This book takes a highly practical approach to learning about Data Science tools and their application to investigating transport issues. The focus is principally on practical, professional work with real data and tools, including business and ethical issues.
"Transport modeling practice was developed in a data poor world, and many of our current techniques and skills are building on that sparsity. In a new data rich world, the required tools are different and the ethical questions around data and privacy are definitely different. I am not sure whether current professionals have these skills; and I am certainly not convinced that our current transport modeling tools will survive in a data rich environment. This is an exciting time to be a data scientist in the transport field. We are trying to get to grips with the opportunities that big data sources offer; but at the same time such data skills need to be fused with an understanding of transport, and of transport modeling. Those with these combined skills can be instrumental in providing better, faster, cheaper data for transport decision-making; and ultimately contribute to innovative, efficient, data driven modeling techniques of the future. It is not surprising that this course, this book, has been authored by the Institute for Transport Studies. To do this well, you need a blend of academic rigor and practical pragmatism. There are few educational or research establishments better equipped to do that than ITS Leeds." - Tom van Vuren, Divisional Director, Mott MacDonald
"WSP is proud to be a thought leader in the world of transport modelling, planning and economics, and has a wide range of opportunities for people with skills in these areas. The evidence base and forecasts we deliver to effectively implement strategies and schemes are ever more data and technology focused, a trend we have helped shape since the 1970s, but with particular disruption and opportunity in recent years. As a result of these trends, and to suitably skill the next generation of transport modellers, we asked the world-leading Institute for Transport Studies to boost skills in these areas, and they have responded with a new MSc programme which you too can now study via this book." - Leighton Cardwell, Technical Director, WSP.
"From processing and analysing large datasets, to automation of modelling tasks sometimes requiring different software packages to "talk" to each other, to data visualization, SYSTRA employs a range of techniques and tools to provide our clients with deeper insights and effective solutions. This book does an excellent job in giving you the skills to manage, interrogate and analyse databases, and develop powerful presentations. Another important publication from ITS Leeds." - Fitsum Teklu, Associate Director (Modelling & Appraisal), SYSTRA Ltd
"Urban planning has relied for decades on statistical and computational practices that have little to do with mainstream data science. Information is still often used as evidence on the impact of new infrastructure even when it hardly contains any valid evidence. This book is an extremely welcome effort to provide young professionals with the skills needed to analyse how cities and transport networks actually work.
The book is also highly relevant to anyone who will later want to build digital solutions to optimise urban travel based on emerging data sources". - Yaron Hollander, author of "Transport Modelling for a Complete Beginner"
This book provides practical applications of doubly classified models by using R syntax to generate the models. It also presents these models in symbolic tables so as to cater to those who are not mathematically inclined, while numerous examples throughout the book illustrate the concepts and their applications. For those who are not aware of this modeling approach, it serves as a good starting point to acquire a basic understanding of doubly classified models. It is also a valuable resource for academics, postgraduate students, undergraduates, data analysts and researchers who are interested in examining square contingency tables.
This book presents new findings on nonregular statistical estimation. Unlike other books on this topic, its major emphasis is on helping readers understand the meaning and implications of both regularity and irregularity through a certain family of distributions. In particular, it focuses on a truncated exponential family of distributions with a natural parameter and a truncation parameter as a typical nonregular family. This focus includes the (truncated) Pareto distribution, which is widely used in fields such as finance, physics, hydrology, geology, and astronomy. The family is essential in that it links regular and nonregular distributions, as it becomes a regular exponential family when the truncation parameter is known. The emphasis is on presenting new results on the maximum likelihood estimation of the natural or truncation parameter when the other is a nuisance parameter. To obtain more information on the truncation, the Bayesian approach is also considered. Further, applications to some useful truncated distributions are discussed. The clear exposition of the nonregular structure provides researchers and practitioners with a solid basis for further research and applications.
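To illustrate the flavor of nonregular estimation, here is a hypothetical Python sketch using an ordinary (non-truncated) Pareto distribution; the parameter values and sample size are invented and this is not the book's own example. The MLE of the scale/truncation parameter is the sample minimum, whose support depends on the parameter, which is exactly what breaks the usual regularity conditions:

```python
import math
import random

random.seed(0)

# Simulate Pareto(alpha, xmin) data by inverse-CDF sampling.
alpha_true, xmin_true, n = 3.0, 2.0, 5000
data = [xmin_true * (1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(n)]

# Nonregular part: the MLE of the scale (truncation) parameter is the
# sample minimum, because the likelihood is zero whenever xmin > min(data).
xmin_hat = min(data)

# Regular part: given xmin_hat, the MLE of the shape parameter is closed form.
alpha_hat = n / sum(math.log(x / xmin_hat) for x in data)
```

Note the asymmetry the book studies: xmin_hat converges at the fast 1/n rate (nonregular), while alpha_hat converges at the usual root-n rate.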
Statistics with JMP: Hypothesis Tests, ANOVA and Regression
Peter Goos, University of Leuven and University of Antwerp, Belgium
David Meintrup, University of Applied Sciences Ingolstadt, Germany
A first course on basic statistical methodology using JMP. This book provides a first course on parameter estimation (point estimates and confidence interval estimates), hypothesis testing, ANOVA and simple linear regression. The authors' approach combines mathematical depth with numerous examples and demonstrations using the JMP software. Key features:
* Provides a comprehensive and rigorous presentation of introductory statistics that has been extensively classroom tested.
* Pays attention to the usual parametric hypothesis tests as well as to non-parametric tests (including the calculation of exact p-values).
* Discusses the power of various statistical tests, along with examples in JMP to enable insight into this difficult topic.
* Promotes the use of graphs and confidence intervals in addition to p-values.
* Course materials and tutorials for teaching are available on the book's companion website.
Master's and advanced students in applied statistics, industrial engineering, business engineering, civil engineering and bio-science engineering will find this book beneficial. It also provides a useful resource for teachers of statistics, particularly in the area of engineering.
This book and app are for practitioners, professionals, researchers, and students who want to learn how to make a plot within the R environment using ggplot2, step by step and without coding. In widespread use in the statistical communities, R is a free software language and environment for statistical programming and graphics. Many users find that R has a steep learning curve but is extremely useful once mastered. ggplot2 is an extremely popular package tailored for producing graphics within R, but it requires coding and has a steep learning curve of its own, while Shiny is an open-source R package that provides a web framework for building web applications in R without requiring HTML, CSS, or JavaScript. This manual, integrating R, ggplot2, and Shiny, introduces a new Shiny app, Learn ggplot2, that allows users to make plots easily without coding. With the Learn ggplot2 Shiny app, users can make plots using ggplot2 without having to code each step, reducing typos and error messages and allowing users to become familiar with ggplot2 code. The app makes it easy to apply themes, make multiplots (combining several plots into one plot), and download plots as PNG, PDF, or PowerPoint files with editable vector graphics. Users can also make plots on any computer or smartphone. The Learn ggplot2 Shiny app allows users to:
* Make publication-ready plots in minutes without coding
* Download plots with the desired width, height, and resolution
* Plot and download plots in PNG, PDF, and PowerPoint formats, with or without R code and with editable vector graphics
This proceedings volume contains eight selected papers that were presented at the International Symposium in Statistics (ISS) 2015 on Advances in Parametric and Semi-parametric Analysis of Multivariate, Time Series, Spatial-temporal, and Familial-longitudinal Data, held in St. John's, Canada, from July 6 to 8, 2015. The main objective of ISS-2015 was the discussion of advances and challenges in parametric and semi-parametric analysis of correlated data in both continuous and discrete setups. As a reflection of the theme of the symposium, the eight papers of this proceedings volume are presented in four parts. Part I comprises papers examining elliptical t distribution theory. In Part II, the papers cover spatial and temporal data analysis. Part III is focused on longitudinal multinomial models in parametric and semi-parametric setups. Finally, Part IV concludes with a paper on inferences for longitudinal data subject to the challenge of selecting important covariates from the large number of covariates available for the individuals in the study.
This book provides a friendly introduction to the paradigm and offers a broad panorama of killer applications of the Infinity Computer in optimization: radically new numerical algorithms, great theoretical insights, efficient software implementations, and interesting practical case studies. It is the first book presenting, to readers interested in optimization, the advantages of a recently introduced supercomputing paradigm that makes it possible to work numerically with different infinities and infinitesimals on the Infinity Computer, patented in several countries. One of the editors of the book is the creator of the Infinity Computer, and another editor was the first to use it in optimization. Their results have been recognized with numerous scientific prizes. This engaging book opens new horizons for researchers, engineers, professors, and students with interests in supercomputing paradigms, optimization, decision making, game theory, and the foundations of mathematics and computer science. "Mathematicians have never been comfortable handling infinities... But an entirely new type of mathematics looks set to by-pass the problem... Today, Yaroslav Sergeyev, a mathematician at the University of Calabria in Italy solves this problem..." MIT Technology Review "These ideas and future hardware prototypes may be productive in all fields of science where infinite and infinitesimal numbers (derivatives, integrals, series, fractals) are used." A. Adamatzky, Editor-in-Chief of the International Journal of Unconventional Computing. "I am sure that the new approach ... will have a very deep impact both on Mathematics and Computer Science." D. Trigiante, Computational Management Science. "Within the grossone framework, it becomes feasible to deal computationally with infinite quantities, in a way that is both new (in the sense that previously intractable problems become amenable to computation) and natural". R. Gangle, G. Caterina, F. Tohme, Soft Computing.
"The computational features offered by the Infinity Computer allow us to dynamically change the accuracy of representation and floating-point operations during the flow of a computation. When suitably implemented, this possibility turns out to be particularly advantageous when solving ill-conditioned problems. In fact, compared with a standard multi-precision arithmetic, here the accuracy is improved only when needed, thus not affecting that much the overall computational effort." P. Amodio, L. Brugnano, F. Iavernaro & F. Mazzia, Soft Computing
Data Presentation with SPSS Explained provides students with all the information they need to conduct small-scale analyses of research projects using SPSS and present their results appropriately in their reports. Quantitative data can be collected in the form of a questionnaire, survey or experimental study. This book focuses on presenting such data clearly, in the form of tables and graphs, along with creating basic summary statistics. Data Presentation with SPSS Explained uses an example survey that is clearly explained step by step throughout the book. This allows readers to follow the procedures and easily apply each step in the process to their own research and findings. No prior knowledge of statistics or SPSS is assumed, and everything in the book is carefully explained in a helpful and user-friendly way using worked examples. This book is the perfect companion for students from a range of disciplines including psychology, business, communication, education, health, humanities, marketing and nursing - many of whom are unaware that this extremely helpful program is available at their institution for their use.
This book expounds the principle and related applications of nonlinear principal component analysis (PCA), a useful method for analyzing data of mixed measurement levels. In the part dealing with the principle, after a brief introduction to ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. Alternating least squares (ALS) is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because data of any measurement level can be treated consistently as numerical data, and ALS is a very powerful tool for estimation, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed measurement levels data, sparse MCA, joint dimension reduction and clustering methods for categorical data, and acceleration of ALS computation. The variable selection methods in PCA that were originally developed for numerical data can be applied to any measurement level by using nonlinear PCA. Sparseness and joint dimension reduction and clustering for nonlinear data, the results of recent studies, are extensions obtained by the same matrix operations in nonlinear PCA. Finally, an acceleration algorithm is proposed to reduce the computational cost of the ALS iteration in nonlinear multivariate methods. The book thus presents the usefulness of nonlinear PCA, which can be applied to data of different measurement levels in diverse fields. It also covers the latest topics, including extensions of traditional statistical methods, newly proposed nonlinear methods, and computational efficiency.
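A hedged sketch of the optimal-scaling/ALS idea (not the book's algorithm or code; the tiny data set below is invented): alternate between quantifying each category as the mean score of the objects in it, and updating each object's score as the mean of its categories' quantifications, standardizing on each pass. This is reciprocal averaging, which converges to the first MCA/homogeneity dimension:

```python
import random
from statistics import mean, pstdev

# Toy data: 8 objects observed on two categorical variables.
data = [("a", "x"), ("a", "x"), ("a", "y"), ("b", "y"),
        ("b", "y"), ("b", "z"), ("c", "z"), ("c", "z")]

random.seed(1)
scores = [random.random() for _ in data]  # initial object scores

for _ in range(200):
    # ALS step 1: quantify each category as the mean score of its objects.
    quant = {}
    for j in range(2):
        cats = {}
        for s, row in zip(scores, data):
            cats.setdefault(row[j], []).append(s)
        for c, vals in cats.items():
            quant[(j, c)] = mean(vals)
    # ALS step 2: update each object score as the mean of its quantifications.
    scores = [mean(quant[(j, row[j])] for j in range(2)) for row in data]
    # Standardize so the solution does not collapse to a constant.
    m, s = mean(scores), pstdev(scores)
    scores = [(x - m) / s for x in scores]
```

Objects with identical category profiles receive identical scores, and the two ends of the Guttman-like scale (a/x versus c/z) land on opposite sides of zero.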
A comprehensive resource for mastering network analysis in R, Network Analysis with R introduces modern network analysis techniques to social, physical, and health scientists. The mathematical foundations of network analysis are emphasized in an accessible way, and readers are guided through the basic steps of network studies: network conceptualization, data collection and management, network description, visualization, and building and testing statistical models of networks. As with all of the books in the Use R! series, each chapter contains extensive R code and detailed visualizations of datasets. Appendices describe the R network packages and the datasets used in the book. An R package developed specifically for the book, available to readers on GitHub, contains the relevant code and real-world network datasets as well.
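The book itself works with R network packages; as a language-neutral illustration of the "network description" step, the short pure-Python sketch below computes degree, density, and degree centrality for a toy edge list (the network is invented for the example):

```python
# Tiny undirected network given as an edge list.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]

# Build an adjacency structure (node -> set of neighbours).
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

n = len(adj)
degree = {node: len(nbrs) for node, nbrs in adj.items()}
density = 2 * len(edges) / (n * (n - 1))                 # fraction of possible ties
centrality = {node: d / (n - 1) for node, d in degree.items()}  # degree centrality
```

Here node C touches every other node, so its degree centrality is 1, and the 4-node graph realizes 4 of its 6 possible edges (density 2/3).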
This book provides a modern introductory tutorial on specialized theoretical aspects of spatial and temporal modeling. The areas covered span a range of topics that reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter provides up-to-date coverage of particle association measures that underpin the theoretical properties of recently developed random set methods in space and time, known as probability hypothesis density (PHD) filters. The second chapter gives an overview of recent advances in Monte Carlo methods for Bayesian filtering in high-dimensional spaces; in particular, it explains how one may extend classical sequential Monte Carlo methods for filtering and static inference problems to high dimensions and big-data applications. The third chapter presents an overview of generalized families of processes that extend the class of Gaussian process models to heavy-tailed families known as alpha-stable processes; in particular, it covers their characterization via the spectral measure of heavy-tailed distributions and then surveys their applications in wireless communications channel modeling. The final chapter concludes with an overview of probabilistic spatial percolation methods that are relevant to the modeling of graphical networks and connectivity applications in sensor networks, which also incorporate stochastic geometry features.
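As a hedged illustration of the classical sequential Monte Carlo idea that the second chapter extends (not code from the book; the model, parameters, and particle count are invented), here is a minimal bootstrap particle filter for a one-dimensional linear-Gaussian state-space model:

```python
import math
import random

random.seed(7)

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Simulate the model: x_t = 0.9 * x_{t-1} + v_t,  y_t = x_t + w_t.
T, q, r = 50, 0.5, 0.5
truth, obs = [], []
x = 0.0
for _ in range(T):
    x = 0.9 * x + random.gauss(0, q)
    truth.append(x)
    obs.append(x + random.gauss(0, r))

# Bootstrap particle filter with multinomial resampling.
N = 500
particles = [0.0] * N
estimates = []
for y in obs:
    particles = [0.9 * p + random.gauss(0, q) for p in particles]  # propagate
    weights = [gauss_pdf(y, p, r) for p in particles]              # weight by likelihood
    total = sum(weights)
    weights = [w / total for w in weights]
    estimates.append(sum(w * p for w, p in zip(weights, particles)))
    particles = random.choices(particles, weights=weights, k=N)    # resample
```

In one dimension this works well; as dimension grows, the weights degenerate, which is precisely the failure mode the chapter's high-dimensional extensions address.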
This comprehensive and stimulating introduction to Matlab, a computer language now widely used for technical computing, is based on an introductory course held at Qian Weichang College, Shanghai University, in the fall of 2014. Teaching and learning a substantial programming language aren't always straightforward tasks. Accordingly, this textbook is not meant to cover the whole range of this high-performance technical programming environment, but to motivate first- and second-year undergraduate students in mathematics and computer science to learn Matlab by studying representative problems, developing algorithms and programming them in Matlab. While several topics are taken from the field of scientific computing, the main emphasis is on programming. A wealth of examples are completely discussed and solved, allowing students to learn Matlab by doing: by solving problems, comparing approaches and assessing the proposed solutions.
SAS programming is a creative and iterative process designed to empower you to make the most of your organization's data. This friendly guide provides you with a repertoire of essential SAS tools for data management, whether you are a new or an infrequent user. Most useful to students and programmers with little or no SAS experience, it takes a no-frills, hands-on tutorial approach to getting started with the software. You will find immediate guidance in navigating, exploring, visualizing, cleaning, formatting, and reporting on data using SAS and JMP. Step-by-step demonstrations, screenshots, handy tips, and practical exercises with solutions equip you to explore, interpret, process and summarize data independently, efficiently and effectively.
This book presents multivariate time series methods for the analysis and optimal control of feedback systems. Although ships' autopilot systems are considered throughout the book, the methods set forth can be applied to many other complicated, large, or noisy feedback control systems for which it is difficult to derive a model of the entire system from theory in that subject area. The basic models used in this method are the multivariate autoregressive model with exogenous variables (ARX model) and the radial basis function net-type coefficient ARX model. The noise contribution analysis can then be performed through the estimated autoregressive (AR) model, and various types of autopilot systems can be designed through the state-space representation of the models. The marine autopilot systems addressed in this book include optimal controllers for course-keeping motion, rolling reduction controllers with rudder motion, engine governor controllers, noise-adaptive autopilots, route-tracking controllers by direct steering, and the reference course-setting approach. The methods presented here are exemplified with real data analysis and experiments on real ships. This book is highly recommended to readers who are interested in designing optimal or adaptive controllers not only of ships but also of any other complicated systems under noisy disturbance conditions.
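As an illustrative sketch (not the book's models or code; the system, parameters, and data below are invented), a first-order scalar ARX model y[t] = a*y[t-1] + b*u[t-1] + e[t] can be fitted by ordinary least squares on simulated data, here by solving the 2x2 normal equations directly:

```python
import random

random.seed(0)

# Simulate a first-order ARX process: y[t] = a*y[t-1] + b*u[t-1] + noise.
a_true, b_true, T = 0.8, 0.5, 2000
u = [random.gauss(0, 1) for _ in range(T)]   # exogenous input (e.g. rudder angle)
y = [0.0]
for t in range(1, T):
    y.append(a_true * y[t - 1] + b_true * u[t - 1] + random.gauss(0, 0.1))

# Ordinary least squares on regressors (y[t-1], u[t-1]) via the normal equations.
syy = sum(y[t - 1] ** 2 for t in range(1, T))
suu = sum(u[t - 1] ** 2 for t in range(1, T))
syu = sum(y[t - 1] * u[t - 1] for t in range(1, T))
sy = sum(y[t] * y[t - 1] for t in range(1, T))
su = sum(y[t] * u[t - 1] for t in range(1, T))
det = syy * suu - syu ** 2
a_hat = (sy * suu - su * syu) / det
b_hat = (su * syy - sy * syu) / det
```

With the coefficients identified, the fitted model can be put in state-space form for controller design, which is the route the book follows for its multivariate marine applications.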
This open access book contains review papers authored by thirteen plenary invited speakers to the 9th International Congress on Industrial and Applied Mathematics (Valencia, July 15-19, 2019). Written by top-level scientists recognized worldwide, the scientific contributions cover a wide range of cutting-edge topics of industrial and applied mathematics: mathematical modeling, industrial and environmental mathematics, mathematical biology and medicine, reduced-order modeling and cryptography. The book also includes an introductory chapter summarizing the main features of the congress. This is the first volume of a thematic series dedicated to research results presented at ICIAM 2019-Valencia Congress.
MATLAB, the tremendously popular computation, numerical analysis, signal processing, data analysis, and graphical software package, allows virtually every scientist and engineer to make better and faster progress. As MATLAB's worldwide sales approach half a million, with an estimated four million users, it becomes a near necessity that professionals and students have a level of competence in its use. Until now, however, there has been no book that quickly and effectively introduces MATLAB's capabilities to new users and assists those with more experience down the path toward increasingly sophisticated work.