This book introduces readers to statistical methodologies used to analyze doubly truncated data. The first book exclusively dedicated to the topic, it provides likelihood-based methods, Bayesian methods, non-parametric methods, and linear regression methods. These procedures can be used to effectively analyze continuous data, especially survival data arising in biostatistics and economics. Because truncation is a phenomenon that is often encountered in non-experimental studies, the methods presented here can be applied to many branches of science. The book provides R code for most of the statistical methods, to help readers analyze their data. Given its scope, the book is ideally suited as a textbook for students of statistics, mathematics, econometrics, and other fields.
This book is the first modern treatment of experimental designs, providing a comprehensive introduction to the interrelationship between the theory of optimal designs and the theory of cubature formulas in numerical analysis. It also offers original new ideas for constructing optimal designs. The book opens with some basics on reproducing kernels, and builds up to more advanced topics, including bounds for the number of cubature formula points, equivalence theorems for statistical optimalities, and the Sobolev Theorem for the cubature formula. It concludes with a functional analytic generalization of the above classical results. Although it is intended for readers who are interested in recent advances in the construction theory of optimal experimental designs, the book is also useful for researchers seeking rich interactions between optimal experimental designs and various mathematical subjects such as spherical designs in combinatorics and cubature formulas in numerical analysis, both closely related to embeddings of classical finite-dimensional Banach spaces in functional analysis and Hilbert identities in elementary number theory. Moreover, it provides a novel communication platform for "design theorists" in a wide variety of research fields.
The 2nd edition of R for Marketing Research and Analytics continues to be the best place to learn R for marketing research. This book is a complete introduction to the power of R for marketing research practitioners. The text describes statistical models from a conceptual point of view with a minimal amount of mathematics, presuming only an introductory knowledge of statistics. Hands-on chapters accelerate the learning curve by asking readers to interact with R from the beginning. Core topics include the R language, basic statistics, linear modeling, and data visualization, which is presented throughout as an integral part of analysis. Later chapters cover more advanced topics yet are intended to be approachable for all analysts. These sections examine logistic regression, customer segmentation, hierarchical linear modeling, market basket analysis, structural equation modeling, and conjoint analysis in R. The text uniquely presents Bayesian models with a minimally complex approach, demonstrating and explaining Bayesian methods alongside traditional analyses for analysis of variance, linear models, and metric and choice-based conjoint analysis. With its emphasis on data visualization, model assessment, and development of statistical intuition, this book provides guidance for any analyst looking to develop or improve skills in R for marketing applications. The 2nd edition increases the book's utility for students and instructors with the inclusion of exercises and classroom slides. At the same time, it retains all of the features that make it a vital resource for practitioners: non-mathematical exposition, examples modeled on real-world marketing problems, intuitive guidance on research methods, and immediately applicable code.
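As a taste of the kind of analysis the book walks through, here is a minimal R sketch (with invented data, not taken from the book's examples) of two of the listed topics: logistic regression with glm() and a simple k-means customer segmentation.

```r
## Minimal sketch: logistic regression and k-means segmentation on invented data.
set.seed(7)
n   <- 500
dat <- data.frame(spend  = rlnorm(n, 3, 0.5),   # hypothetical annual spend
                  visits = rpois(n, 4))          # hypothetical store visits
dat$bought <- rbinom(n, 1, plogis(-3 + 0.02 * dat$spend + 0.3 * dat$visits))

## Logistic regression: does purchase depend on spend and visits?
summary(glm(bought ~ spend + visits, data = dat, family = binomial))

## Simple customer segmentation on standardized variables
seg <- kmeans(scale(dat[, c("spend", "visits")]), centers = 3)
table(seg$cluster)   # sizes of the three segments
```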
Written at a readily accessible level, "Basic Data Analysis for Time Series with R" emphasizes the practical analysis of data collected at increments of time or space. Balancing a theoretical and practical approach to analyzing data within the context of serial correlation, the book presents a coherent and systematic regression-based approach to model selection. The book illustrates these principles of model selection and model building through the use of information criteria, cross validation, hypothesis tests, and confidence intervals. Focusing on frequency- and time-domain and trigonometric regression as the primary themes, the book also includes modern topical coverage on Fourier series and Akaike's Information Criterion (AIC). In addition, "Basic Data Analysis for Time Series with R" also features: real-world examples to provide readers with practical hands-on experience; multiple R software subroutines employed with graphical displays; numerous exercise sets intended to support readers' understanding of the core concepts; and specific chapters devoted to the analysis of the Wolf sunspot number data and the Vostok ice core data sets.
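To make the regression-based model-selection theme concrete, the following base-R sketch (simulated monthly data, not from the book) fits a one-harmonic trigonometric regression and a trend-only alternative, then compares them with AIC.

```r
## Minimal sketch: trigonometric regression and AIC-based model comparison.
set.seed(1)
t <- 1:120                                          # ten years of monthly data
y <- 10 + 3 * cos(2 * pi * t / 12) + rnorm(120)     # seasonal signal plus noise

fit1 <- lm(y ~ cos(2 * pi * t / 12) + sin(2 * pi * t / 12))  # one harmonic
fit2 <- lm(y ~ poly(t, 3))                                   # cubic trend, no harmonic

AIC(fit1, fit2)   # the smaller AIC favours the harmonic model
```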
This book introduces the main theoretical findings related to copulas and shows how statistical modeling of multivariate continuous distributions using copulas can be carried out in the R statistical environment with the package copula (among others). Copulas are multivariate distribution functions with standard uniform univariate margins. They are increasingly applied to modeling dependence among random variables in fields such as risk management, actuarial science, insurance, finance, engineering, hydrology, climatology, and meteorology, to name a few. In the spirit of the Use R! series, each chapter combines key theoretical definitions or results with illustrations in R. Aimed at statisticians, actuaries, risk managers, engineers and environmental scientists wanting to learn about the theory and practice of copula modeling using R without an overwhelming amount of mathematics, the book can also be used for teaching a course on copula modeling.
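As a small illustration of the definition above, the sketch below (assuming the CRAN package copula is installed; the parameter values are arbitrary) samples from a Gaussian copula, checks that its margins are approximately standard uniform, and fits the copula back to the sample.

```r
## Minimal sketch with the 'copula' package: uniform margins and fitting.
library(copula)

cop <- normalCopula(param = 0.7, dim = 2)   # Gaussian copula, correlation 0.7
u   <- rCopula(1000, cop)                   # sample: each column ~ Uniform(0,1)
summary(u[, 1])                             # margins are (approximately) uniform

## Fit the copula to the (pseudo-)observations by maximum pseudo-likelihood
fit <- fitCopula(normalCopula(dim = 2), u, method = "mpl")
coef(fit)
```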
This text presents a wide-ranging and rigorous overview of nearest neighbor methods, one of the most important paradigms in machine learning. Now in one self-contained volume, this book systematically covers key statistical, probabilistic, combinatorial and geometric ideas for understanding, analyzing and developing nearest neighbor methods. Gérard Biau is a professor at Université Pierre et Marie Curie (Paris). Luc Devroye is a professor at the School of Computer Science at McGill University (Montreal).
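For readers who want to see the basic method in action before the theory, here is a minimal R sketch of nearest neighbor classification using the standard class package on the built-in iris data (an illustration, not code from the book).

```r
## Minimal sketch: 1-nearest-neighbour classification with class::knn.
library(class)

set.seed(42)
idx   <- sample(nrow(iris), 100)             # random training indices
train <- iris[idx, 1:4]
test  <- iris[-idx, 1:4]
cl    <- iris$Species[idx]                   # training labels

pred <- knn(train, test, cl, k = 1)          # 1-NN rule
mean(pred == iris$Species[-idx])             # test-set accuracy
```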
This book brings together selected peer-reviewed contributions from various research fields in statistics, and highlights the diverse approaches and analyses related to real-life phenomena. Major topics covered in this volume include, but are not limited to, Bayesian inference, the likelihood approach, pseudo-likelihoods, regression, time series, and data analysis, as well as applications in the life and social sciences. The software packages used in the papers are made available by the authors. This book is a result of the 47th Scientific Meeting of the Italian Statistical Society, held at the University of Cagliari, Italy, in 2014.
This book deals with problems related to the evaluation of customer satisfaction in very different contexts and ways. Satisfaction with a product or service is often investigated through surveys that try to capture satisfaction with the several partial aspects that characterize the perceived quality of that product or service. This book presents a series of statistical techniques adopted to analyze data from real situations where customer satisfaction surveys were performed. The aim is to give a simple guide to the variety of analyses that can be performed on data from sample surveys, ranging from latent variable models to models for heterogeneity in satisfaction, and also introducing some testing methods for comparing different groups of customers. The book also discusses the construction of composite indicators, including different benchmarks of satisfaction. Finally, some rank-based procedures for analyzing survey data are also presented.
The subject of this book stands at the crossroads of ergodic theory and measurable dynamics. With an emphasis on irreversible systems, the text presents a framework of multi-resolutions tailored for the study of endomorphisms, beginning with a systematic look at the latter. This entails a whole new set of tools, often quite different from those used for the "easier" and well-documented case of automorphisms. Among them is the construction of a family of positive operators (transfer operators), arising naturally as a dual picture to that of endomorphisms. The setting (close to one initiated by S. Karlin in the context of stochastic processes) is motivated by a number of recent applications, including wavelets, multi-resolution analyses, dissipative dynamical systems, and quantum theory. The automorphism-endomorphism relationship has parallels in operator theory, where the distinction is between unitary operators in Hilbert space and more general classes of operators such as contractions. There is also a non-commutative version: While the study of automorphisms of von Neumann algebras dates back to von Neumann, the systematic study of their endomorphisms is more recent; together with the results in the main text, the book includes a review of recent related research papers, some by the co-authors and their collaborators.
The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, and analysis of variance with random and mixed effects models, and concludes by taking the discussion beyond regression and analysis of variance. Pedagogically, the authors introduce the theory and methodological basis topic by topic, present a problem as an application, and follow with a SAS analysis of the data provided and a discussion of the results. The text focuses on applied statistical problems and methods. Key features include: end-of-chapter exercises, downloadable SAS code and data sets, and advanced material suitable for a second course in applied statistics, with every method explained using SAS analysis to illustrate a real-world problem. New to this edition:
* Covers SAS v9.2 and incorporates new commands
* Uses SAS ODS (output delivery system) for reproduction of tables and graphics output
* Presents new commands needed to produce ODS output
* All chapters rewritten for clarity
* New and updated examples throughout
* All SAS outputs are new and updated, including graphics
* More exercises and problems
* Completely new chapter on analysis of nonlinear and generalized linear models
* Completely new appendix
Mervyn G. Marasinghe, PhD, is Associate Professor Emeritus of Statistics at Iowa State University, where he has taught courses in statistical methods and statistical computing. Kenneth J. Koehler, PhD, is University Professor of Statistics at Iowa State University, where he teaches courses in statistical methodology at both graduate and undergraduate levels and primarily uses SAS to supplement his teaching.
This book presents a detailed description of the development of statistical theory. In the mid-twentieth century, the development of mathematical statistics underwent an enduring change, due to the advent of more refined mathematical tools. New concepts like sufficiency, superefficiency, and adaptivity motivated scholars to reflect upon the interpretation of mathematical concepts in terms of their real-world relevance. Questions concerning the optimality of estimators, for instance, had remained unanswered for decades, because a meaningful concept of optimality (based on the regularity of the estimators, the representation of their limit distribution and assertions about their concentration by means of Anderson's Theorem) was not yet available. The rapidly developing asymptotic theory provided approximate answers to questions for which non-asymptotic theory had found no satisfying solutions. In four engaging essays, this book presents a detailed description of how the use of mathematical methods stimulated the development of a statistical theory. Primarily focused on methodology, questionable proofs and neglected questions of priority, the book offers an intriguing resource for researchers in theoretical statistics, and can also serve as a textbook for advanced courses in statistics.
This book presents new findings on nonregular statistical estimation. Unlike other books on this topic, its major emphasis is on helping readers understand the meaning and implications of both regularity and irregularity through a certain family of distributions. In particular, it focuses on a truncated exponential family of distributions with a natural parameter and truncation parameter as a typical nonregular family. This focus includes the (truncated) Pareto distribution, which is widely used in various fields such as finance, physics, hydrology, geology, astronomy, and other disciplines. The family is essential in that it links both regular and nonregular distributions, as it becomes a regular exponential family if the truncation parameter is known. The emphasis is on presenting new results on the maximum likelihood estimation of a natural parameter or truncation parameter if one of them is a nuisance parameter. In order to obtain more information on the truncation, the Bayesian approach is also considered. Further, the application to some useful truncated distributions is discussed. The illustrated clarification of the nonregular structure provides researchers and practitioners with a solid basis for further research and applications.
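As a simple illustration of estimation in such families, the sketch below treats the ordinary Pareto distribution, one member of the truncated exponential family the book studies (this is a generic illustration, not the book's own procedure): the maximum likelihood estimator of the lower truncation/scale parameter is the sample minimum, and the shape estimate then has a closed form.

```r
## Minimal sketch: MLE for a Pareto(m, alpha) sample with unknown m and alpha.
## Known closed forms: m_hat = min(x), alpha_hat = n / sum(log(x / m_hat)).
set.seed(1)
n     <- 500
m     <- 2
alpha <- 1.5
x     <- m / runif(n)^(1 / alpha)       # simulate via the inverse CDF

m_hat     <- min(x)                     # MLE of the truncation/scale parameter
alpha_hat <- n / sum(log(x / m_hat))    # MLE of the shape (natural) parameter
c(m_hat = m_hat, alpha_hat = alpha_hat)
```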
This book expounds the principle and related applications of nonlinear principal component analysis (PCA), a useful method for analyzing data with mixed measurement levels. In the part dealing with the principle, after a brief introduction to ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. Alternating least squares (ALS) is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because data at any measurement level can be treated consistently as numerical data and ALS is a very powerful tool for estimation, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for data with mixed measurement levels, sparse MCA, joint dimension reduction and clustering methods for categorical data, and acceleration of ALS computation. The variable selection methods in PCA that were originally developed for numerical data can be applied to any type of measurement level by using nonlinear PCA. Sparseness and joint dimension reduction and clustering for nonlinear data, the results of recent studies, are extensions obtained through the same matrix operations used in nonlinear PCA. Finally, an acceleration algorithm is proposed to reduce the computational cost of the ALS iteration in nonlinear multivariate methods. The book thus demonstrates the usefulness of nonlinear PCA, which can be applied to data at different measurement levels in diverse fields. It also covers the latest topics, including extensions of the traditional statistical method, newly proposed nonlinear methods, and computational efficiency of the methods.
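As a small, concrete illustration of multiple correspondence analysis, the special case of nonlinear PCA mentioned above, the R sketch below uses MASS::mca on that package's built-in farms data of categorical variables (chosen here purely for illustration; the book's own algorithms and software are not reproduced).

```r
## Minimal sketch: multiple correspondence analysis of categorical data.
library(MASS)

farms.mca <- mca(farms, nf = 2)   # two-dimensional quantification of the factors
farms.mca$rs[1:5, ]               # row (observation) scores on the two dimensions
plot(farms.mca)                   # joint display of observations and category levels
```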
Data Presentation with SPSS Explained provides students with all the information they need to conduct small-scale analysis of research projects using SPSS and present their results appropriately in their reports. Quantitative data can be collected in the form of a questionnaire, survey or experimental study. This book focuses on presenting this data clearly, in the form of tables and graphs, along with creating basic summary statistics. Data Presentation with SPSS Explained uses an example survey that is clearly explained step-by-step throughout the book. This allows readers to follow the procedures, and easily apply each step in the process to their own research and findings. No prior knowledge of statistics or SPSS is assumed, and everything in the book is carefully explained in a helpful and user-friendly way using worked examples. This book is the perfect companion for students from a range of disciplines including psychology, business, communication, education, health, humanities, marketing and nursing - many of whom are unaware that this extremely helpful program is available at their institution for their use.
This proceedings volume contains eight selected papers that were presented at the International Symposium in Statistics (ISS) 2015 on Advances in Parametric and Semi-parametric Analysis of Multivariate, Time Series, Spatial-temporal, and Familial-longitudinal Data, held in St. John's, Canada, from July 6 to 8, 2015. The main objective of ISS-2015 was to discuss advances and challenges in parametric and semi-parametric analysis of correlated data in both continuous and discrete setups. As a reflection of the theme of the symposium, the eight papers of this proceedings volume are presented in four parts. Part I comprises papers examining elliptical t distribution theory. In Part II, the papers cover spatial and temporal data analysis. Part III focuses on longitudinal multinomial models in parametric and semi-parametric setups. Finally, Part IV concludes with a paper on inference for longitudinal data facing the challenge of selecting important covariates from a large set of covariates available for the individuals in the study.
This is the first book to present, to readers interested in optimization, the advantages of a recently introduced supercomputing paradigm that makes it possible to work numerically with different infinities and infinitesimals on the Infinity Computer, patented in several countries. The book provides a friendly introduction to the paradigm and offers a broad panorama of killer applications of the Infinity Computer in optimization: radically new numerical algorithms, great theoretical insights, efficient software implementations, and interesting practical case studies. One of the editors of the book is the creator of the Infinity Computer, and another editor was the first to use it in optimization. Their results have been recognized with numerous scientific prizes. This engaging book opens new horizons for researchers, engineers, professors, and students with interests in supercomputing paradigms, optimization, decision making, game theory, and the foundations of mathematics and computer science. "Mathematicians have never been comfortable handling infinities... But an entirely new type of mathematics looks set to by-pass the problem... Today, Yaroslav Sergeyev, a mathematician at the University of Calabria in Italy solves this problem... " MIT Technology Review "These ideas and future hardware prototypes may be productive in all fields of science where infinite and infinitesimal numbers (derivatives, integrals, series, fractals) are used." A. Adamatzky, Editor-in-Chief of the International Journal of Unconventional Computing. "I am sure that the new approach ... will have a very deep impact both on Mathematics and Computer Science." D. Trigiante, Computational Management Science. "Within the grossone framework, it becomes feasible to deal computationally with infinite quantities, in a way that is both new (in the sense that previously intractable problems become amenable to computation) and natural". R. Gangle, G. Caterina, F. Tohme, Soft Computing. "The computational features offered by the Infinity Computer allow us to dynamically change the accuracy of representation and floating-point operations during the flow of a computation. When suitably implemented, this possibility turns out to be particularly advantageous when solving ill-conditioned problems. In fact, compared with a standard multi-precision arithmetic, here the accuracy is improved only when needed, thus not affecting that much the overall computational effort." P. Amodio, L. Brugnano, F. Iavernaro & F. Mazzia, Soft Computing
This textbook offers an algorithmic introduction to the field of computer algebra. A leading expert in the field, the author guides readers through numerous hands-on tutorials designed to build practical skills and algorithmic thinking. This implementation-oriented approach equips readers with versatile tools that can be used to enhance studies in mathematical theory, applications, or teaching. Presented using Mathematica code, the book is fully supported by downloadable sessions in Mathematica, Maple, and Maxima. Opening with an introduction to computer algebra systems and the basics of programming mathematical algorithms, the book goes on to explore integer arithmetic. A chapter on modular arithmetic completes the number-theoretic foundations, which are then applied to coding theory and cryptography. From here, the focus shifts to polynomial arithmetic and algebraic numbers, with modern algorithms allowing the efficient factorization of polynomials. The final chapters offer extensions into more advanced topics: simplification and normal forms, power series, summation formulas, and integration. Computer Algebra is an indispensable resource for mathematics and computer science students new to the field. Numerous examples illustrate algorithms and their implementation throughout, with online support materials to encourage hands-on exploration. Prerequisites are minimal, with only a knowledge of calculus and linear algebra assumed. In addition to classroom use, the elementary approach and detailed index make this book an ideal reference for algorithms in computer algebra.
Presenting a comprehensive resource for the mastery of network analysis in R, Network Analysis with R aims to introduce modern network analysis techniques in R to social, physical, and health scientists. The mathematical foundations of network analysis are emphasized in an accessible way and readers are guided through the basic steps of network studies: network conceptualization, data collection and management, network description, visualization, and building and testing statistical models of networks. As with all of the books in the Use R! series, each chapter contains extensive R code and detailed visualizations of datasets. Appendices describe the R network packages and the datasets used in the book. An R package developed specifically for the book, available to readers on GitHub, contains relevant code and real-world network datasets as well.
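The basic workflow described above — data management, description, and visualization — can be sketched in a few lines of R. The snippet below uses the igraph package and an invented edge list; it is a generic illustration rather than the book's companion code.

```r
## Minimal sketch: build, describe, and plot a small undirected network.
library(igraph)

edges <- data.frame(from = c("A", "A", "B", "C", "D"),
                    to   = c("B", "C", "C", "D", "E"))
g <- graph_from_data_frame(edges, directed = FALSE)

degree(g)         # node-level description: degrees
edge_density(g)   # network-level descriptive statistic
plot(g)           # basic visualization
```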
"Modeling with Data" fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods. Klemens's accessible survey describes these models in a unified and nontraditional manner, providing alternative ways of looking at statistical concepts that often befuddle students. The book includes nearly one hundred sample programs of all kinds. Links to these programs will be available on this page at a later date. "Modeling with Data" will interest anyone looking for a comprehensive guide to these powerful statistical tools, including researchers and graduate students in the social sciences, biology, engineering, economics, and applied mathematics.
This book provides a modern introductory tutorial on specialized theoretical aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter provides up-to-date coverage of particle association measures that underpin the theoretical properties of recently developed random set methods in space and time, otherwise known as probability hypothesis density (PHD) filters. The second chapter gives an overview of recent advances in Monte Carlo methods for Bayesian filtering in high-dimensional spaces. In particular, the chapter explains how one may extend classical sequential Monte Carlo methods for filtering and static inference problems to high dimensions and big-data applications. The third chapter presents an overview of generalized families of processes that extend the class of Gaussian process models to heavy-tailed families known as alpha-stable processes. In particular, it covers aspects of characterization via the spectral measure of heavy-tailed distributions and then provides an overview of their applications in wireless communications channel modeling. The final chapter concludes with an overview of analysis for probabilistic spatial percolation methods that are relevant in the modeling of graphical networks and connectivity applications in sensor networks, which also incorporate stochastic geometry features.
This comprehensive and stimulating introduction to Matlab, a computer language now widely used for technical computing, is based on an introductory course held at Qian Weichang College, Shanghai University, in the fall of 2014. Teaching and learning a substantial programming language aren't always straightforward tasks. Accordingly, this textbook is not meant to cover the whole range of this high-performance technical programming environment, but to motivate first- and second-year undergraduate students in mathematics and computer science to learn Matlab by studying representative problems, developing algorithms and programming them in Matlab. While several topics are taken from the field of scientific computing, the main emphasis is on programming. A wealth of examples are completely discussed and solved, allowing students to learn Matlab by doing: by solving problems, comparing approaches and assessing the proposed solutions.
Now updated for SPSS (R) Statistics Version 18, DOING DATA ANALYSIS WITH SPSS, 5e, International Edition is an excellent supplement to any introductory statistics course. It provides a practical and useful introduction to SPSS and enables students to work independently to learn helpful software skills outside of class. By using SPSS to handle complex computations, students can focus on and gain an understanding of the underlying statistical concepts and techniques in the introductory statistics course.
This book presents multivariate time series methods for the analysis and optimal control of feedback systems. Although ships' autopilot systems are considered throughout the book, the methods set forth here can be applied to many other complicated, large, or noisy feedback control systems for which it is difficult to derive a model of the entire system from theory in the relevant subject area. The basic models used in this approach are the multivariate autoregressive model with exogenous variables (the ARX model) and the radial basis function net-type coefficient ARX model. Noise contribution analysis can then be performed through the estimated autoregressive (AR) model, and various types of autopilot systems can be designed through the state-space representation of the models. The marine autopilot systems addressed in this book include optimal controllers for course-keeping motion, rolling reduction controllers with rudder motion, engine governor controllers, noise-adaptive autopilots, route-tracking controllers by direct steering, and the reference course-setting approach. The methods presented here are exemplified with real data analysis and experiments on real ships. This book is highly recommended to readers who are interested in designing optimal or adaptive controllers not only for ships but also for any other complicated systems under noisy disturbance conditions.
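To illustrate the ARX idea in its simplest form, the sketch below simulates and fits a univariate ARX(1) model by ordinary least squares (the book's models are multivariate and include radial basis function coefficients; this is only a toy illustration with invented data).

```r
## Minimal sketch: fit y_t = a*y_{t-1} + b*u_{t-1} + e_t by least squares.
set.seed(1)
n <- 300
u <- rnorm(n)                              # exogenous input (e.g. rudder angle)
y <- numeric(n)
for (t in 2:n) y[t] <- 0.8 * y[t - 1] + 0.5 * u[t - 1] + rnorm(1, sd = 0.2)

d   <- data.frame(y = y[-1], y_lag = y[-n], u_lag = u[-n])
fit <- lm(y ~ y_lag + u_lag, data = d)     # least-squares ARX(1) estimate
coef(fit)                                  # should be close to (0, 0.8, 0.5)
```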