This volume of selected and peer-reviewed contributions on the latest developments in time series analysis and forecasting updates the reader on topics such as analysis of irregularly sampled time series, multi-scale analysis of univariate and multivariate time series, linear and non-linear time series models, advanced time series forecasting methods, applications in time series analysis and forecasting, advanced methods and online learning in time series and high-dimensional and complex/big data time series. The contributions were originally presented at the International Work-Conference on Time Series, ITISE 2016, held in Granada, Spain, June 27-29, 2016. The series of ITISE conferences provides a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of computer science, mathematics, statistics and econometrics.
Growth curve models (GCM) in longitudinal studies are widely used to model population size, body height, biomass, fungal growth, and other variables in the biological sciences, but these statistical methods for modeling growth curves and analyzing longitudinal data also extend to general statistics, economics, public health, demographics, epidemiology, statistical quality control (SQC), sociology, nano-biotechnology, fluid mechanics, and other applied areas. There is no one-size-fits-all approach to growth measurement. The selected papers in this volume build on presentations from the GCM workshop held at the Indian Statistical Institute, Giridih, on March 28-29, 2016. They represent recent trends in GCM research on different subject areas, both theoretical and applied. The book offers tools and possibilities for further work through new techniques and modifications of existing ones. The volume includes original studies, theoretical findings and case studies from a wide range of applied work, and these contributions have been externally refereed to the high quality standards of leading journals in the field.
This book presents the latest research on the statistical analysis of functional, high-dimensional and other complex data, addressing methodological and computational aspects, as well as real-world applications. It covers topics like classification, confidence bands, density estimation, depth, diagnostic tests, dimension reduction, estimation on manifolds, high- and infinite-dimensional statistics, inference on functional data, networks, operatorial statistics, prediction, regression, robustness, sequential learning, small-ball probability, smoothing, spatial data, testing, and topological object data analysis, and includes applications in automobile engineering, criminology, drawing recognition, economics, environmetrics, medicine, mobile phone data, spectrometrics and urban environments. The book gathers selected, refereed contributions presented at the Fifth International Workshop on Functional and Operatorial Statistics (IWFOS) in Brno, Czech Republic. The workshop was originally to be held on June 24-26, 2020, but had to be postponed as a consequence of the COVID-19 pandemic. Initiated by the Working Group on Functional and Operatorial Statistics at the University of Toulouse in 2008, the IWFOS workshops provide a forum to discuss the latest trends and advances in functional statistics and related fields, and foster the exchange of ideas and international collaboration in the field.
The advent of high-speed, affordable computers in the last two decades has given a new boost to the nonparametric way of thinking. Classical nonparametric procedures, such as function smoothing, suddenly lost their abstract flavour as they became practically implementable. In addition, many previously unthinkable possibilities became mainstream; prime examples include the bootstrap and resampling methods, wavelets and nonlinear smoothers, graphical methods, data mining, bioinformatics, as well as the more recent algorithmic approaches such as bagging and boosting. This volume is a collection of short articles - most of which have a review component - describing the state of the art of Nonparametric Statistics at the beginning of a new millennium.
This book presents a comprehensive study of multivariate time series with linear state space structure. The emphasis is placed both on the clarity of the theoretical concepts and on efficient algorithms for implementing the theory. In particular, it investigates the relationship between VARMA and state space models, including canonical forms. It also highlights the relationship between Wiener-Kolmogorov and Kalman filtering with both an infinite and a finite sample. The strength of the book also lies in the numerous algorithms included for state space models that take advantage of the recursive nature of the models. Many of these algorithms can be made robust, fast, reliable and efficient. The book is accompanied by a MATLAB package called SSMMATLAB and a webpage presenting implemented algorithms with many examples and case studies. Though it lays a solid theoretical foundation, the book also focuses on practical application, and includes exercises in each chapter. It is intended for researchers and students working with linear state space models who are familiar with linear algebra and possess some knowledge of statistics.
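For readers curious about the recursions such algorithms exploit, here is a minimal, hypothetical R sketch of the Kalman filter for a univariate local-level model; it is not taken from SSMMATLAB, and the noise variances and diffuse prior are illustrative assumptions.

# Minimal Kalman filter for the local-level model y[t] = mu[t] + e[t], mu[t] = mu[t-1] + w[t].
# Illustrative only: sigma2_e and sigma2_w are assumed known, and the prior is diffuse.
kalman_local_level <- function(y, sigma2_e = 1, sigma2_w = 0.1) {
  n <- length(y)
  a_pred <- 0                              # predicted state mean
  P_pred <- 1e7                            # diffuse prior variance for the initial state
  filt_mean <- numeric(n)
  filt_var  <- numeric(n)
  for (t in seq_len(n)) {
    v  <- y[t] - a_pred                    # one-step prediction error
    Ft <- P_pred + sigma2_e                # prediction error variance
    K  <- P_pred / Ft                      # Kalman gain
    filt_mean[t] <- a_pred + K * v         # filtered state mean
    filt_var[t]  <- P_pred * (1 - K)       # filtered state variance
    a_pred <- filt_mean[t]                 # random-walk state: predicted mean carries over
    P_pred <- filt_var[t] + sigma2_w       # predicted variance adds the state noise
  }
  data.frame(filtered = filt_mean, variance = filt_var)
}

set.seed(1)
y <- cumsum(rnorm(100, sd = 0.3)) + rnorm(100)   # simulated local-level series
head(kalman_local_level(y))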
This uniquely accessible book helps readers use CABology to solve real-world business problems and drive real competitive advantage. It provides reliable, concise information on the real benefits, usage and operationalization aspects of utilizing the "Trio Wave" of cloud, analytics and big data. Anyone who thinks that game-changing technology is slow-paced needs to think again. This book opens readers' eyes to the fact that the dynamics of global technology and business are changing. Moreover, it argues that businesses must transform themselves in alignment with the Trio Wave if they want to survive and excel in the future. CABology focuses on the art and science of optimizing business goals to deliver true value and benefits to the customer through cloud, analytics and big data. It offers businesses of all sizes a structured and comprehensive way of discovering the real benefits, usage and operationalization aspects of utilizing the Trio Wave.
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in "big data" situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on future research directions, the contributions will benefit graduate students and researchers in computational biology, statistics and the machine learning community.
The book covers computational statistics and its methodologies and applications for IoT devices. It includes details in the areas of computational arithmetic and its influence on computational statistics, numerical algorithms in statistical application software, basics of computer systems, statistical techniques, linear algebra and its role in optimization techniques, the evolution of optimization techniques, optimal utilization of computer resources, and the role of statistical graphics in data analysis. It also explores computational inference and the role of computer models in the design of experiments, Bayesian analysis, survival analysis and data mining in computational statistics.
This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, Florence, Italy, June 19-21. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs dealing with Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others working in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative ones (e.g., big data and networks).
This book offers an original and broad exploration of the fundamental methods in Clustering and Combinatorial Data Analysis, presenting new formulations and ideas within this very active field. With extensive introductions, formal and mathematical developments and real case studies, this book provides readers with a deeper understanding of the mutual relationships between these methods, which are clearly expressed with respect to three facets: logical, combinatorial and statistical. Using relational mathematical representation, all types of data structures can be handled in precise and unified ways, which the author highlights in three stages: clustering a set of descriptive attributes; clustering a set of objects or a set of object categories; and establishing correspondence between these two dual clusterings. Tools for interpreting the reasons for a given cluster or clustering are also included. Foundations and Methods in Combinatorial and Statistical Data Analysis and Clustering will be a valuable resource for students and researchers who are interested in the areas of Data Analysis, Clustering, Data Mining and Knowledge Discovery.
This book discusses examples in parametric inference with R. Combining basic theory with modern approaches, it presents the latest developments and trends in statistical inference for students who do not have an advanced mathematical and statistical background. The topics discussed in the book are fundamental and common to many fields of statistical inference and thus serve as a point of departure for in-depth study. The book is divided into eight chapters: Chapter 1 provides an overview of topics on sufficiency and completeness, while Chapter 2 briefly discusses unbiased estimation. Chapter 3 focuses on the study of moments and maximum likelihood estimators, and Chapter 4 presents bounds for the variance. In Chapter 5, topics on consistent estimators are discussed. Chapter 6 discusses Bayes estimation, while Chapter 7 studies most powerful tests. Lastly, Chapter 8 examines unbiased and other tests. Senior undergraduate and graduate students in statistics and mathematics, and those who have taken an introductory course in probability, will greatly benefit from this book. Students are expected to know matrix algebra, calculus, probability and distribution theory before beginning this course. Presenting a wealth of relevant solved and unsolved problems, the book offers an excellent tool for teachers and instructors, who can assign homework problems from the exercises, and students will find the solved examples hugely beneficial in solving the exercise problems.
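As a taste of the numerical side of such a course, the short base-R sketch below (not taken from the book) computes a maximum likelihood estimate with optim() for an exponential sample and compares it with the closed-form answer 1/mean(x).

# Numerical maximum likelihood for the rate of an exponential sample (illustrative only).
set.seed(42)
x <- rexp(200, rate = 2)

negloglik <- function(rate) -sum(dexp(x, rate = rate, log = TRUE))

fit <- optim(par = 1, fn = negloglik, method = "Brent", lower = 1e-6, upper = 100)
c(numerical_mle = fit$par, closed_form_mle = 1 / mean(x))   # the two estimates should agree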
This book is a comprehensive guide to qualitative comparative analysis (QCA) using R. Using Boolean algebra to implement principles of comparison used by scholars engaged in the qualitative study of macro social phenomena, QCA acts as a bridge between the quantitative and the qualitative traditions. The QCA package for R, created by the author, facilitates QCA within a graphical user interface. This book provides the most current information on the latest version of the QCA package, which combines written commands with a cross-platform interface. Beginning with a brief introduction to the concept of QCA, this book moves from theory to calibration, from analysis to factorization, and hits on all the key areas of QCA in between. Chapters one through three are introductory, familiarizing the reader with R, the QCA package, and elementary set theory. The next few chapters introduce important applications of the package beginning with calibration, analysis of necessity, analysis of sufficiency, parameters of fit, negation and factorization, and the construction of Venn diagrams. The book concludes with extensions to the classical package, including temporal applications and panel data. Providing a practical introduction to an increasingly important research tool for the social sciences, this book will be indispensable for students, scholars, and practitioners interested in conducting qualitative research in political science, sociology, business and management, and evaluation studies.
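For orientation only, the sketch below shows what such a workflow can look like; it uses invented conditions (COND1, COND2) and outcome (OUT) and assumes the package's calibrate(), truthTable() and minimize() functions, so the book and the package documentation remain the authoritative reference.

# Hypothetical fuzzy-set QCA workflow (not the book's example); the data are made up.
library(QCA)

raw <- data.frame(COND1 = c(2, 7, 9, 4, 8),
                  COND2 = c(1, 6, 8, 3, 9),
                  OUT   = c(1, 6, 9, 2, 8))

# Calibration: map raw scores onto fuzzy-set membership values
fuzzy <- as.data.frame(lapply(raw, calibrate, type = "fuzzy", thresholds = c(2, 5, 8)))

# Analysis of sufficiency: build a truth table and minimize it
tt  <- truthTable(fuzzy, outcome = "OUT", incl.cut = 0.8)
sol <- minimize(tt, details = TRUE)
sol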
This proceedings volume features top contributions in modern statistical methods from Statistics 2021 Canada, the 6th Annual Canadian Conference in Applied Statistics, held virtually on July 15-18, 2021. Papers are contributed by established and emerging scholars, covering cutting-edge and innovative techniques in statistics and data science. Major areas of contribution include Bayesian statistics; computational statistics; data science; semi-parametric regression; and stochastic methods in biology, crop science, ecology and engineering. It will be a valuable edited collection for graduate students, researchers, and practitioners in a wide array of applied statistical and data science methods.
This book covers original research and the latest advances in symbolic, algebraic and geometric computation; computational methods for differential and difference equations; symbolic-numerical computation; mathematics software design and implementation; and scientific and engineering applications, based on featured and invited talks, special sessions and contributed papers presented at the 9th (Fukuoka, Japan, 2009) and 10th (Beijing, China, 2012) Asian Symposium on Computer Mathematics (ASCM). Thirty selected and refereed articles in the book present the conference participants' ideas and views on researching mathematics using computers.
This volume collects selected, peer-reviewed contributions from the 2nd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Cádiz, Spain, on June 11-16, 2014, and sponsored by the American Statistical Association, the Institute of Mathematical Statistics, the Bernoulli Society for Mathematical Statistics and Probability, the Journal of Nonparametric Statistics and Universidad Carlos III de Madrid. The 15 articles are a representative sample of the 336 contributed papers presented at the conference. They cover topics such as high-dimensional data modelling, inference for stochastic processes and for dependent data, nonparametric and goodness-of-fit testing, nonparametric curve estimation, object-oriented data analysis, and semiparametric inference. The aim of the ISNPS 2014 conference was to bring together recent advances and trends in several areas of nonparametric statistics in order to facilitate the exchange of research ideas, promote collaboration among researchers from around the globe, and contribute to the further development of the field.
This textbook examines empirical linguistics from a theoretical linguist's perspective. It provides both a theoretical discussion of what quantitative corpus linguistics entails and detailed, hands-on, step-by-step instructions to implement the techniques in the field. The statistical methodology and R-based coding in this book teach readers the basic and then more advanced skills to work with large data sets in their linguistics research and studies. Massive data sets are now more than ever the basis for work that ranges from usage-based linguistics to the far reaches of applied linguistics. This book presents much of the methodology in a corpus-based approach. However, the corpus-based methods in this book are also essential components of recent developments in sociolinguistics, historical linguistics, computational linguistics, and psycholinguistics. Material from the book will also appeal to researchers in digital humanities and the many non-linguistic fields that use textual data analysis and text-based sensorimetrics. Chapters cover topics including corpus processing, frequency data, and clustering methods. Case studies illustrate each chapter with accompanying data sets, R code, and exercises for use by readers. This book may be used in advanced undergraduate courses, graduate courses, and self-study.
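To give a flavour of the corpus-processing and frequency work the early chapters cover, here is a tiny, hypothetical base-R example; it is not drawn from the book's own data sets or code.

# Toy corpus processing: tokenise two sentences and count word frequencies (base R only).
txt <- c("the cat sat on the mat", "the dog sat on the log")

tokens <- unlist(strsplit(tolower(txt), "[^a-z]+"))   # crude regex tokeniser
tokens <- tokens[tokens != ""]                        # drop empty strings

freq <- sort(table(tokens), decreasing = TRUE)        # descending frequency list
freq   # "the" occurs 4 times; "on" and "sat" twice; the rest once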
This text presents a wide-ranging and rigorous overview of nearest neighbor methods, one of the most important paradigms in machine learning. Now in one self-contained volume, this book systematically covers key statistical, probabilistic, combinatorial and geometric ideas for understanding, analyzing and developing nearest neighbor methods. Gérard Biau is a professor at Université Pierre et Marie Curie (Paris). Luc Devroye is a professor at the School of Computer Science at McGill University (Montreal).
This book is designed as a gentle introduction to the fascinating field of choice modeling and its practical implementation using the R language. Discrete choice analysis is a family of methods useful to study individual decision-making. With strong theoretical foundations in consumer behavior, discrete choice models are used in the analysis of health policy, transportation systems, marketing, economics, public policy, political science, urban planning, and criminology, to mention just a few fields of application. The book does not assume prior knowledge of discrete choice analysis or R, but instead strives to introduce both in an intuitive way, starting from simple concepts and progressing to more sophisticated ideas. Loaded with a wealth of examples and code, the book covers the fundamentals of data and analysis in a progressive way. Readers begin with simple data operations and the underlying theory of choice analysis and conclude by working with sophisticated models including latent class logit models, mixed logit models, and ordinal logit models with taste heterogeneity. Data visualization is emphasized to explore both the input data as well as the results of models. This book should be of interest to graduate students, faculty, and researchers conducting empirical work using individual level choice data who are approaching the field of discrete choice analysis for the first time. In addition, it should interest more advanced modelers wishing to learn about the potential of R for discrete choice analysis. By embedding the treatment of choice modeling within the R ecosystem, readers benefit from learning about the larger R family of packages for data exploration, analysis, and visualization.
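As a first, deliberately simple illustration of the model family the book builds up to, the sketch below fits a plain multinomial logit with the standard nnet package; this is an assumption chosen for illustration rather than the book's own toolkit, which develops richer specifications such as mixed and latent class logit models.

# Hypothetical illustration (not the book's code): a plain multinomial logit with
# nnet::multinom(), predicting a three-level categorical outcome from two covariates.
library(nnet)

fit <- multinom(Species ~ Sepal.Length + Sepal.Width, data = iris, trace = FALSE)
summary(fit)

# Predicted choice probabilities for a new observation
predict(fit, newdata = data.frame(Sepal.Length = 6.0, Sepal.Width = 3.0), type = "probs")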
This book discusses recent developments in mathematical programming and game theory, and the application of several mathematical models to problems in finance, games, economics and graph theory. All contributing authors are eminent researchers in their respective fields, from across the world. This book contains a collection of selected papers presented at the 2017 Symposium on Mathematical Programming and Game Theory, held in New Delhi during 9-11 January 2017. The symposium provides a forum for new developments and applications of mathematical programming and game theory, as well as an excellent opportunity to disseminate the latest major achievements and to explore new directions and perspectives. Researchers, professionals and graduate students will find the book an essential resource for current work in mathematical programming, game theory and their applications in finance, economics and graph theory.
The advent of fast and sophisticated computer graphics has brought dynamic and interactive images under the control of professional mathematicians and mathematics teachers. This volume in the NATO Special Programme on Advanced Educational Technology takes a comprehensive and critical look at how the computer can support the use of visual images in mathematical problem solving. The contributions are written by researchers and teachers from a variety of disciplines including computer science, mathematics, mathematics education, psychology, and design. Some focus on the use of external visual images and others on the development of individual mental imagery. The book is the first collected volume in a research area that is developing rapidly, and the authors pose some challenging new questions.
You may like...
Neutrosophic Sets in Decision Analysis… | Mohamed Abdel-Basset, Florentin Smarandache | Hardcover | R7,198 | Discovery Miles 71 980
Jump into JMP Scripting, Second Edition… | Wendy Murphrey, Rosemary Lucas | Hardcover | R1,613 | Discovery Miles 16 130
An Introduction to Creating Standardized… | Todd Case, Yuting Tian | Hardcover | R1,608 | Discovery Miles 16 080
SAS Certification Prep Guide… | Joni N Shreve, Donna Dea Holland | Hardcover | R2,922 | Discovery Miles 29 220
Mathematical Modeling for Smart… | Debabrata Samanta, Debabrata Singh | Hardcover | R12,404 | Discovery Miles 124 040
Theoretical, Modelling and Numerical… | Samsul Ariffin Abdul Karim | Hardcover | R2,860 | Discovery Miles 28 600