The second edition of Robust Statistical Methods with R provides a systematic treatment of robust procedures, with an emphasis on new developments and on computational aspects. There are many numerical examples and notes on the R environment, and the updated chapter on the multivariate model contains additional material on visualization of multivariate data in R. A new chapter on robust procedures in measurement error models concentrates mainly on rank procedures, which are less sensitive to errors than other procedures. This book will be an invaluable resource for researchers and postgraduate students in statistics and mathematics. Features:
* Provides a systematic, practical treatment of robust statistical methods
* Offers a rigorous treatment of the whole range of robust methods, including the sequential versions of estimators and their moment convergence, and compares their asymptotic and finite-sample behavior
* The extended account of multivariate models includes the admissibility, shrinkage effects, and unbiasedness of two-sample tests
* Illustrates the small sensitivity of rank procedures in the measurement error model
* Emphasizes the computational aspects, supplies many examples and illustrations, and provides the authors' own R procedures on the book's website
The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Economics and Business hosted BAYSM 2014 on September 18-19. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows the publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Universite Paris-Dauphine), and Mike West (Duke University) were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session with 30 contributions. Selected contributions have been drawn from the conference for this book. All contributions in this volume are peer-reviewed and share original research in Bayesian computation, application, and theory.
This text is intended for a broad audience, as both an introduction to predictive models and a guide to applying them. Non-mathematical readers will appreciate the intuitive explanations of the techniques, while an emphasis on problem-solving with real data across a wide variety of applications will aid practitioners who wish to extend their expertise. Readers should have knowledge of basic statistical ideas, such as correlation and linear regression analysis. While the text avoids complex equations, a mathematical background is needed for advanced topics. Dr. Kuhn is a Director of Non-Clinical Statistics at Pfizer Global R&D in Groton, Connecticut. He has been applying predictive models in the pharmaceutical and diagnostic industries for over 15 years and is the author of a number of R packages. Dr. Johnson has more than a decade of statistical consulting and predictive modeling experience in pharmaceutical research and development. He is a co-founder of Arbor Analytics, a firm specializing in predictive modeling, and is a former Director of Statistics at Pfizer Global R&D. His scholarly work centers on the application and development of statistical methodology and learning algorithms. Applied Predictive Modeling covers the overall predictive modeling process, beginning with the crucial steps of data preprocessing, data splitting, and the foundations of model tuning. The text then provides intuitive explanations of numerous common and modern regression and classification techniques, always with an emphasis on illustrating and solving real data problems. Addressing practical concerns extends beyond model fitting to topics such as handling class imbalance, selecting predictors, and pinpointing causes of poor model performance, all of which are problems that occur frequently in practice. The text illustrates all parts of the modeling process through many hands-on, real-life examples, and every chapter contains extensive R code.
This research monograph utilizes exact and Monte Carlo permutation statistical methods to generate probability values and measures of effect size for a variety of measures of association. Association is broadly defined to include measures of correlation for two interval-level variables, measures of association for two nominal-level variables or two ordinal-level variables, and measures of agreement for two nominal-level or two ordinal-level variables. Additionally, measures of association for mixtures of the three levels of measurement are considered: nominal-ordinal, nominal-interval, and ordinal-interval measures. Numerous comparisons of permutation and classical statistical methods are presented. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This book takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field. This topic is relatively new in that it took modern computing power to make permutation methods available to those working in mainstream research. Written for a statistically informed audience, it is particularly useful for teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as psychology, medical research, epidemiology, public health, and biology. It can also serve as a textbook in graduate courses in subjects like statistics, psychology, and biology.
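The distribution-free idea described above (comparing an observed statistic with its distribution over reshufflings of the data at hand, rather than with a theoretical distribution) can be sketched generically. As a minimal illustration, not code from the monograph, here is a Monte Carlo two-sample permutation test in Python:

```python
import random

def permutation_test(x, y, n_perm=10000, seed=0):
    """Two-sided Monte Carlo permutation test for a difference in means.

    Instead of referencing a theoretical null distribution, the observed
    difference is compared with differences obtained by repeatedly
    reshuffling the pooled data, so only the data at hand are used.
    """
    rng = random.Random(seed)
    observed = sum(x) / len(x) - sum(y) / len(y)
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = (sum(pooled[:len(x)]) / len(x)
                - sum(pooled[len(x):]) / len(y))
        # Count permutations at least as extreme as the observed statistic
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm
```

An exact test would enumerate all partitions of the pooled data instead of sampling them; the Monte Carlo version above is the practical variant made feasible by modern computing power, as the blurb notes.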
"R for Business Analytics" looks at some of the most common tasks performed by business analysts and helps the user navigate the wealth of information in R and its 4000 packages. With this information the reader can select the packages that can help process the analytical tasks with minimum effort and maximum usefulness. The use of graphical user interfaces (GUIs) is emphasized in this book to further cut down and bend the famous learning curve of learning R. This book aims to help you kick-start with analytics, including chapters on data visualization, code examples on web analytics and social media analytics, clustering, regression models, text mining, data mining models, and forecasting. The book tries to expose the reader to a breadth of business analytics topics without burying the user in needless depth. The included references and links allow the reader to pursue business analytics topics further. This book is aimed at business analysts with basic programming skills who want to use R for business analytics. Note that the scope of the book is neither statistical theory nor graduate-level research in statistics; rather, it is for business analytics practitioners. Business analytics (BA) refers to the field of exploration and investigation of data generated by businesses. Business intelligence (BI) is the seamless dissemination of information through the organization, which primarily involves business metrics, both past and current, for use in decision support. Data mining (DM) is the process of discovering new patterns in large data sets using algorithms and statistical methods. To differentiate between the three: BI is mostly current reports, BA is models to predict and strategize, and DM matches patterns in big data. The R statistical software is the fastest-growing analytics platform in the world, and is established in both academia and corporations for robustness, reliability, and accuracy.
The book takes to heart Albert Einstein's famous remark about making things as simple as possible, but no simpler. This book will blow away the last remaining doubts in your mind about using R in your business environment. Even non-technical users will enjoy the easy-to-use examples. The interviews with creators and corporate users of R make the book very readable. The author firmly believes Isaac Asimov was a better writer in spreading science than any textbook or journal author.
This book puts military doctrine into a wider perspective, drawing on military history, philosophy, and political science. Military doctrines are institutional beliefs about what works in war; given the trauma of 9/11 and the ensuing 'War on Terror', serious divergences over what the message of the 'new' military doctrine ought to be were expected around the world. However, such questions are often drowned in ferocious meta-doctrinal disagreements. What is a doctrine, after all? This book provides a theoretical understanding of such questions. Divided into three parts, the author investigates the historical roots of military doctrine and explores its growth and expansion until the present day, and goes on to analyse the main characteristics of a military doctrine. Using a multidisciplinary approach, the book concludes that doctrine can be utilized in three key ways: as a tool of command, as a tool of change, and as a tool of education. This book will be of much interest to students of military studies, civil-military relations, strategic studies, and war studies, as well as to students in professional military education.
Numerical analysis is the study of computation and its accuracy, stability and often its implementation on a computer. This book focuses on the principles of numerical analysis and is intended to equip those readers who use statistics to craft their own software and to understand the advantages and disadvantages of different numerical methods.
S-PLUS is a powerful environment for the statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-PLUS to perform statistical analyses and provides both an introduction to the use of S-PLUS and a course in modern statistical methods. S-PLUS is available for both Windows and UNIX workstations, and both versions are covered in depth. The aim of the book is to show how to use S-PLUS as a powerful and graphical data analysis system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-PLUS, both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets. Many of the methods discussed are state-of-the-art approaches to topics such as linear, nonlinear, and smooth regression models, tree-based methods, multivariate analysis and pattern recognition, survival analysis, time series, and spatial statistics. Throughout, modern techniques such as robust methods, non-parametric smoothing, and bootstrapping are used where appropriate. This third edition is intended for users of S-PLUS 4.5, 5.0, 2000 or later, although S-PLUS 3.3/4 are also considered. The major change from the second edition is coverage of the current versions of S-PLUS. The material has been extensively rewritten using new examples and the latest computationally intensive methods. The companion volume on S Programming will provide an in-depth guide for those writing software in the S language. The authors have written several software libraries that enhance S-PLUS; these and all the datasets used are available on the Internet in versions for Windows and UNIX.
There are extensive on-line complements covering advanced material, user-contributed extensions, further exercises, and new features of S-PLUS as they are introduced. Dr. Venables is now Statistician with CSIRO in Queensland, having been at the Department of Statistics, University of Adelaide, for many years previously. He has given many short courses on S-PLUS in Australia, Europe, and the USA. Professor Ripley holds the Chair of Applied Statistics at the University of Oxford, and is the author of four other books on spatial statistics, simulation, pattern recognition, and neural networks.
This book provides state-of-the-art and interdisciplinary topics on solving matrix eigenvalue problems, particularly by using recent petascale and upcoming post-petascale supercomputers. It gathers selected topics presented at the International Workshops on Eigenvalue Problems: Algorithms, Software and Applications in Petascale Computing (EPASA2014 and EPASA2015), which brought together leading researchers working on the numerical solution of matrix eigenvalue problems to discuss and exchange ideas, and in so doing helped to create a community for researchers in eigenvalue problems. The topics presented in the book, including novel numerical algorithms, high-performance implementation techniques, software developments, and sample applications, will contribute to various fields that involve solving large-scale eigenvalue problems.
This book discusses advanced topics such as R core programming, object-oriented R programming, parallel computing with R, and spatial data types. The author leads readers to merge mature and effective methodologies from traditional programming into R programming. It shows how to interface R with C, Java, and other popular programming languages and platforms.
This book presents the statistical analysis of compositional data using the log-ratio approach. It includes a wide range of classical and robust statistical methods adapted for compositional data analysis, such as supervised and unsupervised methods like PCA, correlation analysis, classification and regression. In addition, it considers special data structures like high-dimensional compositions and compositional tables. The methodology introduced is also frequently compared to methods which ignore the specific nature of compositional data. It focuses on practical aspects of compositional data analysis rather than on detailed theoretical derivations, thus issues like graphical visualization and preprocessing (treatment of missing values, zeros, outliers and similar artifacts) form an important part of the book. Since it is primarily intended for researchers and students from applied fields like geochemistry, chemometrics, biology and natural sciences, economics, and social sciences, all the proposed methods are accompanied by worked-out examples in R using the package robCompositions.
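The book's worked examples use the R package robCompositions. Purely for illustration, and with function names of my own, the centred log-ratio (clr) transformation underlying the log-ratio approach can be sketched in Python:

```python
import math

def clr(composition):
    """Centred log-ratio (clr) transform of a composition.

    Compositional data carry only relative information, so each part
    is expressed as a log-ratio against the geometric mean of all
    parts; standard multivariate methods such as PCA can then be
    applied to the transformed coordinates.
    """
    if any(part <= 0 for part in composition):
        raise ValueError("clr requires strictly positive parts")
    logs = [math.log(part) for part in composition]
    mean_log = sum(logs) / len(logs)
    return [value - mean_log for value in logs]
```

Two properties motivate the transform: the clr coordinates of any composition sum to zero, and rescaling a composition (e.g. raw counts versus percentages) leaves its clr coordinates unchanged, which is exactly the scale invariance compositional analysis requires.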
Without question, statistics is one of the most challenging courses for students in the social and behavioral sciences. Enrolling in their first statistics course, students are often apprehensive or extremely anxious about the subject matter. And while IBM SPSS is one of the more easy-to-use statistical software programs available, for anxious students who realize they not only have to learn statistics but also new software, the task can seem insurmountable. Keenly aware of students' anxiety with statistics (and the fact that this anxiety can affect performance), Ronald D. Yockey has written SPSS Demystified: A Simple Guide and Reference, now in its fourth edition. Through a comprehensive, step-by-step approach, this text is consistently and specifically designed to both alleviate anxiety toward the subject matter and build a successful experience analyzing data in SPSS. Topics covered in the text are appropriate for most introductory and intermediate statistics and research methods courses. Key features of the text:
* Step-by-step instruction and screenshots
* Designed to be hands-on, with the user performing the analyses on their computer as they read through each chapter
* Call-out boxes highlighting important information as appropriate
* SPSS output explained, with written results provided in the popular, widely recognized APA format
* End-of-chapter exercises included, allowing for additional practice
* SPSS datasets available on the publisher's website
New to the Fourth Edition:
* Fully updated to SPSS 28
* Updated screenshots in full color to reflect changes in the SPSS software (version 28)
* Exercises updated with up-to-date examples
* Exact p-values provided (consistent with APA recommendations)
This book is designed primarily for upper level undergraduate and graduate level students taking a course in multilevel modelling and/or statistical modelling with a large multilevel modelling component. The focus is on presenting the theory and practice of major multilevel modelling techniques in a variety of contexts, using Mplus as the software tool, and demonstrating the various functions available for these analyses in Mplus, which is widely used by researchers in various fields, including most of the social sciences. In particular, Mplus offers users a wide array of tools for latent variable modelling, including for multilevel data.
Multivariate Survival Analysis and Competing Risks introduces univariate survival analysis and extends it to the multivariate case. It covers competing risks and counting processes and provides many real-world examples, exercises, and R code. The text discusses survival data, survival distributions, frailty models, parametric methods, multivariate data and distributions, copulas, continuous failure, parametric likelihood inference, and non- and semi-parametric methods. There are many books covering survival analysis, but very few that cover the multivariate case in any depth. Written for a graduate-level audience in statistics/biostatistics, this book includes practical exercises and R code for the examples. The author is renowned for his clear writing style, and this book continues that trend. It is an excellent reference for graduate students and researchers looking for grounding in this burgeoning field of research.
Group method of data handling (GMDH) is a typical inductive modeling method built on the principles of self-organization. Since its introduction, inductive modelling has been developed to support complex systems in prediction, clusterization, and system identification, as well as data mining and knowledge extraction technologies in social science, science, engineering, and medicine. This is the first book to explore GMDH using the MATLAB (matrix laboratory) language. Readers will learn how to implement GMDH in MATLAB as a method of dealing with big data analytics. Error-free source codes in MATLAB have been included in supplementary material (accessible online) to assist users in their understanding of GMDH and to make it easy for users to further develop variations of GMDH algorithms.
The International Federation for Information Processing, IFIP, is a multinational federation of professional technical organisations concerned with information processing. IFIP is dedicated to improving communication and increasing understanding among practitioners of all nations about the role information processing can play in all walks of life. This Working Conference, Secondary School Mathematics in the World of Communication Technologies: Learning, Teaching and the Curriculum, was organised by Working Group 3.1, Informatics in Secondary Education, of the IFIP Technical Committee for Education, TC3. This is the third conference on this theme organised by WG 3.1; the previous two were held in Varna, Bulgaria, in 1977, and Sofia, Bulgaria, in 1987, with proceedings published by North-Holland Elsevier. The aim of the conference was to take a forward look at the relationships between mathematics and the new technologies of information and communication, in the context of the increased availability of interactive and dynamic information processing tools. The main focus was on the mathematics education of students in the age range of about 11 to 18 years, and the following themes were addressed:
* Curriculum: curriculum evolution; relationships with informatics
* Teachers: professional development; methodology and practice
* Learners: tools and techniques; concept development; research and theory
* Human and social issues: culture and policy; personal impact
This book provides a general introduction to the R Commander graphical user interface (GUI) to R for readers who are unfamiliar with R. It is suitable for use as a supplementary text in a basic or intermediate-level statistics course. It is not intended to replace a basic or other statistics text but rather to complement it, although it does promote sound statistical practice in the examples. The book should also be useful to individual casual or occasional users of R for whom the standard command-line interface is an obstacle.
Explore the inner workings of environmental processes using a mathematical approach. Environmental Systems Analysis with MATLAB(R) combines environmental science concepts and system theory with numerical techniques to provide a better understanding of how our environment works. The book focuses on building mathematical models of environmental systems, and on using these models to analyze their behaviors. Designed with the environmental professional in mind, it offers a practical introduction to developing the skills required for managing environmental modeling and data handling. The book follows a logical sequence from the basic steps of model building and data analysis to implementing these concepts in working computer code, and then on to assessing the results. It describes data processing (rarely considered in environmental analysis), outlines the tools needed to successfully analyze data and develop models, and moves on to real-world problems. The author illustrates the methodological aspects of environmental systems analysis in the first four chapters and applies them to specific environmental concerns in subsequent chapters. The accompanying software bundle is freely downloadable from the book's website. It follows the chapter sequence and provides a hands-on experience, allowing the reader to reproduce the figures in the text and to experiment by varying the problem setting. Basic MATLAB literacy is required to get the most out of the software.
Ideal for coursework and self-study, this offering:
* Deals with the basic concepts of environmental modeling and identification, both from the mechanistic and the data-driven viewpoint
* Provides a unifying methodological approach to deal with specific aspects of environmental modeling: population dynamics, flow systems, and environmental microbiology
* Assesses the similarities and the differences of microbial processes in natural and man-made environments
* Analyzes several aquatic ecosystem case studies
* Presents an application of an extended Streeter & Phelps (S&P) model
* Describes an ecological method to estimate the bioavailable nutrients in natural waters
* Considers a lagoon ecosystem from several viewpoints, including modeling and management, and more
* Targets readers with a background in programming who are interested in an introduction/refresher in statistical hypothesis testing
* Uses Python throughout
* Provides the reader with the opportunity to use the book whenever needed rather than following a sequential path
Statistical Analysis of Questionnaires: A Unified Approach Based on R and Stata presents special statistical methods for analyzing data collected by questionnaires. The book takes an applied approach to testing and measurement tasks, mirroring the growing use of statistical methods and software in education, psychology, sociology, and other fields. It is suitable for graduate students in applied statistics and psychometrics and practitioners in education, health, and marketing. The book covers the foundations of classical test theory (CTT), test reliability, validity, and scaling as well as item response theory (IRT) fundamentals and IRT for dichotomous and polytomous items. The authors explore the latest IRT extensions, such as IRT models with covariates, multidimensional IRT models, IRT models for hierarchical and longitudinal data, and latent class IRT models. They also describe estimation methods and diagnostics, including graphical diagnostic tools, parametric and nonparametric tests, and differential item functioning. Stata and R software codes are included for each method. To enhance comprehension, the book employs real datasets in the examples and illustrates the software outputs in detail. The datasets are available on the authors' web page.
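The book supplies R and Stata code for each method; as a minimal sketch of my own (not taken from the book), the item response function of the Rasch model for dichotomous items can be written in Python:

```python
import math

def rasch_probability(ability, difficulty):
    """P(correct answer) under the Rasch (one-parameter logistic) IRT model.

    The probability of a correct response depends only on the gap
    between the respondent's latent ability and the item's difficulty,
    both on the same logit scale.
    """
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))
```

When ability equals difficulty the probability is exactly 0.5, and it increases monotonically with ability; the polytomous and multidimensional extensions the book covers generalize this same building block.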
With the advancement of statistical methodology inextricably linked to the use of computers, new methodological ideas must be translated into usable code and then numerically evaluated relative to competing procedures. In response, Statistical Computing in C++ and R concentrates on the writing of code rather than on the development and study of numerical algorithms per se. The book discusses code development in C++ and R and the use of these symbiotic languages in unison. It emphasizes that each offers distinct features that, when used in tandem, can take code writing beyond what can be obtained from either language alone. The text begins with some basics of object-oriented languages, followed by a "boot-camp" on the use of C++ and R. The authors then discuss code development for the solution of specific computational problems that are relevant to statistics, including optimization, numerical linear algebra, and random number generation. Later chapters introduce abstract data types (ADTs) and parallel computing concepts. The appendices cover R and UNIX Shell programming. Features:
* Includes numerous student exercises ranging from elementary to challenging
* Integrates both C++ and R for the solution of statistical computing problems
* Uses C++ code in R and R functions in C++ programs
* Provides downloadable programs, available from the authors' website
The translation of a mathematical problem into its computational analog (or analogs) is a skill that must be learned, like any other, by actively solving relevant problems. The text reveals the basic principles of algorithmic thinking essential to the modern statistician as well as the fundamental skill of communicating with a computer through the use of the computer languages C++ and R. The book lays the foundation for original code development in a research environment.
There is no shortage of incentives to study and reduce poverty in our societies. Poverty is studied in economics and political science, and population surveys are an important source of information about it. The design and analysis of such surveys is principally a statistical subject matter, and the computer is essential for their data compilation and processing. Focusing on the European Union Statistics on Income and Living Conditions (EU-SILC), a program of annual national surveys which collect data related to poverty and social exclusion, Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R presents a set of statistical analyses pertinent to the general goals of EU-SILC. The contents of the volume are biased toward computing and statistics, with reduced attention to economics, political science, and other social sciences. The emphasis is on methods and procedures as opposed to results, because data from annual surveys made available after publication will degrade the novelty of the data used and the results derived in this volume. The aim of this volume is not to propose specific methods of analysis, but to open up the analytical agenda and address the aspects of the key definitions in the subject of poverty assessment that entail nontrivial elements of arbitrariness. The presented methods do not exhaust the range of analyses suitable for EU-SILC, but will stimulate the search for new methods and the adaptation of established methods that cater to the identified purposes.
This book collects peer-reviewed contributions on modern statistical methods and topics, stemming from the third workshop on Analytical Methods in Statistics, AMISTAT 2019, held in Liberec, Czech Republic, on September 16-19, 2019. Real-life problems demand statistical solutions, which in turn require new and profound mathematical methods. As such, the book is not only a collection of solved problems but also a source of new methods and their practical extensions. The authoritative contributions focus on analytical methods in statistics, asymptotics, estimation and Fisher information, robustness, stochastic models and inequalities, and other related fields; further, they address e.g. average autoregression quantiles, neural networks, weighted empirical minimum distance estimators, implied volatility surface estimation, the Grenander estimator, non-Gaussian component analysis, meta learning, and high-dimensional errors-in-variables models.
The advent of fast and sophisticated computer graphics has brought dynamic and interactive images under the control of professional mathematicians and mathematics teachers. This volume in the NATO Special Programme on Advanced Educational Technology takes a comprehensive and critical look at how the computer can support the use of visual images in mathematical problem solving. The contributions are written by researchers and teachers from a variety of disciplines including computer science, mathematics, mathematics education, psychology, and design. Some focus on the use of external visual images and others on the development of individual mental imagery. The book is the first collected volume in a research area that is developing rapidly, and the authors pose some challenging new questions.
This book provides a comprehensive and concrete illustration of time series analysis focusing on the state-space model, which has recently attracted increasing attention in a broad range of fields. The major feature of the book lies in its consistent Bayesian treatment regarding whole combinations of batch and sequential solutions for linear Gaussian and general state-space models: MCMC and Kalman/particle filter. The reader is given insight on flexible modeling in modern time series analysis. The main topics of the book deal with the state-space model, covering extensively, from introductory and exploratory methods to the latest advanced topics such as real-time structural change detection. Additionally, a practical exercise using R/Stan based on real data promotes understanding and enhances the reader's analytical capability.
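The book's exercises use R/Stan. Purely as a generic illustration (not the book's code), the sequential Kalman filter solution for the simplest linear Gaussian state-space model, the scalar local-level model, can be sketched in Python:

```python
def kalman_filter(observations, state_var, obs_var, m0=0.0, p0=1e6):
    """Kalman filter for the scalar local-level state-space model.

    State:       x_t = x_{t-1} + w_t,  w_t ~ N(0, state_var)
    Observation: y_t = x_t + v_t,      v_t ~ N(0, obs_var)

    Returns the filtered state means E[x_t | y_1..y_t].
    """
    mean, var = m0, p0  # diffuse prior on the initial level
    filtered = []
    for y in observations:
        # Predict: the state follows a random walk, so only the
        # uncertainty grows; the mean is unchanged.
        var += state_var
        # Update: blend prediction and observation via the Kalman gain.
        gain = var / (var + obs_var)
        mean += gain * (y - mean)
        var *= (1.0 - gain)
        filtered.append(mean)
    return filtered
```

For nonlinear or non-Gaussian state-space models this closed-form recursion no longer applies, which is where the particle filter and MCMC treatments covered by the book come in.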