IBM SPSS Statistics 27 Step by Step: A Simple Guide and Reference, seventeenth edition, takes a straightforward, step-by-step approach that makes SPSS software clear to beginners and experienced researchers alike. Extensive use of four-color screenshots, clear writing, and step-by-step boxes guide readers through the program. Output for each procedure is explained and illustrated, and every output term is defined. Exercises at the end of each chapter support students by providing additional opportunities to practice using SPSS. This book covers the basics of statistical analysis and addresses more advanced topics such as multidimensional scaling, factor analysis, discriminant analysis, measures of internal consistency, MANOVA (between- and within-subjects), cluster analysis, log-linear models, logistic regression, and a chapter describing residuals. The end sections include a description of the data files used in exercises, an exhaustive glossary, suggestions for further reading, and a comprehensive index. IBM SPSS Statistics 27 Step by Step is distributed in 85 countries, has been an academic best seller through most of the earlier editions, and has proved an invaluable aid to thousands of researchers and students. New to this edition: screenshots, explanations, and step-by-step boxes have been fully updated to reflect SPSS 27, and a new chapter on a priori power analysis helps researchers determine the sample size needed for their research before starting data collection.
This volume conveys some of the surprises, puzzles and success stories in high-dimensional and complex data analysis and related fields. Its peer-reviewed contributions showcase recent advances in variable selection, estimation and prediction strategies for a host of useful models, as well as essential new developments in the field. The continued and rapid advancement of modern technology now allows scientists to collect data of unprecedented size and complexity. Examples include epigenomic data, genomic data, proteomic data, high-resolution image data, high-frequency financial data, functional and longitudinal data, and network data. Simultaneous variable selection and estimation is one of the key statistical problems involved in analyzing such big and complex data. The purpose of this book is to stimulate research and foster interaction between researchers in the area of high-dimensional data analysis. More concretely, its goals are to: 1) highlight and expand the breadth of existing methods in big data and high-dimensional data analysis and their potential for the advancement of both the mathematical and statistical sciences; 2) identify important directions for future research in the theory of regularization methods, in algorithmic development, and in methodologies for different application areas; and 3) facilitate collaboration between theoretical and subject-specific researchers.
Pulsar timing is a promising method for detecting gravitational waves in the nano-Hertz band. In his prize-winning Ph.D. thesis, Rutger van Haasteren deals with how one takes thousands of seemingly random timing residuals, as measured by pulsar observers, and extracts information about the presence and character of the gravitational waves in the nano-Hertz band that are washing over our Galaxy. The author presents a sophisticated mathematical algorithm that deals with this issue. His algorithm is probably the best-developed of those currently in use in the Pulsar Timing Array community. In Chapter 3, the gravitational-wave memory effect is described. This is one of the first descriptions of this interesting effect in relation to pulsar timing, which may become observable in future Pulsar Timing Array projects. The last part of the work is dedicated to an effort to combine the European pulsar timing data sets in order to search for gravitational waves. This study has placed the most stringent limit to date on the intensity of gravitational waves that are produced by pairs of supermassive black holes dancing around each other in distant galaxies, as well as those that may be produced by vibrating cosmic strings. For the innovative work presented in this Ph.D. thesis, pursuing various directions in the search for gravitational waves by pulsar timing, Rutger van Haasteren won the 2011 GWIC Thesis Prize of the Gravitational Wave International Community.
This book illustrates the potential for computer simulation in the study of modern slavery and worker abuse, and by extension in all social issues. It lays out a philosophy of how agent-based modelling can be used in the social sciences. In addressing modern slavery, Chesney considers precarious work that is vulnerable to abuse, like sweatshop labour and prostitution, and shows how agent modelling can be used to study, understand and fight abuse in these areas. He explores the philosophy, application and practice of agent modelling through the popular and free software NetLogo. This topical book is grounded in the technology needed to address the messy, chaotic, real-world problems that humanity faces (in this case, the serious problem of abuse at work), but equally in the social sciences, which are needed to avoid the unintended consequences inherent in human responses. It includes a concise yet thorough NetLogo guide which readers can use to quickly learn this software and go on to develop complex models. This is an important book for students and researchers of computational social science and others interested in agent-based modelling.
This book on statistical disclosure control presents the theory, applications and software implementation of the traditional approach to (micro)data anonymization, including data perturbation methods, disclosure risk, data utility, information loss and methods for simulating synthetic data. Introducing readers to the R packages sdcMicro and simPop, the book also features numerous examples and exercises with solutions, as well as case studies with real-world data, accompanied by the underlying R code to allow readers to reproduce all results. The demand for and volume of data from surveys, registers or other sources containing sensitive information on persons or enterprises have increased significantly over the last several years. At the same time, privacy protection principles and regulations have imposed restrictions on the access and use of individual data. Proper and secure microdata dissemination calls for the application of statistical disclosure control methods to the data before release. This book is intended for practitioners at statistical agencies and other national and international organizations that deal with confidential data. It will also be of interest to researchers working in statistical disclosure control and the health sciences.
This book features selected papers presented at the 2nd International Conference on Advanced Computing Technologies and Applications, held at SVKM's Dwarkadas J. Sanghvi College of Engineering, Mumbai, India, from 28 to 29 February 2020. Covering recent advances in next-generation computing, the book focuses on recent developments in intelligent computing, such as linguistic computing, statistical computing, data computing and ambient applications.
Linear mixed-effects models (LMMs) are an important class of statistical models that can be used to analyze correlated data. Such data are encountered in a variety of fields including biostatistics, public health, psychometrics, educational measurement, and sociology. This book aims to support a wide range of uses for the models by applied researchers in those and other fields by providing state-of-the-art descriptions of the implementation of LMMs in R. To help readers become familiar with the features of the models and the details of carrying them out in R, the book includes a review of the most important theoretical concepts of the models. The presentation connects theory, software and applications. It is built up incrementally, starting with a summary of the concepts underlying simpler classes of linear models like the classical regression model, and carrying them forward to LMMs. A similar step-by-step approach is used to describe the R tools for LMMs. All the classes of linear models presented in the book are illustrated using real-life data. The book also introduces several novel R tools for LMMs, including a new class of variance-covariance structures for random effects and methods for influence diagnostics and for power calculations. They are included in an R package that should assist readers in applying these and other methods presented in this text.
The basics of computer algebra and the language of Mathematica are described. This title leads the reader toward an understanding of Mathematica that allows them to solve problems in physics, mathematics, and chemistry. Mathematica is the most widely used system for doing mathematical calculations by computer, including symbolic and numeric calculations and graphics. It is used in physics and other branches of science, in mathematics, in education and in many other areas. Many important results in physics would never have been obtained without a wide use of computer algebra.
The papers in this volume represent the most timely and advanced contributions to the 2014 Joint Applied Statistics Symposium of the International Chinese Statistical Association (ICSA) and the Korean International Statistical Society (KISS), held in Portland, Oregon. The contributions cover new developments in statistical modeling and clinical research, including model development, model checking, and innovative clinical trial design and analysis. Each paper was peer-reviewed by at least two referees and also by an editor. The conference was attended by over 400 participants from academia, industry, and government agencies around the world, including from North America, Asia, and Europe. It offered 3 keynote speeches, 7 short courses, 76 parallel scientific sessions, student paper sessions, and social events.
This Festschrift in honour of Ursula Gather's 60th birthday deals with modern topics in the field of robust statistical methods, especially for time series and regression analysis, and with statistical methods for complex data structures. The individual contributions of leading experts provide a textbook-style overview of the topic, supplemented by current research results and questions. The statistical theory and methods in this volume aim at the analysis of data which deviate from classical stringent model assumptions, contain outlying values and/or have a complex structure. Written for researchers as well as master's and PhD students with a good knowledge of statistics.
With the increasing advances in hardware technology for data collection, and advances in software technology (databases) for data organization, computer scientists have increasingly participated in the latest advancements of the outlier analysis field. Computer scientists, specifically, approach this field based on their practical experiences in managing large amounts of data, and with far fewer assumptions: the data can be of any type, structured or unstructured, and may be extremely large. Outlier Analysis is a comprehensive exposition, as understood by data mining experts, statisticians and computer scientists. The book has been organized carefully, and emphasis was placed on simplifying the content, so that students and practitioners can also benefit. Chapters will typically cover one of three areas: methods and techniques commonly used in outlier analysis, such as linear methods, proximity-based methods, subspace methods, and supervised methods; data domains, such as text, categorical, mixed-attribute, time-series, streaming, discrete sequence, spatial and network data; and key applications of these methods in diverse domains such as credit card fraud detection, intrusion detection, medical diagnosis, earth science, web log analytics, and social network analysis.
"This volume provides essential guidance for transforming
mathematics learning in schools through the use of innovative
technology, pedagogy, and curriculum. It presents clear, rigorous
evidence of the impact technology can have in improving students
learning of important yet complex mathematical concepts -- and goes
beyond a focus on technology alone to clearly explain how teacher
professional development, pedagogy, curriculum, and student
participation and identity each play an essential role in
transforming mathematics classrooms with technology. Further,
evidence of effectiveness is complemented by insightful case
studies of how key factors lead to enhancing learning, including
the contributions of design research, classroom discourse, and
meaningful assessment. "* Engaging students in deeply learning the important concepts
in mathematics "* Engaging students in deeply learning the important concepts
in mathematics
This book discusses a variety of methods for outlier ensembles and organizes them by the specific principles with which accuracy improvements are achieved. In addition, it covers the techniques with which such methods can be made more effective. A formal classification of these methods is provided, and the circumstances in which they work well are examined. The authors cover how outlier ensembles relate (both theoretically and practically) to the ensemble techniques used commonly for other data mining problems like classification. The similarities and (subtle) differences in the ensemble techniques for the classification and outlier detection problems are explored. These subtle differences do impact the design of ensemble algorithms for the latter problem. This book can be used for courses in data mining and related curricula. Many illustrative examples and exercises are provided in order to facilitate classroom teaching. Familiarity is assumed with the outlier detection problem and also with the generic problem of ensemble analysis in classification. This is because many of the ensemble methods discussed in this book are adaptations from their counterparts in the classification domain. Some techniques explained in this book, such as wagging, randomized feature weighting, and geometric subsampling, provide new insights that are not available elsewhere. Also included is an analysis of the performance of various types of base detectors and their relative effectiveness. The book is valuable for researchers and practitioners who wish to leverage ensemble methods in optimal algorithmic design.
This Bayesian modeling book provides a self-contained entry to computational Bayesian statistics. Focusing on the most standard statistical models and backed up by real datasets and an all-inclusive R (CRAN) package called bayess, the book provides an operational methodology for conducting Bayesian inference, rather than focusing on its theoretical and philosophical justifications. Readers are empowered to participate in the real-life data analysis situations depicted here from the beginning. The stakes are high and the reader determines the outcome. Special attention is paid to the derivation of prior distributions in each case, and specific reference solutions are given for each of the models. Similarly, computational details are worked out to lead the reader towards an effective programming of the methods given in the book. In particular, all R codes are discussed with enough detail to make them readily understandable and expandable. This works in conjunction with the bayess package. Bayesian Essentials with R can be used as a textbook at both undergraduate and graduate levels, as exemplified by courses given at Universite Paris Dauphine (France), University of Canterbury (New Zealand), and University of British Columbia (Canada). It is particularly useful for students in professional degree programs and for scientists wishing to analyze data the Bayesian way. The text will also enhance introductory courses on Bayesian statistics. Prerequisites for the book are an undergraduate background in probability and statistics, if not in Bayesian statistics. A strength of the text is the noteworthy emphasis on the role of models in statistical analysis. This is the new, fully-revised edition of the book Bayesian Core: A Practical Approach to Computational Bayesian Statistics. Jean-Michel Marin is Professor of Statistics at Universite Montpellier 2, France, and Head of the Mathematics and Modelling research unit.
He has written over 40 papers on Bayesian methodology and computing, as well as worked closely with population geneticists over the past ten years. Christian Robert is Professor of Statistics at Universite Paris-Dauphine, France. He has written over 150 papers on Bayesian statistics and computational methods and is the author or co-author of seven books on those topics, including The Bayesian Choice (Springer, 2001), winner of the ISBA DeGroot Prize in 2004. He is a Fellow of the Institute of Mathematical Statistics, the Royal Statistical Society and the American Statistical Association. He has been co-editor of the Journal of the Royal Statistical Society, Series B, and has served on the editorial boards of the Journal of the American Statistical Association, the Annals of Statistics, Statistical Science, and Bayesian Analysis. He is also a recipient of an Erskine Fellowship from the University of Canterbury (NZ) in 2006 and a senior member of the Institut Universitaire de France (2010-2015).
This book provides new insights on the study of global environmental changes using ecoinformatics tools and the adaptive-evolutionary technology of geoinformation monitoring. The main advantage of this book is that it gathers and presents extensive interdisciplinary expertise in the parameterization of global biogeochemical cycles and other environmental processes in the context of globalization and sustainable development. In this regard, the crucial global problems concerning the dynamics of the nature-society system are considered and the key problems of ensuring the system's sustainable development are studied. A new approach to the numerical modeling of the nature-society system is proposed, and results are provided on modeling the dynamics of the system's characteristics with regard to scenarios of anthropogenic impacts on biogeochemical cycles, land ecosystems and oceans. The main purpose of this book is to develop a universal guide to information-modeling technologies for assessing the function of environmental subsystems under various climatic and anthropogenic conditions.
The contributions gathered here provide an overview of current research projects and selected software products of the Fraunhofer Institute for Algorithms and Scientific Computing SCAI. They show the wide range of challenges that scientific computing currently faces, the solutions it offers, and its important role in developing applications for industry. Given the exciting field of applied collaborative research and development it discusses, the book will appeal to scientists, practitioners, and students alike. The Fraunhofer Institute for Algorithms and Scientific Computing SCAI combines excellent research and application-oriented development to provide added value for its partners. SCAI develops numerical techniques, parallel algorithms and specialized software tools to support and optimize industrial simulations. Moreover, it implements custom software solutions for production and logistics, and offers calculations on high-performance computers. Its services and products are based on state-of-the-art methods from applied mathematics and information technology.
This book provides a groundbreaking introduction to likelihood inference for correlated survival data via the hierarchical (or h-) likelihood in order to obtain the (marginal) likelihood and to address the computational difficulties in inferences and extensions. The approach presented in the book overcomes shortcomings in the traditional likelihood-based methods for clustered survival data, such as intractable integration. The text includes technical materials such as derivations and proofs in each chapter, as well as recently developed software programs in R ("frailtyHL"), while the real-world data examples together with an R package, "frailtyHL" in CRAN, provide readers with useful hands-on tools. Reviewing new developments since the introduction of the h-likelihood to survival analysis (methods for interval estimation of the individual frailty and for variable selection of the fixed effects in the general class of frailty models) and guiding future directions, the book is of interest to researchers in the medical and genetics fields, graduate students, and PhD (bio)statisticians.
This book introduces the ade4 package for R, which provides multivariate methods for the analysis of ecological data. It is implemented around the mathematical concept of the duality diagram, and provides a unified framework for multivariate analysis. The authors offer a detailed presentation of the theoretical framework of the duality diagram and also of its application to real-world ecological problems. These two goals may seem contradictory, as they concern two separate groups of scientists, namely statisticians and ecologists. However, statistical ecology has become a scientific discipline of its own, and the good use of multivariate data analysis methods by ecologists implies a fair knowledge of the mathematical properties of these methods. The organization of the book is based on ecological questions, but these questions correspond to particular classes of data analysis methods. The first chapters present both usual and multiway data analysis methods. Further chapters are dedicated, for example, to the analysis of spatial data, phylogenetic structures, and biodiversity patterns. One chapter deals with multivariate data analysis graphs. In each chapter, the basic mathematical definitions of the methods and the outputs of the R functions available in ade4 are detailed in two different boxes. The text of the book itself can be read independently from these boxes. Thus the book offers the opportunity to find information about the ecological situation from which a question arises, alongside the mathematical properties of the methods that can be applied to answer it, as well as the details of software outputs. Each example and all the graphs in this book come with executable R code.