Welcome to Loot.co.za!
Introduction to Real World Statistics provides students with the basic concepts and practices of applied statistics, including data management and preparation; an introduction to the concept of probability; data screening and descriptive statistics; various inferential analysis techniques; and a series of exercises that are designed to integrate core statistical concepts. The author's systematic approach, which assumes no prior knowledge of the subject, equips student practitioners with a fundamental understanding of applied statistics that can be deployed across a wide variety of disciplines and professions. Notable features include: short, digestible chapters that build and integrate statistical skills with real-world applications, demonstrating the flexible use of statistics for evidence-based decision-making; statistical procedures presented in a practical context with less emphasis on technical jargon; early chapters that build a foundation before presenting statistical procedures; detailed step-by-step SPSS instructions designed to reinforce student understanding; real-world exercises complete with answers; and chapter PowerPoints and test banks for instructors.
Sharpening Your Advanced SAS (R) Skills presents sophisticated SAS programming techniques, procedures, and tools, such as Proc SQL, hash tables, and SAS Macro programming, for any industry. Drawing on his more than 20 years' experience of SAS programming in the pharmaceutical industry, the author provides a unique approach that empowers both advanced programmers who need a quick refresher and programmers interested in learning new techniques. The book helps you easily search for key points by summarizing and differentiating the syntax between similar SAS statements and options. Each chapter begins with an overview so you can quickly locate the detailed examples and syntax. The basic syntax, expected data, and descriptions are organized in summary tables to facilitate better memory recall. General rules list common points about similar statements or options. Real-world examples of SAS programs and code statements are line numbered with references, such as SAS papers and websites, for more detailed explanations. The text also includes end-of-chapter questions to reinforce your knowledge of the topics and prepare you for the advanced SAS certification exam. In addition, the author's website offers mindmaps and process flowcharts that connect concepts and relationships.
A friendly, straightforward guide that does not assume knowledge of programming, this book helps new R users hit the ground running. Eric L. Einspruch provides an overview of the software and shows how to download and install R, RStudio, and R packages. Featuring example code, screenshots, tips, learning exercises, and worked-through examples of statistical techniques, the book demonstrates the capabilities and nuances of these powerful free statistical analysis and data visualization tools. Fundamental aspects of data wrangling, analysis, visualization, and reporting are introduced, using both Base R and Tidyverse approaches. Einspruch emphasizes processes that support research reproducibility, such as use of comments to document R code and use of R Markdown capabilities. The book also helps readers navigate the vast array of R resources available to further develop their skills.
Machine learning methods are now an important tool for scientists, researchers, engineers and students in a wide range of areas. This book is written for people who want to adopt and use the main tools of machine learning, but aren't necessarily going to be machine learning researchers. Intended for students in final-year undergraduate or first-year graduate computer science programs in machine learning, this textbook is a machine learning toolkit. Applied Machine Learning covers many topics for people who want to use machine learning processes to get things done, with a strong emphasis on using existing tools and packages, rather than writing one's own code. A companion to the author's Probability and Statistics for Computer Science, this book picks up where the earlier book left off (but also supplies a summary of probability that the reader can use). Emphasizing the usefulness of standard machinery from applied statistics, this textbook gives an overview of the major applied areas in learning, including coverage of:
* classification using standard machinery (naive Bayes; nearest neighbor; SVM)
* clustering and vector quantization (largely as in PSCS)
* PCA (largely as in PSCS)
* variants of PCA (NIPALS; latent semantic analysis; canonical correlation analysis)
* linear regression (largely as in PSCS)
* generalized linear models, including logistic regression
* model selection with Lasso, elastic net
* robustness and m-estimators
* Markov chains and HMMs (largely as in PSCS)
* EM in fairly gory detail; long experience teaching this suggests one detailed example is required, which students hate, but once they've been through that, the next one is easy
* simple graphical models (in the variational inference section)
* classification with neural networks, with a particular emphasis on image classification
* autoencoding with neural networks
* structure learning
This is a new edition of the accessible and student-friendly 'how to' for anyone using R for the first time, for use in spatial statistical analysis, geocomputation and digital mapping. The authors, once again, take readers from 'zero to hero', updating the now standard text to further enable practical R applications in GIS, spatial analyses, spatial statistics, web-scraping and more. Revised and updated, each chapter includes: example data and commands to explore hands-on; scripts and coding to exemplify specific functionality; self-contained exercises for students to work through; and embedded code within the descriptive text. The new edition includes detailed discussion of new and emerging R packages such as sf, ggplot2 and tmap, making it the go-to introduction to the use of R for spatial statistical analysis, geocomputation and GIS for all researchers - regardless of discipline - collecting and using data with location attached.
This book focuses on data science. It includes plenty of actual examples of the typical data processing and data presentation required of a professional data scientist. The material will be especially useful to the growing profession of data scientists. As a practitioner, the author brings a practical view of the topic, with a hands-on presentation that will be particularly useful to other practitioners. The book also concentrates on the current generation of R packages that have added considerable capability to R, including Hadley Wickham's suite of packages, such as tidyr, dplyr, lubridate, stringr, and ggplot2.
"This would be an excellent book for undergraduate, graduate and beyond.... The style of writing is easy to read and the author does a good job of adding humor in places. The integration of basic programming in R with the data that is collected for any experiment provides a powerful platform for analysis of data.... having the understanding of data analysis that this book offers will really help researchers examine their data and consider its value from multiple perspectives - and this applies to people who have small AND large data sets alike! This book also helps people use a free and basic software system for processing and plotting simple to complex functions." - Michelle Pantoya, Texas Tech University
Quantities that vary in a continuous fashion, e.g., the pressure of a gas, cannot be measured exactly; there will always be some uncertainty in the measured values, so it is vital for researchers to be able to quantify this uncertainty. Uncertainty Analysis of Experimental Data with R covers methods for evaluating uncertainties in experimental data, as well as in predictions made using these data, with implementation in R. The book discusses both basic and more complex methods, including linear regression, nonlinear regression, and kernel smoothing curve fits, as well as Taylor series, Monte Carlo and Bayesian approaches. Features: 1. Extensive use of modern open source software (R). 2. Many code examples are provided. 3. The uncertainty analyses conform to accepted professional standards (ASME). 4. The book is self-contained and includes all necessary material, including chapters on statistics and programming in R.
Benjamin D. Shaw is a professor in the Mechanical and Aerospace Engineering Department at the University of California, Davis. His research interests are primarily in experimental and theoretical aspects of combustion. Along with other courses, he has taught undergraduate and graduate courses on engineering experimentation and uncertainty analysis. He has published widely in archival journals and became an ASME Fellow in 2003.
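The book's examples are in R; as a language-neutral illustration of the Monte Carlo approach it describes, here is a minimal Python sketch (standard library only; the function f and the measurement values are made-up for illustration) that propagates a measurement uncertainty through a calculation and compares the result with the first-order Taylor-series prediction:

```python
import random
import statistics

random.seed(42)

def f(x):
    """Hypothetical reported quantity computed from the measurement x."""
    return x ** 2

# Suppose x is measured as 10.0 with standard uncertainty 0.1.
# Draw many plausible measurement values and push each through f.
samples = [f(random.gauss(10.0, 0.1)) for _ in range(100_000)]

y_mean = statistics.mean(samples)
y_std = statistics.stdev(samples)

# First-order Taylor-series propagation predicts
# u_y ~ |df/dx| * u_x = 2 * 10.0 * 0.1 = 2.0,
# and the Monte Carlo estimate of u_y should land close to that.
print(f"y = {y_mean:.2f} +/- {y_std:.2f}")
```

The same two-line recipe (sample the inputs, summarize the outputs) extends to functions of several correlated inputs, which is where Monte Carlo earns its keep over hand-derived Taylor expansions.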
Many professional, high-quality surveys collect data on people's behaviour, experiences, lifestyles and attitudes. The data they produce is more accessible than ever before. This book provides students with a comprehensive introduction to using this data, as well as transactional data and big data sources, in their own research projects. Here you will find all you need to know about locating, accessing, preparing and analysing secondary data, along with step-by-step instructions for using IBM SPSS Statistics. You will learn how to:
- Create a robust research question and design that suits secondary analysis
- Locate, access and explore data online
- Understand data documentation
- Check and 'clean' secondary data
- Manage and analyse your data to produce meaningful results
- Replicate analyses of data in published articles and books
Using case studies and video animations to illustrate each step of your research, this book provides you with the quantitative analysis skills you'll need to pass your course, complete your research project and compete in the job market. Exercises throughout the book and on the book's companion website give you an opportunity to practise, check your understanding and work hands-on with real data as you're learning.
Financial Econometrics Using Stata is an essential reference for graduate students, researchers, and practitioners who use Stata to perform intermediate or advanced methods. After discussing the characteristics of financial time series, the authors provide introductions to ARMA models, univariate GARCH models, multivariate GARCH models, and applications of these models to financial time series. The last two chapters cover risk management and contagion measures. After a rigorous but intuitive overview, the authors illustrate each method by interpreting easily replicable Stata examples.
There is no shortage of incentives to study and reduce poverty in our societies. Poverty is studied in economics and political science, and population surveys are an important source of information about it. The design and analysis of such surveys is principally a statistical subject matter, and the computer is essential for their data compilation and processing. Focusing on the European Union Statistics on Income and Living Conditions (EU-SILC), a programme of annual national surveys that collect data related to poverty and social exclusion, Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R presents a set of statistical analyses pertinent to the general goals of EU-SILC. The contents of the volume are weighted toward computing and statistics, with reduced attention to economics, political science and the other social sciences. The emphasis is on methods and procedures rather than results, because data from annual surveys released after publication will erode the novelty of the data used and the results derived in this volume. The aim of this volume is not to propose specific methods of analysis, but to open up the analytical agenda and address the aspects of the key definitions in the subject of poverty assessment that entail nontrivial elements of arbitrariness. The presented methods do not exhaust the range of analyses suitable for EU-SILC, but will stimulate the search for new methods and the adaptation of established methods that cater to the identified purposes.
In computational science, reproducibility requires that researchers make code and data available to others so that the data can be analyzed in a similar manner as in the original publication. Code must be available to be distributed, data must be accessible in a readable format, and a platform must be available for widely distributing the data and code. In addition, both data and code need to be licensed permissively enough so that others can reproduce the work without a substantial legal burden. Implementing Reproducible Research covers many of the elements necessary for conducting and distributing reproducible research. It explains how to accurately reproduce a scientific result. Divided into three parts, the book discusses the tools, practices, and dissemination platforms for ensuring reproducibility in computational science. It describes:
- Computational tools, such as Sweave, knitr, VisTrails, Sumatra, CDE, and the Declaratron system
- Open source practices, good programming practices, trends in open science, and the role of cloud computing in reproducible research
- Software and methodological platforms, including open source software packages, the RunMyCode platform, and open access journals
Each part presents contributions from leaders who have developed software and other products that have advanced the field. Supplementary material is available at www.ImplementingRR.org.
Making statistics - and statistical software - accessible and rewarding. This book provides readers with step-by-step guidance on running a wide variety of statistical analyses in IBM(R) SPSS(R) Statistics, Stata, and other programs. Author David Kremelberg begins his user-friendly text by covering charts and graphs through regression, time-series analysis, and factor analysis. He provides a background of each method, then explains how to run these tests in IBM SPSS and Stata. He then progresses to more advanced kinds of statistics, such as HLM and SEM, where he describes the tests and explains how to run them in the appropriate software, including HLM and AMOS. This is an invaluable guide for upper-level undergraduate and graduate students across the social and behavioral sciences who need assistance in understanding the various statistical packages.
Introductory Statistics for Health & Nursing using SPSS is an impressive introductory statistics text ideal for all health science and nursing students. Health and nursing students can be anxious and lacking in confidence when it comes to handling statistics. This book has been developed with this readership in mind. This accessible text eschews long and off-putting statistical formulae in favour of non-daunting practical and SPSS-based examples. What's more, its content will fit ideally with the common course content of stats courses in the field. Introductory Statistics for Health & Nursing using SPSS is also accompanied by a companion website containing data-sets and examples for use by lecturers with their students. The inclusion of real-world data and a host of health-related examples should make this an ideal core text for any introductory statistics course in the field.
SPSS syntax is the command language used by SPSS to carry out all of its commands and functions. In this book, Jacqueline Collier introduces the use of syntax to those who have not used it before, or who are taking their first steps in using syntax. Without requiring any knowledge of programming, the text outlines: - how to become familiar with the syntax commands; - how to create and manage the SPSS journal and syntax files; - and how to use them throughout the data entry, management and analysis process. Collier covers all aspects of data management from data entry through to data analysis, including managing the errors and the error messages created by SPSS. Syntax commands are clearly explained and the value of syntax is demonstrated through examples. This book also supports the use of SPSS syntax alongside the usual button- and menu-driven graphical user interface (GUI), using the two methods together in a complementary way. The book is written in such a way as to enable you to pick and choose how much you rely on one method over the other, encouraging you to use them side-by-side, with a gradual increase in use of syntax as your knowledge, skills and confidence develop. This book is ideal for all those carrying out quantitative research in the health and social sciences who can benefit from SPSS syntax's capacity to save time, reduce errors and allow a data audit trail.
Accessibly written and easy to use, Applied Statistics Using SPSS is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. Based around the needs of undergraduate students embarking on their own research project, the text's self-help style is designed to boost the skills and confidence of those that will need to use SPSS in the course of doing their research project. The book is pedagogically well developed and contains many screenshots and exercises, glossary terms and worked examples. Divided into two parts, Applied Statistics Using SPSS covers: 1. A self-study guide for learning how to use SPSS. 2. A reference guide for selecting the appropriate statistical technique and a stepwise do-it-yourself guide for analysing data and interpreting the results. Readers can also download the SPSS data file that is used for most of the examples throughout the book. Geared explicitly for undergraduate needs, this is an easy-to-follow SPSS book that provides a step-by-step guide to research design and data analysis using SPSS.
SPSS for Windows is the most widely used computer package for analyzing quantitative data. In a clear, readable, non-technical style, this book teaches beginners how to use the program, input and manipulate data, use descriptive analyses and inferential techniques, including: t-tests, analysis of variance, correlation and regression, nonparametric techniques, and reliability analysis and factor analysis. The author provides an overview of statistical analysis, and then shows in a simple step-by-step method how to set up an SPSS file in order to run an analysis as well as how to graph and display data. He explains how to use SPSS for all the main statistical approaches you would expect to find in an introductory statistics course. The book is written for users of Versions 6 and 6.1, but will be equally valuable to users of later versions.
Ideal for those already familiar with basic Excel features, this updated Third Edition of Neil J. Salkind's Excel Statistics: A Quick Guide shows readers how to utilize Microsoft (R) Excel's functions and Analysis ToolPak to answer simple and complex questions about data. Part I explores 35 Excel functions, while Part II contains 20 Analysis ToolPak tools. To make it easy to see what each function or tool looks like when applied, at-a-glance two-page spreads describe each function and its use with corresponding screenshots. In addition, actual data files used in the examples are readily available online at an open-access Student Study Site.
A new and refreshingly different approach to presenting the foundations of statistical algorithms, Foundations of Statistical Algorithms: With References to R Packages reviews the historical development of basic algorithms to illuminate the evolution of today's more powerful statistical algorithms. It emphasizes recurring themes in all statistical algorithms, including computation, assessment and verification, iteration, intuition, randomness, repetition and parallelization, and scalability. Unique in scope, the book reviews the upcoming challenge of scaling many of the established techniques to very large data sets and delves into systematic verification by demonstrating how to derive general classes of worst case inputs and emphasizing the importance of testing over a large number of different inputs. Broadly accessible, the book offers examples, exercises, and selected solutions in each chapter as well as access to a supplementary website. After working through the material covered in the book, readers should not only understand current algorithms but also gain a deeper understanding of how algorithms are constructed, how to evaluate new algorithms, which recurring principles are used to tackle some of the tough problems statistical programmers face, and how to take an idea for a new method and turn it into something practically useful.
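The recurring themes named above (iteration, verification against a reference, scalability to data too large for two passes) can be made concrete with a classic example. The sketch below is not taken from the book: it is a minimal Python implementation of Welford's single-pass recurrence for the sample variance, checked against a trusted reference implementation in the spirit of the systematic verification the book emphasizes.

```python
import statistics

def online_variance(xs):
    """Single-pass mean and sample variance via Welford's recurrence.

    Numerically stabler than the naive sum-of-squares formula, and
    streamable: each value is seen once, so it scales to large data.
    """
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n          # running mean
        m2 += delta * (x - mean)   # running sum of squared deviations
    return mean, m2 / (n - 1)

# Verification against a reference implementation on a small input.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean, var = online_variance(data)
```

A fuller test, in the book's spirit, would also probe worst-case inputs, e.g. data with a huge common offset, where the naive formula loses precision but Welford's recurrence does not.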
Since the first edition of this book was published, S-PLUS has evolved markedly with new methods of analysis, new graphical procedures, and a convenient graphical user interface (GUI). Today, S-PLUS is the statistical software of choice for many applied researchers in disciplines ranging from finance to medicine. Combining the command line language and GUI of S-PLUS now makes this book even more suitable for inexperienced users, students, and anyone without the time, patience, or background needed to wade through the many more advanced manuals and texts on the market.
Sage is a free, open-source mathematics software system built on the Python programming language. Its authors, an international community of hundreds of teachers and researchers, have set themselves the mission of providing a viable alternative to Magma, Maple, Mathematica and Matlab. To that end, Sage draws on many existing free software packages, such as GAP, Maxima, PARI and various scientific libraries for Python, to which it adds thousands of new functions. It is available free of charge and runs on the usual operating systems. For secondary-school students, Sage is a formidable scientific and graphing calculator. It effectively assists first-cycle university students with their calculations in analysis, linear algebra, and so on. Further along in the curriculum, as well as for researchers and engineers, Sage offers the most recent algorithms in various branches of mathematics. For this reason, many universities teach Sage from the first cycle onward for practical work and projects. This book is the first general-purpose book on Sage in any language. Co-written by teachers and researchers working at every level (IUT, classes preparatoires, licence, master's, doctorate), it emphasizes the mathematics underlying a good understanding of the software. In that respect, it is closer to a course in effective mathematics illustrated by examples in Sage than to a user's guide or a reference manual. The first part is accessible to licence (undergraduate) students. The content of the later parts is inspired by the syllabus of the modelling examination of the French agregation in mathematics. This book is distributed under a free Creative Commons license. It can be downloaded free of charge from http://sagebook.gforge.inria.fr/.
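Since Sage is built on Python, even plain Python hints at the style of computation Sage extends into full computer algebra. A minimal standard-library sketch (not Sage code, which runs inside Sage itself): exact rational arithmetic, which avoids the rounding inherent in floating point.

```python
from fractions import Fraction

# Partial sum of the harmonic series, kept as an exact rational
# rather than an approximate float: 1 + 1/2 + 1/3 + 1/4 = 25/12.
h4 = sum(Fraction(1, k) for k in range(1, 5))

# Exact arithmetic sidesteps binary floating-point rounding:
# in floats, 0.1 + 0.2 != 0.3, but as rationals the identity holds.
exact = Fraction(1, 10) + Fraction(2, 10)
```

Sage layers symbolic variables, arbitrary-precision arithmetic and thousands of algorithms on top of this exact-computation foundation.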
In this second edition of An Introduction to Stata Programming, the author introduces concepts by providing the background and importance for the topic, presents common uses and examples, then concludes with larger, more applied examples referred to as "cookbook recipes." This is a great reference for anyone who wants to learn Stata programming. For those learning, the author assumes familiarity with Stata and gradually introduces more advanced programming tools. For the more advanced Stata programmer, the book introduces Stata's Mata programming language and optimization routines.
The Statistical Imagination, a basic social science statistics text with illustrations and exercises for sociology, social work, political science, and criminal justice courses, teaches readers that statistics is not just a mathematical exercise; it is a way of analyzing and understanding the social world. Praised for a writing style that takes the anxiety out of statistics courses, the author explains basic statistical principles through a variety of engaging exercises, each designed to illuminate the unique theme of examining society both creatively and logically. In an effort to make the study of statistics relevant to students of the social sciences, the author encourages readers to interpret the results of calculations in the context of more substantive social issues, while continuing to value precise and accurate research. Ritchey begins by introducing students to the essentials of learning statistics: fractions, proportions, percentages, standard deviation, sampling error and sampling distribution, along with other math hurdles, are clearly explained to fill in any math gaps students may bring to the classroom. Treating statistics as a skill learned best by doing, the author supplies a range of student-friendly questions and exercises to both demystify the calculation process and encourage the kind of proportional thinking needed to master the subject. In addition to pencil-and-paper exercises, The Statistical Imagination includes computer-based assignments for use with the free Student Version SPSS 9.0 CD-ROM that accompanies each new copy of the book.
This book highlights recent advances in natural computing, including biology and its theory, bio-inspired computing, computational aesthetics, computational models and theories, computing with natural media, philosophy of natural computing, and educational technology. It presents extended versions of the best papers selected from the "8th International Workshop on Natural Computing" (IWNC8), a symposium held in Hiroshima, Japan, in 2014. The target audience is not limited to researchers working in natural computing but also includes those active in biological engineering, fine/media art design, aesthetics, and philosophy.
Program for data analysis using R and learn practical skills to make your work more efficient. This book covers how to automate running code and the creation of reports to share your results, as well as writing functions and packages. Advanced R is not designed to teach advanced R programming nor the theory behind statistical procedures. Rather, it is designed to be a practical guide to moving beyond merely using R to programming in R to automate tasks. This book will show you how to manipulate data in modern R structures and includes connecting R to databases such as SQLite, PostgreSQL, and MongoDB. The book closes with a hands-on section to get R running in the cloud. Each chapter also includes a detailed bibliography with references to research articles and other resources that cover relevant conceptual and theoretical topics. What You Will Learn:
- Write and document R functions
- Make an R package and share it via GitHub or privately
- Add tests to R code to ensure it works as intended
- Build packages automatically with GitHub
- Use R to talk directly to databases and do complex data management
- Run R in the Amazon cloud
- Generate presentation-ready tables and reports using R
Who This Book Is For: Working professionals, researchers, or students who are familiar with R and basic statistical techniques such as linear regression and who want to learn how to take their R coding and programming to the next level.