IBM SPSS for Introductory Statistics is designed to help students learn how to analyze and interpret research. In easy-to-understand language, the authors show readers how to choose the appropriate statistic based on the design and how to interpret the output appropriately. SPSS offers such a wide variety of options and statistics that knowing which ones to use and how to interpret the output can be difficult; this book assists students with these challenges. Comprehensive and user-friendly, the book prepares readers for each step in the research process: design, entering and checking data, testing assumptions, assessing reliability and validity, computing descriptive and inferential parametric and nonparametric statistics, and writing about results. Dialog windows and SPSS syntax, along with the output, are provided. Several realistic data sets, available online, are used to solve the chapter problems. This new edition includes updated screenshots and instructions for IBM SPSS 25, as well as updated pedagogy, such as callout boxes in each chapter highlighting crucial elements of APA style and the referencing of outputs. IBM SPSS for Introductory Statistics is an invaluable supplementary (or lab) text for students. In addition, this book and its companion, IBM SPSS for Intermediate Statistics, are useful guides and reminders for faculty and professionals regarding the specific steps to take in SPSS and how to use and interpret parts of SPSS with which they are unfamiliar.
This book covers the needs of scientists - be they mathematicians, physicists, chemists or engineers - in terms of symbolic computation, and allows them to locate quickly, via a detailed table of contents and index, the method they require for the precise problem they are addressing. It requires no prior experience of symbolic computation, nor specialized mathematical knowledge, and provides quick access to the practical use of symbolic computation software. The organization of the book into mutually independent chapters, each focusing on a specific topic, allows the user to select what is of interest without necessarily reading everything.
The most widely used statistical method in seasonal adjustment is without doubt the one implemented in the X-11 Variant of the Census Method II Seasonal Adjustment Program. Developed at the US Bureau of the Census in the 1950s and 1960s, this computer program has undergone numerous modifications and improvements, leading especially to the X-11-ARIMA software packages in 1975 and 1988 and to X-12-ARIMA, the first beta version of which is dated 1998. While these software packages integrate, to varying degrees, parametric methods, and especially the ARIMA models popularized by Box and Jenkins, they remain in essence very close to the initial X-11 method, and it is this "core" that Seasonal Adjustment with the X-11 Method focuses on. With a preface by Allan Young, the authors document the seasonal adjustment method implemented in the X-11-based software. It will be an important reference for government agencies, macroeconomists, and other serious users of economic data. After some historical notes, the authors outline the X-11 methodology. One chapter is devoted to the study of moving averages, with an emphasis on those used by X-11. Readers will also find a complete example of seasonal adjustment and a detailed picture of all the calculations. The linear regression models used for trading-day effects and the process of detecting and correcting extreme values are studied in the example. The estimation of the Easter effect is dealt with in a separate chapter insofar as the models used in X-11-ARIMA and X-12-ARIMA are appreciably different. Dominique Ladiray is an Administrateur at the French Institut National de la Statistique et des Etudes Economiques. He is also a Professor at the Ecole Nationale de la Statistique et de l'Administration Economique and at the Ecole Nationale de la Statistique et de l'Analyse de l'Information. He currently works on short-term economic analysis. Benoît Quenneville is a methodologist with Statistics Canada's Time Series Research and Analysis Centre. He holds a Ph.D. from the University of Western Ontario. His research interests are in time series analysis with an emphasis on official statistics.
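As a rough illustration of the moving-average decomposition idea at the heart of X-11, here is a bare-bones Python sketch on a simulated monthly series. It is not the X-11 algorithm itself (which adds Henderson and composite filters, asymmetric end-point treatment, extreme-value correction, and calendar adjustments), and the series, window lengths, and additive form are assumptions made for the example.

```python
import numpy as np
import pandas as pd

# Simulated monthly series: trend + annual seasonal pattern + noise (illustrative data only)
rng = np.random.default_rng(0)
index = pd.date_range("2015-01-31", periods=72, freq="M")
trend = np.linspace(100, 130, 72)
seasonal = 10 * np.sin(2 * np.pi * np.arange(72) / 12)
series = pd.Series(trend + seasonal + rng.normal(0, 2, 72), index=index)

# A centered 12-term moving average approximates the trend-cycle component
trend_cycle = series.rolling(window=12, center=True).mean()

# Detrended values averaged by calendar month give crude additive seasonal factors
detrended = series - trend_cycle
seasonal_factors = detrended.groupby(detrended.index.month).transform("mean")

# Seasonally adjusted series = original minus estimated seasonal component
adjusted = series - seasonal_factors
print(adjusted.head(15))
```

X-11 iterates refinements of exactly this kind of decomposition with carefully chosen moving averages, which is what the book documents in detail.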
This book offers a detailed application guide to XploRe - an interactive statistical computing environment. As a guide it contains case studies of real data analysis situations, helping the beginner in statistical data analysis learn how XploRe works in real-life applications. Many examples from practice are discussed and analysed in full. Great emphasis is placed on a graphics-based understanding of the data interrelations. The case studies include survival modelling with Cox's proportional hazards regression, vitamin C data analysis with quantile regression, and many others.
MATLAB, a software package developed by MathWorks, Inc., is a powerful, versatile and interactive environment for scientific and technical computation, including simulation. Specialised toolboxes, each providing several built-in functions, are a special feature of MATLAB.
Get started with an accelerated introduction to the R ecosystem, programming language, and tools, including R script and RStudio. Utilizing many examples and projects, this book teaches you how to get data into R and how to work with that data. Once grounded in the fundamentals, the rest of Practical R 4 dives into specific projects and examples, starting with running and analyzing a survey using R and LimeSurvey. Next, you'll carry out advanced statistical analysis using R and MouselabWeb. Then, you'll see how R can work for you without statistics, including how R can be used to automate data formatting, manipulation, reporting, and custom functions. The final part of this book discusses using R on a server: you'll build a script with R that can run an RStudio Server and monitor a report source for changes, alerting the user by regular email and push notification when something has changed. Finally, you'll use R to create a customized daily rundown report of a person's most important information, such as a weather report, daily calendar, to-dos and more, and automate the process so that every morning the user navigates to the same web page and gets the updated report. What you will learn: set up and run an R script, including installation on a new machine and downloading and configuring R; turn any machine into a powerful data analytics platform accessible from anywhere with RStudio Server; write basic R scripts and modify existing scripts to suit your own needs; create basic HTML reports in R, inserting information as needed; and build a basic R package and distribute it. Who this book is for: readers with some prior exposure to statistics, programming, and perhaps SAS, although none of these is required.
It is generally accepted that training in statistics must include some exposure to the mechanics of computational statistics. This learning guide is intended for beginners in computer-aided statistical data analysis. The prerequisites for XploRe - the statistical computing environment - are an introductory course in statistics or mathematics. The reader of this book should be familiar with basic elements of matrix algebra and the use of HTML browsers. This guide is designed to help students to XploRe their data, to learn (via data interaction) about statistical methods and to disseminate their findings via the HTML outlet. The XploRe APSS (Auto Pilot Support System) is a powerful tool for finding the appropriate statistical technique (quantlet) for the data under analysis. Homogeneous quantlets are combined in XploRe into quantlibs. The XploRe language is intuitive, and users with prior experience of other statistical programs will find it easy to reproduce the examples explained in this guide. The quantlets in this guide are available on the CD-ROM as well as on the Internet. The statistical operations that the student is guided into range from basic one-dimensional data analysis to more complicated tasks such as time series analysis, multivariate graphics construction, microeconometrics, panel data analysis, etc. The guide starts with a simple data analysis of pullover sales data, then introduces graphics. The graphics are interactive and cover a wide range of displays of statistical data.
This book contains the proceedings of the 12th International Conference on Theorem Proving in Higher Order Logics (TPHOLs'99), which was held in Nice at the University of Nice-Sophia Antipolis, September 14-17, 1999. Thirty-five papers were submitted as completed research, and each of them was refereed by at least three reviewers appointed by the program committee. Twenty papers were selected for publication in this volume. Following a well-established tradition in this series of conferences, a number of researchers also came to discuss work in progress, using short talks and displays at a poster session. These papers are included in a supplementary proceedings volume, which takes the form of a book published by INRIA in its series of research reports, under the title Theorem Proving in Higher Order Logics: Emerging Trends 1999. The organizers were pleased that Dominique Bolignano, Arjeh Cohen, and Thomas Kropf accepted invitations to be guest speakers for TPHOLs'99. For several years, D. Bolignano has been the leader of the VIP team in the Dyade consortium between INRIA and Bull and is now at the head of a company, Trusted Logic. His team has been concentrating on the use of formal methods for the effective verification of security properties for protocols used in electronic commerce. A. Cohen has had a key influence on the development of computer algebra in The Netherlands, and his contribution has been of particular importance to researchers interested in combining the several known methods of using computers to perform mathematical investigations. T. Kropf is an important actor in the Europe-wide project PROSPER, which aims to deliver the benefits of mechanized formal analysis to system builders in industry.
Increasing the designer's confidence that a piece of software or hardware is compliant with its specification has become a key objective in the design process for software and hardware systems. Many approaches to reaching this goal have been developed, including rigorous specification, formal verification, automated validation, and testing. Finite-state model checking, as it is supported by the explicit-state model checker SPIN, is enjoying constantly increasing popularity in automated property validation of concurrent, message-based systems. SPIN has been in large parts implemented and is being maintained by Gerard Holzmann, and is freely available via ftp from netlib.bell-labs.com or from the URL http://cm.bell-labs.com/cm/cs/what/spin/Man/README.html. The beauty of finite-state model checking lies in the possibility of building "push-button" validation tools. When the state space is finite, the state-space traversal will eventually terminate with a definite verdict on the property that is being validated. Equally helpful is the fact that in case the property is invalidated the model checker will return a counterexample, a feature that greatly facilitates fault identification. On the downside, the time it takes to obtain a verdict may be very long if the state space is large, and the type of properties that can be validated is restricted to a logic of rather limited expressiveness.
This compact introduction to Mathematica, accessible to beginners at all levels, presents the basic elements of the latest version 3 (front end, kernel, standard packages). Using examples and exercises not specific to a scientific area, it teaches readers how to effectively solve problems in their own field. The cross-platform CD-ROM contains the entire book in the form of Mathematica notebooks, including color graphics, animations, and hyperlinks, plus the program MathReader.
This unusual introduction to Maple shows readers how Maple or any other computer algebra system fits naturally into a mathematically oriented work environment. Designed for mathematicians, engineers, econometricians, and other scientists, this book shows how computer algebra can enhance their theoretical work. A CD-ROM contains all the Maple worksheets presented in the book.
This volume contains the Keynote, Invited and Full Contributed papers presented at COMPSTAT'98. A companion volume (Payne & Lane, 1998) contains papers describing the Short Communications and Posters. COMPSTAT is a one-week conference held every two years under the auspices of the International Association for Statistical Computing, a section of the International Statistical Institute. COMPSTAT'98 is organised by IACR-Rothamsted, IACR-Long Ashton, the University of Bristol Department of Mathematics and the University of Bath Department of Mathematical Sciences. It is taking place from 24-28 August 1998 at the University of Bristol. Previous COMPSTATs (from 1974-1996) were held in Vienna, Berlin, Leiden, Edinburgh, Toulouse, Prague, Rome, Copenhagen, Dubrovnik, Neuchatel, Vienna and Barcelona. The conference is the main European forum for developments at the interface between statistics and computing, as encapsulated in the COMPSTAT'98 Call for Papers: statistical computing provides the link between statistical theory and applied statistics. The scientific programme of COMPSTAT ranges over all aspects of this link, from the development and implementation of new computer-based statistical methodology through to innovative applications and software evaluation. The programme should appeal to anyone working in statistics and using computers, whether in universities, industrial companies, research institutes or as software developers.
This upper-division laboratory supplement for courses in abstract algebra consists of several Mathematica packages programmed as a foundation for group and ring theory. Additionally, the "user's guide" illustrates the functionality of the underlying code, while the lab portion of the book reflects the contents of the Mathematica-based electronic notebooks. Students interact with both the printed and electronic versions of the material in the laboratory, and can look up details and reference information in the user's guide. Exercises occur in the stream of the lab text, which provides a context within which to answer them, and the questions are designed to be answered either in the electronic notebook or on paper. The notebooks are available in both 2.2 and 3.0 versions of Mathematica, and run across all platforms for which Mathematica exists. It is a very timely and unique addition to the undergraduate abstract algebra curriculum, filling a tremendous void in the literature.
S+SPATIALSTATS is the first comprehensive, object-oriented package for the analysis of spatial data. Providing a whole new set of analysis tools, S+SPATIALSTATS was created specifically for the exploration and modeling of spatially correlated data. It can be used to analyze data arising in areas such as environmental, mining, and petroleum engineering, natural resources, geography, epidemiology, demography, and others where data are sampled spatially. This user's manual provides the documentation for the S+SPATIALSTATS module.
This companion to The New Statistical Analysis of Data by Anderson and Finn provides a hands-on guide to data analysis using SPSS. Included with this guide are instructions for obtaining the data sets to be analysed via the World Wide Web. First, the authors provide a brief review of using SPSS; then, following the organisation of The New Statistical Analysis of Data, readers participate in analysing many of the data sets discussed in the book. In so doing, students learn how to conduct reasonably sophisticated statistical analyses using SPSS while at the same time gaining insight into the nature and purpose of statistical investigation.
Maple V Mathematics Programming Guide is the fully updated language and programming reference for Maple V Release 5. It presents a detailed description of Maple V Release 5 - the latest release of the powerful, interactive computer algebra system used worldwide as a tool for problem-solving in mathematics, the sciences, engineering, and education. This manual describes the use of both numeric and symbolic expressions, the data types available, and the programming language statements in Maple. It shows how the system can be extended or customized through user defined routines and gives complete descriptions of the system's user interface and 2D and 3D graphics capabilities.
Applied Predictive Modeling covers the overall predictive modeling process, beginning with the crucial steps of data preprocessing, data splitting and foundations of model tuning. The text then provides intuitive explanations of numerous common and modern regression and classification techniques, always with an emphasis on illustrating and solving real data problems. The text illustrates all parts of the modeling process through many hands-on, real-life examples, and every chapter contains extensive R code for each step of the process. This multi-purpose text can be used as an introduction to predictive models and the overall modeling process, a practitioner's reference handbook, or as a text for advanced undergraduate or graduate level predictive modeling courses. To that end, each chapter contains problem sets to help solidify the covered concepts and uses data available in the book's R package. This text is intended for a broad audience as both an introduction to predictive models as well as a guide to applying them. Non-mathematical readers will appreciate the intuitive explanations of the techniques while an emphasis on problem-solving with real data across a wide variety of applications will aid practitioners who wish to extend their expertise. Readers should have knowledge of basic statistical ideas, such as correlation and linear regression analysis. While the text is biased against complex equations, a mathematical background is needed for advanced topics.
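As a hedged illustration of the preprocessing, data-splitting, and model-tuning workflow the description refers to (the book itself works in R; the data set, model, and parameter grid below are arbitrary stand-ins, not taken from the book), a minimal Python/scikit-learn sketch might look like this:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A built-in regression data set used purely as a stand-in example
X, y = load_diabetes(return_X_y=True)

# Data splitting: hold out a test set before any preprocessing or tuning decisions
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Preprocessing and model combined in one pipeline, so the scaler is fit on training folds only
model = make_pipeline(StandardScaler(), Ridge())

# Model tuning: cross-validated grid search over the regularization strength
search = GridSearchCV(model, param_grid={"ridge__alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Held-out R^2:", round(search.score(X_test, y_test), 3))
```

Keeping the scaler inside the pipeline mirrors the book's emphasis on doing preprocessing within resampling so that the held-out data never leaks into tuning.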
Mathematica (R) in the Laboratory is a hands-on guide which shows how to harness the power and flexibility of Mathematica in the control of data-acquisition equipment and the analysis of experimental data. It explains how to use Mathematica to import, manipulate, visualise and analyse data from existing files. The generation and export of test data are also covered. The control of laboratory equipment is dealt with in detail, including the use of Mathematica's MathLink (R) system in instrument control, data processing, and interfacing. Many practical examples are given, which can either be used directly or adapted to suit a particular application. The book sets out clearly how Mathematica can provide a truly unified data-handling environment, and will be invaluable to anyone who collects or analyses experimental data, including astronomers, biologists, chemists, mathematicians, geologists, physicists and engineers. The book is fully compatible with Mathematica 3.0.
COMPSTAT symposia have been held regularly since 1974, when they started in Vienna. This tradition has made COMPSTAT a major forum for the interplay of statistics and computer science, with contributions from many well-known scientists from all over the world. The scientific programme of COMPSTAT '96 covers all aspects of this interplay, from user experiences and the evaluation of software through to the development and implementation of new statistical ideas. All papers presented belong to one of the three following categories: statistical methods (preferably new ones) that require a substantial use of computing; computer environments, tools and software useful in statistics; and applications of computational statistics in areas of substantial interest (environment, health, industry, biometrics, etc.).
Using a visual data analysis approach, wavelet concepts are explained in a way that is intuitive and easy to understand. Furthermore, in addition to wavelets, a whole range of related signal processing techniques such as wavelet packets, local cosine analysis, and matching pursuits are covered, and applications of wavelet analysis are illustrated, including nonparametric function estimation, digital image compression, and time-frequency signal analysis. This book and software package is intended for a broad range of data analysts, scientists, and engineers. While most textbooks on the subject presuppose advanced training in mathematics, this book merely requires that readers be familiar with calculus and linear algebra at the undergraduate level.
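A minimal sketch of the kind of wavelet analysis described above, written here in Python with the PyWavelets package rather than the book's own software; the signal, wavelet choice, and threshold are illustrative assumptions, not examples from the book.

```python
import numpy as np
import pywt

# Illustrative noisy signal: a low-frequency sine plus Gaussian noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.normal(size=t.size)

# Multilevel discrete wavelet decomposition with a Daubechies-4 wavelet
coeffs = pywt.wavedec(signal, "db4", level=4)

# Soft-threshold the detail coefficients (a simple form of nonparametric denoising)
threshold = 0.2
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]

# Reconstruct the denoised signal from the thresholded coefficients
denoised = pywt.waverec(denoised_coeffs, "db4")
print(signal.shape, denoised.shape)
```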
A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design).
Now in its second edition, this textbook provides an introduction to Python and its use for statistical data analysis. It covers common statistical tests for continuous, discrete and categorical data, as well as linear regression analysis and topics from survival analysis and Bayesian statistics. For this new edition, the introductory chapters on Python, data input and visualization have been reworked and updated. The chapter on experimental design has been expanded, and programs for the determination of confidence intervals commonly used in quality control have been introduced. The book also features a new chapter on finding patterns in data, including time series. A new appendix describes useful programming tools, such as testing tools, code repositories, and GUIs. The provided working code for Python solutions, together with easy-to-follow examples, will reinforce the reader's immediate understanding of the topic. Accompanying data sets and Python programs are also available online. With recent advances in the Python ecosystem, Python has become a popular language for scientific computing, offering a powerful environment for statistical data analysis. With examples drawn mainly from the life and medical sciences, this book is intended primarily for master's and PhD students. As it provides the required statistics background, the book can also be used by anyone who wants to perform a statistical data analysis.
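For a flavour of the kind of analysis such a book covers, here is a minimal Python sketch of a two-sample t-test and a confidence interval using NumPy and SciPy; the simulated groups and effect size are invented for illustration and do not come from the book's data sets.

```python
import numpy as np
from scipy import stats

# Simulated measurements for two groups (illustrative data only)
rng = np.random.default_rng(42)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.normal(loc=5.6, scale=1.0, size=30)

# Two-sample t-test for a difference in means
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# 95% confidence interval for the mean of group_b, based on the t distribution
ci_low, ci_high = stats.t.interval(0.95, df=len(group_b) - 1,
                                   loc=np.mean(group_b), scale=stats.sem(group_b))
print(f"95% CI for mean of group_b: ({ci_low:.2f}, {ci_high:.2f})")
```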
The book, which contains over two hundred illustrations, is designed for use in school computer labs or with home computers, running the computer algebra system Maple, or its student version. It supports the interactive Maple worksheets, which the authors have developed and which are available free of charge via anonymous ftp (ftp.utirc.utoronto.ca (/pub/ednet/maths/maple)). The book addresses readers who are learning calculus at a pre-university level.
The emphasis of the book is on how to construct different types of solutions (exact, approximate analytical, numerical, graphical) of numerous nonlinear PDEs correctly, easily, and quickly. The reader can learn a wide variety of techniques for solving the numerous nonlinear PDEs included in the book and many other differential equations, and for simplifying and transforming the equations, solutions, arbitrary functions and parameters presented in the book. Numerous comparisons and relationships between various types of solutions and between different methods and approaches are provided, and the results obtained in Maple and Mathematica are compared, which facilitates a deeper understanding of the subject. Among the large number of computer algebra systems available, the authors choose two, Maple and Mathematica, that are used worldwide by students, research mathematicians, scientists, and engineers. As in their previous books, they propose using both systems in parallel, since in many research problems it is frequently necessary to compare independent results obtained with different computer algebra systems at all stages of the solution process. One of the main points (related to CAS) is the implementation of a whole solution method, e.g. starting from an analytical derivation of the exact governing equations, constructing discretizations and analytical formulas of a numerical method, performing the numerical procedure, obtaining various visualizations, and comparing the numerical solution with other types of solutions considered in the book, e.g. an asymptotic solution.
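As a small, hedged illustration of verifying an exact solution of a nonlinear PDE with a computer algebra system (here in Python's SymPy rather than Maple or Mathematica, and using the standard KdV soliton as an example of my own choosing, not necessarily one treated in the book):

```python
import sympy as sp

x, t, c = sp.symbols("x t c", positive=True)

# Candidate exact (soliton) solution of the KdV equation u_t + 6*u*u_x + u_xxx = 0
u = (c / 2) * sp.sech(sp.sqrt(c) / 2 * (x - c * t)) ** 2

# Substitute into the equation and simplify the residual; it should reduce to 0
residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual.rewrite(sp.exp)))
```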
These lecture notes provide a rapid, accessible introduction to Bayesian statistical methods. The course covers the fundamental philosophy and principles of Bayesian inference, including the reasoning behind the prior/likelihood model construction synonymous with Bayesian methods, through to advanced topics such as nonparametrics, Gaussian processes and latent factor models. These advanced modelling techniques can easily be applied using computer code samples written in Python and Stan, which are integrated into the main text. Importantly, the reader will learn methods for assessing model fit and for choosing between rival modelling approaches.
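As a tiny, self-contained illustration of the prior/likelihood reasoning mentioned above (written in plain Python with SciPy rather than Stan; the prior and data are invented for the example), a conjugate beta-binomial update looks like this:

```python
from scipy import stats

# Prior belief about a success probability: Beta(2, 2) (an assumption for this example)
a_prior, b_prior = 2, 2

# Observed data: 7 successes out of 10 trials (illustrative numbers)
successes, trials = 7, 10

# Conjugacy: the posterior is Beta(a + successes, b + failures)
a_post = a_prior + successes
b_post = b_prior + (trials - successes)
posterior = stats.beta(a_post, b_post)

print("Posterior mean:", round(posterior.mean(), 3))
print("95% credible interval:", tuple(round(v, 3) for v in posterior.interval(0.95)))
```

More realistic models rarely admit closed-form posteriors like this, which is why tools such as Stan, used throughout the notes, rely on sampling instead.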