This textbook offers an approachable introduction to stochastic processes that explores the four pillars of random walk, branching processes, Brownian motion, and martingales. Building from simple examples, the authors focus on developing context and intuition before formalizing the theory of each topic. This inviting approach illuminates the key ideas and computations in the proofs, forming an ideal basis for further study. Consisting of many short chapters, the book begins with a comprehensive account of the simple random walk in one dimension. From here, different paths may be chosen according to interest. Themes span Poisson processes, branching processes, the Kolmogorov-Chentsov theorem, martingales, renewal theory, and Brownian motion. Special topics follow, showcasing a selection of important contemporary applications, including mathematical finance, optimal stopping, ruin theory, branching random walk, and equations of fluids. Engaging exercises accompany the theory throughout. Random Walk, Brownian Motion, and Martingales is an ideal introduction to the rigorous study of stochastic processes. Students and instructors alike will appreciate the accessible, example-driven approach. A single, graduate-level course in probability is assumed.
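For orientation only (this display is not taken from the book), the simple random walk in one dimension treated in the opening chapters can be summarized in one standard formula, where p denotes the probability of an upward step:

\[
S_n = \sum_{k=1}^{n} X_k, \qquad \Pr(X_k = +1) = p, \quad \Pr(X_k = -1) = 1 - p,
\]

with the steps X_1, X_2, ... independent; the symmetric case p = 1/2 gives the simple symmetric random walk.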
Machine learning methods are now an important tool for scientists, researchers, engineers and students in a wide range of areas. This book is written for people who want to adopt and use the main tools of machine learning, but aren't necessarily going to become machine learning researchers. Intended for students in final-year undergraduate or first-year graduate computer science programs in machine learning, this textbook is a machine learning toolkit. Applied Machine Learning covers many topics for people who want to use machine learning processes to get things done, with a strong emphasis on using existing tools and packages rather than writing one's own code. A companion to the author's Probability and Statistics for Computer Science, this book picks up where the earlier book left off (but also supplies a summary of probability that the reader can use). Emphasizing the usefulness of standard machinery from applied statistics, this textbook gives an overview of the major applied areas in learning, including coverage of:
* classification using standard machinery (naive Bayes; nearest neighbor; SVM)
* clustering and vector quantization (largely as in PSCS)
* PCA (largely as in PSCS)
* variants of PCA (NIPALS; latent semantic analysis; canonical correlation analysis)
* linear regression (largely as in PSCS)
* generalized linear models, including logistic regression
* model selection with the Lasso and elastic net
* robustness and M-estimators
* Markov chains and HMMs (largely as in PSCS)
* EM in fairly gory detail; long experience teaching this suggests one detailed example is required, which students hate, but once they've been through that, the next one is easy
* simple graphical models (in the variational inference section)
* classification with neural networks, with a particular emphasis on image classification
* autoencoding with neural networks
* structure learning
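As a concrete illustration of the "existing tools and packages" philosophy described above, classification with standard machinery such as naive Bayes can be a few lines of library calls. This sketch is not code from the book; the choice of Python, scikit-learn and the built-in iris dataset are assumptions made purely for illustration.

# Minimal sketch: off-the-shelf naive Bayes classification with scikit-learn.
# Nothing here is prescribed by the book; dataset and library are illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                 # small built-in example dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)          # hold out a test set

clf = GaussianNB().fit(X_train, y_train)          # standard machinery, no hand-written learner
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))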
"This would be an excellent book for undergraduate, graduate and beyond....The style of writing is easy to read and the author does a good job of adding humor in places. The integration of basic programming in R with the data that is collected for any experiment provides a powerful platform for analysis of data.... having the understanding of data analysis that this book offers will really help researchers examine their data and consider its value from multiple perspectives - and this applies to people who have small AND large data sets alike! This book also helps people use a free and basic software system for processing and plotting simple to complex functions." Michelle Pantoya, Texas Tech University Measurements of quantities that vary in a continuous fashion, e.g., the pressure of a gas, cannot be measured exactly and there will always be some uncertainty with these measured values, so it is vital for researchers to be able to quantify this data. Uncertainty Analysis of Experimental Data with R covers methods for evaluation of uncertainties in experimental data, as well as predictions made using these data, with implementation in R. The books discusses both basic and more complex methods including linear regression, nonlinear regression, and kernel smoothing curve fits, as well as Taylor Series, Monte Carlo and Bayesian approaches. Features: 1. Extensive use of modern open source software (R). 2. Many code examples are provided. 3. The uncertainty analyses conform to accepted professional standards (ASME). 4. The book is self-contained and includes all necessary material including chapters on statistics and programming in R. Benjamin D. Shaw is a professor in the Mechanical and Aerospace Engineering Department at the University of California, Davis. His research interests are primarily in experimental and theoretical aspects of combustion. Along with other courses, he has taught undergraduate and graduate courses on engineering experimentation and uncertainty analysis. He has published widely in archival journals and became an ASME Fellow in 2003.
This book focuses on data science. It includes plenty of actual examples of the typical data processing and data presentations required of a professional data scientist. The material will be especially useful to the growing profession of data scientists. As a practitioner, the author brings a practical view to the topic, with a very hands-on presentation that will be particularly useful to other practitioners. The book also concentrates on the current generation of R packages that have added considerable capability to R, including Hadley Wickham's suite of packages, such as tidyr, dplyr, lubridate, stringr, and ggplot2.
Many professional, high-quality surveys collect data on people's behaviour, experiences, lifestyles and attitudes. The data they produce is more accessible than ever before. This book provides students with a comprehensive introduction to using this data, as well as transactional data and big data sources, in their own research projects. Here you will find all you need to know about locating, accessing, preparing and analysing secondary data, along with step-by-step instructions for using IBM SPSS Statistics. You will learn how to: Create a robust research question and design that suits secondary analysis Locate, access and explore data online Understand data documentation Check and 'clean' secondary data Manage and analyse your data to produce meaningful results Replicate analyses of data in published articles and books Using case studies and video animations to illustrate each step of your research, this book provides you with the quantitative analysis skills you'll need to pass your course, complete your research project and compete in the job market. Exercises throughout the book and on the book's companion website give you an opportunity to practice, check your understanding and work hands on with real data as you're learning.
Financial Econometrics Using Stata is an essential reference for graduate students, researchers, and practitioners who use Stata to perform intermediate or advanced methods. After discussing the characteristics of financial time series, the authors provide introductions to ARMA models, univariate GARCH models, multivariate GARCH models, and applications of these models to financial time series. The last two chapters cover risk management and contagion measures. After a rigorous but intuitive overview, the authors illustrate each method by interpreting easily replicable Stata examples.
In computational science, reproducibility requires that researchers make code and data available to others so that the data can be analyzed in a similar manner as in the original publication. Code must be available to be distributed, data must be accessible in a readable format, and a platform must be available for widely distributing the data and code. In addition, both data and code need to be licensed permissively enough so that others can reproduce the work without a substantial legal burden. Implementing Reproducible Research covers many of the elements necessary for conducting and distributing reproducible research. It explains how to accurately reproduce a scientific result. Divided into three parts, the book discusses the tools, practices, and dissemination platforms for ensuring reproducibility in computational science. It describes:
* Computational tools, such as Sweave, knitr, VisTrails, Sumatra, CDE, and the Declaratron system
* Open source practices, good programming practices, trends in open science, and the role of cloud computing in reproducible research
* Software and methodological platforms, including open source software packages, the RunMyCode platform, and open access journals
Each part presents contributions from leaders who have developed software and other products that have advanced the field. Supplementary material is available at www.ImplementingRR.org.
Making statistics, and statistical software, accessible and rewarding: this book provides readers with step-by-step guidance on running a wide variety of statistical analyses in IBM(R) SPSS(R) Statistics, Stata, and other programs. Author David Kremelberg begins his user-friendly text by covering charts and graphs through regression, time-series analysis, and factor analysis. He provides the background of each method, then explains how to run these tests in IBM SPSS and Stata. He then progresses to more advanced kinds of statistics, such as HLM and SEM, describing the tests and explaining how to run them in the appropriate software, including HLM and AMOS. This is an invaluable guide for upper-level undergraduate and graduate students across the social and behavioral sciences who need assistance in understanding the various statistical packages.
Introductory Statistics for Health & Nursing using SPSS is an impressive introductory statistics text ideal for all health science and nursing students. Health and nursing students can be anxious and lacking in confidence when it comes to handling statistics. This book has been developed with this readership in mind. This accessible text eschews long and off-putting statistical formulae in favour of non-daunting practical and SPSS-based examples. What's more, its content will fit ideally with the common course content of stats courses in the field. Introductory Statistics for Health & Nursing using SPSS is also accompanied by a companion website containing data-sets and examples for use by lecturers with their students. The inclusion of real-world data and a host of health-related examples should make this an ideal core text for any introductory statistics course in the field.
SPSS syntax is the command language used by SPSS to carry out all of its commands and functions. In this book, Jacqueline Collier introduces the use of syntax to those who have not used it before, or who are taking their first steps in using syntax. Without requiring any knowledge of programming, the text outlines:
- how to become familiar with the syntax commands;
- how to create and manage the SPSS journal and syntax files;
- how to use them throughout the data entry, management and analysis process.
Collier covers all aspects of data management from data entry through to data analysis, including managing the errors and error messages created by SPSS. Syntax commands are clearly explained and the value of syntax is demonstrated through examples. This book also supports the use of SPSS syntax alongside the usual button- and menu-driven graphical user interface (GUI), using the two methods together in a complementary way. The book is written in such a way as to enable you to pick and choose how much you rely on one method over the other, encouraging you to use them side by side, with a gradual increase in the use of syntax as your knowledge, skills and confidence develop. This book is ideal for all those carrying out quantitative research in the health and social sciences who can benefit from SPSS syntax's capacity to save time, reduce errors and provide a data audit trail.
This engaging book is a concise introduction to the essentials of the MATLAB programming language and is ideal for readers seeking a focused and brief approach to the software. Learning MATLAB contains numerous examples and exercises involving the software's most useful and sophisticated features, along with an overview of the most common scientific computing tasks for which it can be used. The presentation is designed to guide new users through the basics of interacting with and programming in the MATLAB software, while also presenting some of its more important and advanced techniques, including how to solve common problem types in scientific computing. Rather than including exhaustive technical material, the author teaches through readily understood examples and numerous exercises that range from straightforward to very challenging. Readers are encouraged to learn by doing: entering the examples themselves, reading the online help, and trying the exercises.
Accessibly written and easy to use, Applied Statistics Using SPSS is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. Based around the needs of undergraduate students embarking on their own research project, the text's self-help style is designed to boost the skills and confidence of those who will need to use SPSS in the course of doing their research project. The book is pedagogically well developed and contains many screen dumps and exercises, glossary terms and worked examples. Divided into two parts, Applied Statistics Using SPSS covers:
1. A self-study guide for learning how to use SPSS.
2. A reference guide for selecting the appropriate statistical technique and a stepwise do-it-yourself guide for analysing data and interpreting the results.
Readers of the book can also download the SPSS data file that is used for most of the examples throughout the book. Geared explicitly to undergraduate needs, this is an easy-to-follow SPSS book that provides a step-by-step guide to research design and data analysis using SPSS.
SPSS for Windows is the most widely used computer package for analyzing quantitative data. In a clear, readable, non-technical style, this book teaches beginners how to use the program, input and manipulate data, and use descriptive analyses and inferential techniques, including t-tests, analysis of variance, correlation and regression, nonparametric techniques, reliability analysis and factor analysis. The author provides an overview of statistical analysis, and then shows in a simple, step-by-step manner how to set up an SPSS file in order to run an analysis, as well as how to graph and display data. He explains how to use SPSS for all the main statistical approaches you would expect to find in an introductory statistics course. The book is written for users of Versions 6 and 6.1, but will be equally valuable to users of later versions.
Ideal for those already familiar with basic Excel features, this updated Third Edition of Neil J. Salkind's Excel Statistics: A Quick Guide shows readers how to utilize Microsoft (R) Excel's functions and Analysis ToolPak to answer simple and complex questions about data. Part I explores 35 Excel functions, while Part II contains 20 Analysis ToolPak tools. To make it easy to see what each function or tool looks like when applied, at-a-glance two-page spreads describe each function and its use with corresponding screenshots. In addition, actual data files used in the examples are readily available online at an open-access Student Study Site.
A new and refreshingly different approach to presenting the foundations of statistical algorithms, Foundations of Statistical Algorithms: With References to R Packages reviews the historical development of basic algorithms to illuminate the evolution of today's more powerful statistical algorithms. It emphasizes recurring themes in all statistical algorithms, including computation, assessment and verification, iteration, intuition, randomness, repetition and parallelization, and scalability. Unique in scope, the book reviews the upcoming challenge of scaling many of the established techniques to very large data sets and delves into systematic verification by demonstrating how to derive general classes of worst case inputs and emphasizing the importance of testing over a large number of different inputs. Broadly accessible, the book offers examples, exercises, and selected solutions in each chapter as well as access to a supplementary website. After working through the material covered in the book, readers should not only understand current algorithms but also gain a deeper understanding of how algorithms are constructed, how to evaluate new algorithms, which recurring principles are used to tackle some of the tough problems statistical programmers face, and how to take an idea for a new method and turn it into something practically useful.
Since the first edition of this book was published, S-PLUS has evolved markedly with new methods of analysis, new graphical procedures, and a convenient graphical user interface (GUI). Today, S-PLUS is the statistical software of choice for many applied researchers in disciplines ranging from finance to medicine. Combining the command line language and GUI of S-PLUS now makes this book even more suitable for inexperienced users, students, and anyone without the time, patience, or background needed to wade through the many more advanced manuals and texts on the market.
Sage is free, open-source mathematical software built on the Python programming language. Its authors, an international community of hundreds of teachers and researchers, have set themselves the mission of providing a viable alternative to Magma, Maple, Mathematica and Matlab. To do so, Sage draws on many existing free software packages, such as GAP, Maxima, PARI and various scientific libraries for Python, to which it adds thousands of new functions. It is available free of charge and runs on the usual operating systems. For secondary-school students, Sage is a formidable scientific and graphing calculator. It effectively assists undergraduate students with their computations in analysis, linear algebra, and so on. Further along the university curriculum, as well as for researchers and engineers, Sage offers the most recent algorithms in various branches of mathematics. As a result, many universities teach Sage from the first undergraduate years for practical work and projects. This book is the first general-purpose work on Sage in any language. Co-written by teachers and researchers working at every level (IUT, classes preparatoires, bachelor's, master's, doctorate), it emphasizes the mathematics underlying a sound understanding of the software. As such, it is closer to a course in effective mathematics illustrated with Sage examples than to a user manual or a reference guide. The first part is accessible to undergraduate students. The content of the later parts is inspired by the syllabus of the modelling examination of the French agregation in mathematics. This book is distributed under a free Creative Commons license. It can be downloaded free of charge from http://sagebook.gforge.inria.fr/.
In this second edition of An Introduction to Stata Programming, the author introduces concepts by providing the background and importance for the topic, presents common uses and examples, then concludes with larger, more applied examples referred to as "cookbook recipes." This is a great reference for anyone who wants to learn Stata programming. For those learning, the author assumes familiarity with Stata and gradually introduces more advanced programming tools. For the more advanced Stata programmer, the book introduces Stata's Mata programming language and optimization routines.
This book highlights recent advances in natural computing, including biology and its theory, bio-inspired computing, computational aesthetics, computational models and theories, computing with natural media, philosophy of natural computing, and educational technology. It presents extended versions of the best papers selected from the "8th International Workshop on Natural Computing" (IWNC8), a symposium held in Hiroshima, Japan, in 2014. The target audience is not limited to researchers working in natural computing but also includes those active in biological engineering, fine/media art design, aesthetics, and philosophy.
This book offers an introduction to computer programming, numerical analysis, and other mathematical ideas that extend the basic topics learned in calculus. It illustrates how mathematicians and scientists write computer programs, covering the general building blocks of programming languages and a description of how these concepts fit together to allow computers to produce the results they do. Topics explored here include binary arithmetic, algorithms for rendering graphics, the smooth interpolation of discrete data, and the numerical approximation of non-elementary integrals. The book uses an open-source computer algebra system called Maxima. Using Maxima, first-time programmers can perform familiar tasks, such as graphing functions or solving equations, and learn the basic structures of programming before moving on to other popular programming languages. The epilogue provides some simple examples of how this process works in practice. The book will particularly appeal to students who have finished their calculus sequence.
This unique text uses tried and tested methods developed by the authors during a recent Nuffield Foundation project set up to investigate maths needs for GNVQ. Most areas of engineering mathematics are well suited to the spreadsheet approach, and many students find the techniques easier to handle than more conventional computational methods. This text covers spreadsheet procedures for the application of number core skills, and about two thirds of the areas of maths needed for the GNVQ Maths for Engineering (advanced) unit, but does not attempt to cover those problems more sensibly tackled by conventional means. By attempting the questions and assignments within the book, the student compiles a record of competence, suitable for inclusion in the portfolio and assessment. To this end, worked examples have solutions, but answers to all other exercises are to be found in the lecturers' guide, available free to teaching staff only. For the lecturers' guide, please write on college headed paper, enclosing an A4-sized SAE, to: Marketing Department, Edward Arnold, 338 Euston Road, London NW1 3BH, UK. Please note: the guide is free of copyright to purchasers of the textbook.
The Statistical Imagination, a basic social science statistics text with illustrations and exercises for sociology, social work, political science, and criminal justice courses, teaches readers that statistics is not just a mathematical exercise; it is a way of analyzing and understanding the social world. Praised for a writing style that takes the anxiety out of statistics courses, the author explains basic statistical principles through a variety of engaging exercises, each designed to illuminate the unique theme of examining society both creatively and logically. In an effort to make the study of statistics relevant to students of the social sciences, the author encourages readers to interpret the results of calculations in the context of more substantive social issues, while continuing to value precise and accurate research. Ritchey begins by introducing students to the essentials of learning statistics; fractions, proportions, percentages, standard deviation, sampling error and sampling distribution, along with other math hurdles, are clearly explained to fill in any math gaps students may bring to the classroom. Treating statistics as a skill learned best by doing, the author supplies a range of student-friendly questions and exercises to both demystify the calculation process, and to encourage the kind of proportional thinking needed to master the subject. In addition to pencil-and-paper exercises, The Statistical Imagination includes computer-based assignments for use with the free Student Version SPSS 9.0 CD-ROM that accompanies each new copy of the book.
Designed for anyone who needs a comprehensive introduction to the principles of statistical methods and their applications, this text is written in a practical, non-threatening style. Step-by-step worked examples are used to illustrate the use of statistical techniques in solving practical problems, while self-study exercises test students' knowledge. The use of Excel and MINITAB is fully integrated throughout the book to demonstrate the application of computer packages to solve a wide range of statistical problems. Presented alongside manual methods, these computer solutions include detailed instructions and annotated printouts where appropriate. The second edition retains the straightforward writing style and practical illustration of manual and computer methods which made the previous edition successful for a wide range of courses.