The updated guide to the newest graphing calculator from Texas Instruments. The TI-Nspire graphing calculator is popular among high school and college students as a valuable tool for calculus, AP calculus, and college-level algebra courses, and its use is allowed on the major college entrance exams. This book is a nuts-and-bolts guide to working with the TI-Nspire, providing everything you need to get up and running and helping you get the most out of this high-powered math tool. Texas Instruments' TI-Nspire graphing calculator is perfect for high school and college students in advanced algebra and calculus classes, as well as students taking the SAT, PSAT, and ACT exams. This fully updated guide covers all enhancements to the TI-Nspire, including the touchpad and the updated software that can be purchased along with the device, and shows how to get maximum value from this versatile math tool. With updated screenshots and examples, "TI-Nspire For Dummies" provides practical, hands-on instruction to help students make the most of this revolutionary graphing calculator.
The goal of this book is to gather in a single work the most relevant concepts related to modern optimization methods, showing how such theories and methods can be addressed using the open source, multi-platform R tool. Modern optimization methods, also known as metaheuristics, are particularly useful for solving complex problems for which no specialized optimization algorithm has been developed. These methods often yield high-quality solutions with a more reasonable use of computational resources (e.g. memory and processing effort). Examples of popular modern methods discussed in this book are: simulated annealing; tabu search; genetic algorithms; differential evolution; and particle swarm optimization. This book is suitable for undergraduate and graduate students in computer science, information technology, and related areas, as well as data analysts interested in exploring modern optimization methods using R. This new edition integrates the latest R packages through text and code examples. It also discusses new topics, such as: the impact of artificial intelligence and business analytics in modern optimization tasks; the creation of interactive Web applications; usage of parallel computing; and more modern optimization algorithms (e.g., iterated racing, ant colony optimization, grammatical evolution).
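To give a flavor of the metaheuristics the book covers, here is a minimal R sketch (not taken from the book) that applies simulated annealing, via base R's optim() with method "SANN", to the Rastrigin test function:

```r
# Minimal simulated-annealing sketch using base R's optim(), method "SANN"
# (illustrative only; not code from the book).
rastrigin <- function(x) 10 * length(x) + sum(x^2 - 10 * cos(2 * pi * x))

set.seed(123)
res <- optim(par = c(3, 3), fn = rastrigin, method = "SANN",
             control = list(maxit = 20000, temp = 10))
res$par    # candidate solution found by the annealing run
res$value  # objective value (the global minimum is 0 at the origin)
```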
Michael Mitchell's A Visual Guide to Stata Graphics, Fourth Edition provides an essential introduction and reference for Stata graphics. The fourth edition retains the features that made the first three editions so useful: a complete guide to Stata's graph command, exhaustive examples of customized graphs, and visual indexing of features, so you can just look for a picture that matches what you want to do. This edition includes new discussions of color, Unicode characters, export formats, sizing of graph elements, and schemes. The section on colors has been greatly expanded to include over 50 examples that demonstrate how to modify colors, add transparency, and change intensity. In the discussion of text modifications, Mitchell now shows how to include Unicode characters such as Greek letters, symbols, and emojis. New examples have also been added that show how to change the size of graph elements such as text, markers, and line widths using both absolute units (points, inches, and centimeters) and relative units (like large, or *2 for two times the original size). Finally, the look of graphs throughout the book has changed: most graphs are now created using a common updated scheme. The book's visual style makes it easy to find exactly what you need. A color-coded, visual table of contents runs along the edge of every page and shows readers exactly where they are in the book. You can see the color-coded chapter tabs without opening the book, providing quick visual access to each chapter. The heart of each chapter is a series of entries that are typically formatted three to a page. Each entry shows a graph command (with the emphasized portion of the command highlighted in red), the resulting graph, a description of what is being done, and the dataset used. Because every feature, option, and edit is demonstrated with a graph, you can often flip through a section of the book to find exactly the effect you are seeking. The book begins with an introduction to Stata graphs that includes an overview of graph types, schemes, and options and the process of building a graph. Then it turns to detailed discussions of many graph types: scatterplots, regression fit plots, line plots, contour plots, bar graphs, box plots, and many others. Mitchell shows how to create each type of graph and how to use options to control the look of the graph. Because Stata's graph command will let you customize any aspect of the graph, Mitchell spends ample time showing you the most valuable options for obtaining the look you want. If you are in a hurry to discover one special option, you can skim the chapter until you see the effect you want and then glance at the command to see what is highlighted in red. After focusing on specific types of graphs, Mitchell undertakes an in-depth presentation of the options available across almost all graph types. This includes options that add and change the look of titles, notes, and the like; control the number of ticks on axes; control the content and appearance of the numbers and labels on axes; control legends; add and change the look of annotations; graph over subgroups; change the look of markers and their labels; size graphs and their elements; and more. To complete the graphical journey, Mitchell discusses and demonstrates the 12 styles that unite and control the appearance of the myriad graph objects. These styles are angles, colors, clock positions, compass directions, connecting points, line patterns, line widths, margins, marker sizes, orientations, marker symbols, and text sizes.
You won't want to overlook the appendix in this book. There Mitchell first gives a quick overview of the dozens of statistical graph commands that are not strictly the subject of the book. Even so, these commands use the graph command as an engine to draw their graphs; therefore, almost all that Mitchell has discussed applies to them. He also addresses combining graphs, showing you how to create complex and multipart images from previously created graphs. In a crucial section titled "Putting it all together", Mitchell shows us how to do just that. We learn more about overlaying twoway plots, and we learn how to combine data management and graphics to create plots such as bar charts of rates with capped confidence intervals. Mitchell concludes by warning us about mistakes that can be made when typing graph commands and showing how to correct them. The fourth edition of A Visual Guide to Stata Graphics is a complete guide to Stata's graph command and the associated Graph Editor. Whether you want to tame the Stata graph command, quickly find out how to produce a graphical effect, or learn approaches that can be used to construct custom graphs, this is the book to read.
This advanced textbook explores small area estimation techniques, covers the underlying mathematical and statistical theory, and offers hands-on support with their implementation. It presents the theory in a rigorous way and compares and contrasts various statistical methodologies, helping readers understand how to develop new methodologies for small area estimation. It also includes numerous sample applications of small area estimation techniques. The underlying R code is provided in the text and applied to four datasets that mimic data from labor markets and living conditions surveys, where the socioeconomic indicators include the small area estimation of total unemployment, unemployment rates, average annual household incomes and poverty indicators. Given its scope, the book will be useful for master's and PhD students, and for official and other applied statisticians.
Chunyan Li is a course instructor with many years of experience in teaching time series analysis. His book is essential for students and researchers in oceanography and other Earth science subjects who are looking for complete coverage of the theory and practice of time series data analysis using MATLAB. This textbook covers the topic's core theory in depth and provides numerous instructional examples, many drawn directly from the author's own teaching experience, using data files, examples, and exercises. The book explores many concepts, including time; distance on Earth; wind, current, and wave data formats; finding a subset of ship-based data along planned or random transects; error propagation; Taylor series expansion for error estimates; the least squares method; base functions and linear independence of base functions; tidal harmonic analysis; Fourier series and the generalized Fourier transform; filtering techniques; sampling theorems; finite sampling effects; wavelet analysis; and EOF analysis.
This book provides a concise point of reference for the most commonly used regression methods. It begins with linear and nonlinear regression for normally distributed data, logistic regression for binomially distributed data, and Poisson regression and negative-binomial regression for count data. It then progresses to regression models that work with longitudinal and multi-level data structures. The volume is designed to guide the transition from classical to more advanced regression modeling, as well as to contribute to the rapid development of statistics and data science. With data and computing programs available to facilitate readers' learning experience, Statistical Regression Modeling promotes the applications of R in linear, nonlinear, longitudinal and multi-level regression. All included datasets, as well as the associated R programs using the nlme and lme4 packages for multi-level regression, are detailed in Appendix A. This book will be valuable in graduate courses on applied regression, as well as for practitioners and researchers in the fields of data science, statistical analytics, public health, and related fields.
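As an illustration of the multi-level models the book progresses to, the following sketch (my own, using lme4's built-in sleepstudy data rather than the book's datasets) fits a random-intercept-and-slope model:

```r
# Minimal multi-level regression sketch with lme4 (illustrative only):
# reaction time with a fixed effect of Days and random intercepts and
# slopes by Subject, using the package's built-in sleepstudy data.
library(lme4)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
summary(fit)   # fixed effects, random-effect variances, residual error
```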
This book discusses the development of the Rosenbrock-Wanner methods from the origins of the idea to current research, with the stable and efficient numerical solution of differential-algebraic systems of equations still in focus. The reader gets a comprehensive insight into the classical methods as well as into the development and properties of novel W-methods, two-step and exponential Rosenbrock methods. In addition, descriptive applications from the fields of water and hydrogen network simulation and visual computing are presented.
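For readers new to this method family, the following R sketch (an illustration under my own simplifying assumptions, not code from the book) implements the simplest one-stage Rosenbrock step, linearly implicit Euler, and applies it to a stiff scalar test equation:

```r
# One-stage Rosenbrock (linearly implicit Euler) step for y' = f(y),
# with a forward-difference Jacobian (illustrative sketch only).
rosenbrock_step <- function(f, y, h, eps = 1e-7) {
  n  <- length(y)
  fy <- f(y)
  J  <- matrix(0, n, n)
  for (j in seq_len(n)) {              # numerical Jacobian, column by column
    yp <- y; yp[j] <- yp[j] + eps
    J[, j] <- (f(yp) - fy) / eps
  }
  k <- solve(diag(n) - h * J, fy)      # solve (I - hJ) k = f(y)
  y + h * k
}

f <- function(y) -50 * y               # stiff scalar test problem
y <- 1; h <- 0.1
for (i in 1:10) y <- rosenbrock_step(f, y, h)
y  # decays monotonically; explicit Euler with this step size would blow up
```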
R, an open source software, has become the de facto statistical computing environment. It has an excellent collection of data manipulation and graphics capabilities. It is extensible and comes with a large number of packages that allow statistical analysis at all levels, from simple to advanced, and in numerous fields including Medicine, Genetics, Biology, Environmental Sciences, Geology, Social Sciences and much more. The software is maintained and developed by academicians and professionals and as such, is continuously evolving and up to date. "Statistics and Data with R" presents an accessible guide to data manipulation, statistical analysis and graphics using R. Assuming no previous knowledge of statistics or R, the book includes: a comprehensive introduction to the R language; an integrated approach to importing and preparing data for analysis, exploring and analyzing the data, and presenting results; over 300 examples, including detailed explanations of the R scripts used throughout; over 100 moderately large data sets from disciplines ranging from Biology, Ecology and Environmental Science to Medicine, Law, Military and Social Sciences; a parallel discussion of analyses with the normal density, proportions (binomial), counts (Poisson) and bootstrap methods; and two extensive indexes that include references to every R function (and its arguments and packages) used in the book and to every introduced concept. An accompanying Wiki website, http://turtle.gis.umn.edu, includes all the scripts and data used in the book. The website also features a solutions manual, providing answers to all of the exercises presented in the book. Visitors are invited to download/upload data and scripts and share comments, suggestions and questions with other visitors. Students, researchers and practitioners will find this to be both a valuable learning resource in statistics and R and an excellent reference book.
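As a small taste of the bootstrap methods treated alongside the normal, binomial and Poisson analyses, here is a minimal sketch (my own, not one of the book's examples) of a percentile bootstrap confidence interval for a mean:

```r
# Percentile bootstrap confidence interval for a mean (illustrative sketch).
set.seed(1)
x <- rnorm(50, mean = 10, sd = 2)                  # example data
boot_means <- replicate(2000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))              # 95% percentile interval
```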
This book presents the state of the art on numerical semigroups and related subjects, offering different perspectives on research in the field and including results and examples that are very difficult to find in a structured exposition elsewhere. The contents comprise the proceedings of the 2018 INdAM "International Meeting on Numerical Semigroups", held in Cortona, Italy. Talks at the meeting centered not only on traditional types of numerical semigroups, such as Arf or symmetric, and their usual properties, but also on related types of semigroups, such as affine, Puiseux, Weierstrass, and primary, and their applications in other branches of algebra, including semigroup rings, coding theory, star operations, and Hilbert functions. The papers in the book reflect the variety of the talks and derive from research areas including Semigroup Theory, Factorization Theory, Algebraic Geometry, Combinatorics, Commutative Algebra, Coding Theory, and Number Theory. The book is intended for researchers and students who want to learn about recent developments in the theory of numerical semigroups and its connections with other research fields.
Cohesively incorporates statistical theory with R implementation. Since the publication of the popular first edition of this comprehensive textbook, the contributed R packages on CRAN have increased from around 1,000 to over 6,000. Designed for an intermediate undergraduate course, Probability and Statistics with R, Second Edition explores how some of these new packages make analysis easier and more intuitive as well as create more visually pleasing graphs. New to the second edition: improvements to existing examples, problems, concepts, data, and functions; new examples and exercises that use the most modern functions; coverage probability of a confidence interval and model validation; and highlighted R code for calculations and graph creation. The book gets students up to date on practical statistical topics: keeping pace with today's statistical landscape, this textbook expands your students' knowledge of the practice of statistics. It effectively links statistical concepts with R procedures, empowering students to solve a vast array of real statistical problems with R. Web resources: a supplementary website offers solutions to odd exercises and templates for homework assignments, while the data sets and R functions are available on CRAN.
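One of the new topics, the coverage probability of a confidence interval, is easy to explore by simulation; the following sketch (illustrative, not taken from the text) estimates the empirical coverage of a nominal 95% t-interval:

```r
# Estimate the coverage probability of a 95% t-based confidence interval
# for a mean by simulation (illustrative sketch, not from the book).
set.seed(42)
mu <- 5
covered <- replicate(10000, {
  x  <- rnorm(30, mean = mu, sd = 2)        # one simulated sample
  ci <- t.test(x, conf.level = 0.95)$conf.int
  ci[1] <= mu && mu <= ci[2]                # does the interval cover mu?
})
mean(covered)                               # should be close to 0.95
```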
This book presents an introduction to structural equation modeling (SEM) and facilitates the access of students and researchers in various scientific fields to this powerful statistical tool. It offers a didactic initiation to SEM as well as to the open-source software lavaan and the rich and comprehensive technical features it offers. Structural Equation Modeling with lavaan thus helps the reader to gain autonomy in the use of SEM to test path models and dyadic models, perform confirmatory factor analyses and estimate more complex models such as general structural models with latent variables and latent growth models. SEM is approached both from the point of view of its process (i.e. the different stages of its use) and from the point of view of its product (i.e. the results it generates and how to read them).
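To illustrate the kind of model specification involved, here is a minimal confirmatory factor analysis in lavaan using the package's own tutorial data set (a generic sketch, not an example reproduced from the book):

```r
# Minimal lavaan CFA sketch on the built-in HolzingerSwineford1939 data
# (illustrative; not reproduced from the book).
library(lavaan)
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
fit <- cfa(model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```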
This book explores missing data techniques and provides a detailed and easy-to-read introduction to multiple imputation, covering the theoretical aspects of the topic and offering hands-on help with the implementation. It discusses the pros and cons of various techniques and concepts, including multiple imputation quality diagnostics, an important topic for practitioners. It also presents current research and new, practically relevant developments in the field, and demonstrates the use of recent multiple imputation techniques designed for situations where distributional assumptions of the classical multiple imputation solutions are violated. In addition, the book features numerous practical tutorials for widely used R software packages to generate multiple imputations (norm, pan and mice). The provided R code and data sets allow readers to reproduce all the examples and enhance their understanding of the procedures. This book is intended for social and health scientists and other quantitative researchers who analyze incompletely observed data sets, as well as master's and PhD students with a sound basic knowledge of statistics.
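A basic mice workflow of the kind the tutorials walk through looks roughly like this (a sketch using mice's built-in nhanes example data, not one of the book's own data sets):

```r
# Basic multiple-imputation workflow with mice (illustrative sketch).
library(mice)
imp  <- mice(nhanes, m = 5, seed = 1, printFlag = FALSE)  # 5 imputed data sets
fits <- with(imp, lm(bmi ~ age + chl))                    # analyze each completed set
pool(fits)                                                # pool with Rubin's rules
```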
This text examines the goals of data analysis with respect to enhancing knowledge, and identifies data summarization and correlation analysis as the core issues. Data summarization, both quantitative and categorical, is treated within the encoder-decoder paradigm, bringing forward a number of mathematically supported insights into the methods and the relations between them. Two chapters describe methods for categorical summarization (partitioning, divisive clustering and separate cluster finding), and another explains the methods for quantitative summarization, Principal Component Analysis and PageRank. Features: an in-depth presentation of K-means partitioning, including a corresponding Pythagorean decomposition of the data scatter; advice regarding such issues as clustering of categorical and mixed scale data, similarity and network data, interpretation aids, anomalous clusters, the number of clusters, etc.; and thorough attention to data-driven modelling, including a number of mathematically stated relations between statistical and geometrical concepts, among them those between goodness-of-fit criteria for decision trees and data standardization, similarity and consensus clustering, modularity clustering and uniform partitioning. New edition highlights: inclusion of ranking issues such as Google PageRank, linear stratification and tied rankings median, consensus clustering, semi-average clustering, and one-cluster clustering; and a restructuring to make the logic more straightforward and the sections self-contained. Core Data Analysis: Summarization, Correlation and Visualization is aimed at those who are eager to participate in developing the field, as well as at novices and practitioners.
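Because K-means partitioning and its Pythagorean decomposition of the data scatter are central to the book, a minimal base-R illustration (my own sketch, not the author's code) may help fix the idea:

```r
# K-means in base R: the total data scatter splits into within-cluster
# plus between-cluster parts (illustrative sketch only).
x  <- scale(iris[, 1:4])                  # standardized quantitative features
km <- kmeans(x, centers = 3, nstart = 25)
km$totss                                  # total data scatter
km$tot.withinss + km$betweenss            # equals totss (the Pythagorean split)
km$betweenss / km$totss                   # share of scatter explained by the partition
```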
As explored in this open access book, higher education in STEM fields is influenced by many factors, including education research, government and school policies, financial considerations, technology limitations, and acceptance of innovations by faculty and students. In 2018, Drs. Ryoo and Winkelmann explored the opportunities, challenges, and future research initiatives of innovative learning environments (ILEs) in higher education STEM disciplines in their pioneering project: eXploring the Future of Innovative Learning Environments (X-FILEs). Workshop participants evaluated four main ILE categories: personalized and adaptive learning, multimodal learning formats, cross/extended reality (XR), and artificial intelligence (AI) and machine learning (ML). This open access book gathers the perspectives expressed during the X-FILEs workshop and its follow-up activities. It is designed to help inform education policy makers, researchers, developers, and practitioners about the adoption and implementation of ILEs in higher education.
This book provides a general introduction to Sequential Monte Carlo (SMC) methods, also known as particle filters. These methods have become a staple for the sequential analysis of data in such diverse fields as signal processing, epidemiology, machine learning, population ecology, quantitative finance, and robotics. The coverage is comprehensive, ranging from the underlying theory to computational implementation, methodology, and diverse applications in various areas of science. This is achieved by describing SMC algorithms as particular cases of a general framework, which involves concepts such as Feynman-Kac distributions, and tools such as importance sampling and resampling. This general framework is used consistently throughout the book. Extensive coverage is provided on sequential learning (filtering, smoothing) of state-space (hidden Markov) models, as this remains an important application of SMC methods. More recent applications, such as parameter estimation of these models (through e.g. particle Markov chain Monte Carlo techniques) and the simulation of challenging probability distributions (in e.g. Bayesian inference or rare-event problems), are also discussed. The book may be used either as a graduate text on Sequential Monte Carlo methods and state-space modeling, or as a general reference work on the area. Each chapter includes a set of exercises for self-study, a comprehensive bibliography, and a "Python corner," which discusses the practical implementation of the methods covered. In addition, the book comes with an open source Python library, which implements all the algorithms described in the book, and contains all the programs that were used to perform the numerical experiments.
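The book's code is in Python, via its accompanying library; purely to convey the core idea, here is a hedged R sketch of a bootstrap particle filter with multinomial resampling for a simple linear Gaussian state-space model (my own illustration, unrelated to the book's library):

```r
# Bootstrap particle filter sketch for x_t = 0.9 x_{t-1} + N(0,1),
# y_t = x_t + N(0,1) (illustrative only).
set.seed(7)
n_t <- 100; n_p <- 1000

# Simulate data from the model
x <- numeric(n_t); y <- numeric(n_t)
x_prev <- rnorm(1)                                # initial state x_0 ~ N(0, 1)
for (t in 1:n_t) {
  x[t] <- 0.9 * x_prev + rnorm(1)
  y[t] <- x[t] + rnorm(1)
  x_prev <- x[t]
}

# Filter: propagate, weight, estimate, resample
particles <- rnorm(n_p)                           # particle cloud for x_0
filt_mean <- numeric(n_t)
for (t in 1:n_t) {
  particles <- 0.9 * particles + rnorm(n_p)       # propagate through the dynamics
  w <- dnorm(y[t], mean = particles, sd = 1)      # weight by the observation density
  w <- w / sum(w)
  filt_mean[t] <- sum(w * particles)              # filtering mean estimate
  particles <- sample(particles, n_p, replace = TRUE, prob = w)  # resample
}
head(cbind(truth = x, estimate = filt_mean))
```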
Recent data shows that 87% of Artificial Intelligence/Big Data projects don't make it into production (VB Staff, 2019), meaning that most projects are never deployed. This book addresses five common pitfalls that prevent projects from reaching deployment and provides tools and methods to avoid those pitfalls. Along the way, stories from actual experience in building and deploying data science projects are shared to illustrate the methods and tools. While the book is primarily for data science practitioners, information for managers of data science practitioners is included in the Tips for Managers sections.
The second edition of a bestselling textbook, Using R for Introductory Statistics guides students through the basics of R, helping them overcome the sometimes steep learning curve. The author does this by breaking the material down into small, task-oriented steps. The second edition maintains the features that made the first edition so popular, while updating data and examples and reflecting changes to R in line with the current version. New in the second edition: increased emphasis on more idiomatic R provides a grounding in the functionality of base R; discussions of the use of RStudio help new R users avoid as many pitfalls as possible; use of the knitr package makes code easier to read and therefore easier to reason about; additional information on computer-intensive approaches motivates the traditional approach; and updated examples and data make the information current and topical. The book has an accompanying package, UsingR, available from CRAN, R's repository of user-contributed packages. The package contains the data sets mentioned in the text (data(package="UsingR")), answers to selected problems (answers()), a few demonstrations (demo()), the errata (errata()), and sample code from the text. The topics of this text line up closely with the traditional teaching progression; however, the book also highlights computer-intensive approaches to motivate the more traditional approach. The author emphasizes realistic data and examples and relies on visualization techniques to gather insight. Statistics and R are introduced seamlessly, giving students the tools they need to use R and the information they need to navigate the sometimes complex world of statistical computing.
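Getting started with the accompanying package looks roughly like this (a sketch based on the helper calls listed above; exact behavior depends on the installed package version):

```r
# Getting started with the accompanying UsingR package, based on the
# helpers described above (availability may vary by version).
install.packages("UsingR")      # from CRAN
library(UsingR)
data(package = "UsingR")        # list the data sets shipped with the package
# answers()                     # answers to selected problems (per the description)
# errata()                      # the errata (per the description)
```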
This book presents an introduction to linear univariate and multivariate time series analysis, providing brief theoretical insights into each topic and illustrating the theory with software examples from the beginning. As such, it quickly introduces readers to the peculiarities of each subject from both the theoretical and practical points of view. It also includes numerous examples and real-world applications that demonstrate how to handle different types of time series data. The associated software package, SSMMATLAB, is written in MATLAB and also runs on the free OCTAVE platform. The book focuses on linear time series models using a state space approach, with the Kalman filter and smoother as the main tools for model estimation, prediction and signal extraction. A chapter on state space models describes these tools and provides examples of their use with general state space models. Other topics discussed in the book include ARIMA, transfer function and structural models, together with signal extraction using the canonical decomposition, in the univariate case, and VAR, VARMA, cointegrated VARMA, VARX, VARMAX, and multivariate structural models in the multivariate case. It also addresses spectral analysis, the use of fixed filters in a model-based approach, and automatic model identification procedures for ARIMA and transfer function models in the presence of outliers, interventions, complex seasonal patterns and other effects like Easter, trading day, etc. This book is intended for both students and researchers in various fields dealing with time series. The software provides numerous automatic procedures to handle common practical situations, but at the same time, readers with programming skills can write their own programs to deal with specific problems. Although the theoretical introduction to each topic is kept to a minimum, readers can consult the companion book 'Multivariate Time Series With Linear State Space Structure', by the same author, if they require more details.
Testing for economic convergence across countries has been a central issue in the literature of economic growth and development. This book introduces a modern framework to study the cross-country convergence dynamics in labor productivity and its proximate sources: capital accumulation and aggregate efficiency. In particular, recent convergence dynamics of developed as well as developing countries are evaluated through the lens of a non-linear dynamic factor model and a clustering algorithm for panel data. This framework allows us to examine key economic phenomena such as technological heterogeneity and multiple equilibria. In this context, the book provides a succinct review of the recent club convergence literature, a comparative view of developed and developing countries, and a tutorial on how to implement the club convergence framework in the statistical software Stata.
This Springer brief addresses the challenges encountered in the study of the optimization of time-nonhomogeneous Markov chains. It develops new insights and new methodologies for systems in which concepts such as stationarity, ergodicity, periodicity and connectivity do not apply. This brief introduces the novel concept of confluencity and applies a relative optimization approach. It develops a comprehensive theory for optimization of the long-run average of time-nonhomogeneous Markov chains. The book shows that confluencity is the most fundamental concept in optimization, and that relative optimization is more suitable for treating the systems under consideration than standard ideas of dynamic programming. Using confluencity and relative optimization, the author classifies states as confluent or branching and shows how the under-selectivity issue of the long-run average can be easily addressed, multi-class optimization implemented, and Nth biases and Blackwell optimality conditions derived. These results are presented in a book for the first time and so may enhance the understanding of optimization and motivate new research ideas in the area.
Computational techniques based on simulation have now become an essential part of the statistician's toolbox. It is thus crucial to provide statisticians with a practical understanding of those methods, and there is no better way to develop intuition and skills for simulation than to use simulation to solve statistical problems. Introducing Monte Carlo Methods with R covers the main tools used in statistical simulation from a programmer's point of view, explaining the R implementation of each simulation technique and providing the output for better understanding and comparison. While this book constitutes a comprehensive treatment of simulation methods, the theoretical justification of those methods has been considerably reduced, compared with Robert and Casella (2004). Similarly, the more exploratory and less stable solutions are not covered here. This book does not require a preliminary exposure to the R programming language or to Monte Carlo methods, nor an advanced mathematical background. While many examples are set within a Bayesian framework, advanced expertise in Bayesian statistics is not required. The book covers basic random generation algorithms, Monte Carlo techniques for integration and optimization, convergence diagnoses, Markov chain Monte Carlo methods, including Metropolis-Hastings and Gibbs algorithms, and adaptive algorithms. All chapters include exercises and all R programs are available as an R package called mcsm. The book appeals to anyone with a practical interest in simulation methods but no previous exposure. It is meant to be useful for students and practitioners in areas such as statistics, signal processing, communications engineering, control theory, econometrics, finance and more. The programming parts are introduced progressively to be accessible to any reader.
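As a taste of the basic techniques covered, here is a minimal sketch (my own, not from the mcsm package) of plain Monte Carlo integration with a rough standard error:

```r
# Plain Monte Carlo integration sketch (not from the mcsm package):
# estimate E[h(X)] for X ~ N(0, 1) and h(x) = x^2 (true value is 1).
set.seed(1)
n <- 1e5
x <- rnorm(n)              # draws from the standard normal
h <- x^2
mean(h)                    # Monte Carlo estimate
sd(h) / sqrt(n)            # Monte Carlo standard error
```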
The objective of this text is to introduce RStudio to practitioners and students and enable them to use R in their everyday work. It is not a statistical textbook; its purpose is to transmit the joy of analyzing data with RStudio. Practitioners and students learn how RStudio can be installed and used; they learn to import data, write scripts and save working results. Furthermore, they learn to employ descriptive statistics and create graphics with RStudio. Additionally, it is shown how RStudio can be used to test hypotheses, run an analysis of variance and fit regressions. To deepen the learned content, tasks are included, with the solutions provided at the end of the textbook. This textbook has been recommended and developed for university courses in Germany, Austria and Switzerland.
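The flavor of that workflow, from descriptive statistics through a first hypothesis test, is roughly the following (a generic sketch with a built-in data set, not an exercise from the book):

```r
# Descriptive statistics, a graphic, and a simple hypothesis test
# on the built-in mtcars data (generic sketch, not from the book).
summary(mtcars$mpg)                 # descriptive statistics
hist(mtcars$mpg, main = "Fuel economy", xlab = "Miles per gallon")
t.test(mpg ~ am, data = mtcars)     # compare automatic vs. manual transmissions
```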
Now in its second edition, Text Analysis with R provides a practical introduction to computational text analysis using the open source programming language R. R is an extremely popular programming language, used throughout the sciences; due to its accessibility, R is now used increasingly in other research areas. In this volume, readers immediately begin working with text, and each chapter examines a new technique or process, allowing readers to obtain a broad exposure to core R procedures and a fundamental understanding of the possibilities of computational text analysis at both the micro and the macro scale. Each chapter builds on its predecessor as readers move from small scale "microanalysis" of single texts to large scale "macroanalysis" of text corpora, and each concludes with a set of practice exercises that reinforce and expand upon the chapter lessons. The book's focus is on making the technical palatable, useful, and immediately gratifying. Text Analysis with R is written with students and scholars of literature in mind but will be applicable to other humanists and social scientists wishing to extend their methodological toolkit to include quantitative and computational approaches to the study of text. Computation provides access to information in text that readers simply cannot gather using traditional qualitative methods of close reading and human synthesis. This new edition features two new chapters: one that introduces dplyr and tidyr in the context of parsing and analyzing dramatic texts to extract speaker and receiver data, and one on sentiment analysis using the syuzhet package. It is also filled with updated material in every chapter to integrate new developments in the field, current practices in R style, and the use of more efficient algorithms.
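The new sentiment-analysis chapter centers on the syuzhet package; a minimal call of the kind it covers looks like this (a hedged sketch, not an excerpt from the book):

```r
# Sentence-level sentiment scoring with syuzhet (illustrative sketch).
library(syuzhet)
text  <- "I loved the opening chapter. The ending, however, was a disappointment."
sents <- get_sentences(text)                # split the text into sentences
get_sentiment(sents, method = "syuzhet")    # one sentiment score per sentence
```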
Program for data analysis using R and learn practical skills to make your work more efficient. This revised book explores how to automate running code and the creation of reports to share your results, as well as writing functions and packages. It includes key R 4 features such as a new color palette for charts, an enhanced reference counting system, and normalization of matrix and array types, where matrix objects now formally inherit from the array class, eliminating inconsistencies. Advanced R 4 Data Programming and the Cloud is not designed to teach advanced R programming nor to teach the theory behind statistical procedures. Rather, it is designed to be a practical guide moving beyond merely using R; it shows you how to program in R to automate tasks. This book will teach you how to manipulate data in modern R structures and includes connecting R to databases such as PostgreSQL, cloud services such as Amazon Web Services (AWS), and digital dashboards such as Shiny. Each chapter also includes a detailed bibliography with references to research articles and other resources that cover relevant conceptual and theoretical topics. What you will learn: write and document R functions using R 4; make an R package and share it via GitHub or privately; add tests to R code to ensure it works as intended; use R to talk directly to databases and do complex data management; run R in the Amazon cloud; deploy a Shiny digital dashboard; and generate presentation-ready tables and reports using R. Who this book is for: working professionals, researchers, and students who are familiar with R and basic statistical techniques such as linear regression and who want to learn how to take their R coding and programming to the next level.
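To show what deploying a Shiny digital dashboard means in practice, here is a minimal self-contained Shiny app (a generic sketch, not an example from the book):

```r
# Minimal Shiny dashboard sketch (generic; not an example from the book).
library(shiny)

ui <- fluidPage(
  titlePanel("Fuel economy"),
  sliderInput("bins", "Number of bins:", min = 5, max = 30, value = 10),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot({
    hist(mtcars$mpg, breaks = input$bins,
         main = "Miles per gallon", xlab = "mpg")
  })
}

shinyApp(ui = ui, server = server)   # run the app locally
```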
S-Plus is a first-rate graphical environment, used by thousands worldwide to perform basic, intermediate and advanced statistical analysis. It is remarkably powerful, yet relatively simple to use, once you have the basics at your fingertips. Statistical Computing: An Introduction to Data Analysis using S-Plus provides a pragmatic introduction to analysing data using S-Plus, whilst covering a huge breadth of topics, and assuming minimal statistical knowledge.