Mathematical & statistical software
This book offers a systematic and rigorous treatment of continuous-time Markov decision processes, covering both theory and possible applications to queueing systems, epidemiology, finance, and other fields. Unlike most books on the subject, it pays close attention to problems with functional constraints and the realizability of strategies. Three major methods of investigation are presented, based on dynamic programming, linear programming, and reduction to discrete-time problems. Although the main focus is on models with total (discounted or undiscounted) cost criteria, models with average cost criteria and with impulsive controls are also discussed in depth. The book is self-contained. A separate chapter is devoted to Markov pure jump processes and the appendices collect the requisite background on real analysis and applied probability. All the statements in the main text are proved in detail. Researchers and graduate students in applied probability, operational research, statistics and engineering will find this monograph interesting, useful and valuable.
Signals and Systems: A Primer with MATLAB (R) provides clear, interesting, and easy-to-understand coverage of continuous-time and discrete-time signals and systems. Each chapter opens with a historical profile or career talk, followed by an introduction that states the chapter objectives and links the chapter to the previous ones. All principles are presented in a lucid, logical, step-by-step approach. As much as possible, the authors avoid wordiness and detail overload that could hide concepts and impede understanding. In recognition of the requirements by the Accreditation Board for Engineering and Technology (ABET) on integrating computer tools, the use of MATLAB (R) is encouraged in a student-friendly manner. MATLAB is introduced in Appendix B and applied gradually throughout the book. Each illustrative example is immediately followed by a practice problem along with its answer. Students can follow the example step by step to solve the practice problem without flipping pages or looking at the end of the book for answers. These practice problems test students' comprehension and reinforce key concepts before moving on to the next section. Toward the end of each chapter, the authors discuss some application aspects of the concepts covered in the chapter. The material covered in the chapter is applied to at least one or two practical problems or devices. This helps students see how the concepts are applied to real-life situations. In addition, thoroughly worked examples are given liberally at the end of every section. These examples give students a solid grasp of the solutions as well as the confidence to solve similar problems themselves. Some of the problems are solved in two or three ways to facilitate a deeper understanding and comparison of different approaches. Ten review questions in the form of multiple-choice objective items are provided at the end of each chapter with answers. The review questions are intended to cover the "little tricks" that the examples and end-of-chapter problems may not cover. They serve as a self-test device and help students determine chapter mastery. Each chapter also ends with a summary of key points and formulas. Designed for a three-hour semester course on signals and systems, Signals and Systems: A Primer with MATLAB (R) is intended as a textbook for junior-level undergraduate students in electrical and computer engineering. The prerequisites for a course based on this book are knowledge of standard mathematics (including calculus and differential equations) and electric circuit analysis.
This book introduces readers to various signal processing models that have been used in analyzing periodic data, and discusses the statistical and computational methods involved. Signal processing can broadly be considered to be the recovery of information from physical observations. The received signals are usually disturbed by thermal, electrical, atmospheric or intentional interferences, and due to their random nature, statistical techniques play an important role in their analysis. Statistics is also used in the formulation of appropriate models to describe the behavior of systems, the development of appropriate techniques for estimation of model parameters and the assessment of the model performances. Analyzing different real-world data sets to illustrate how different models can be used in practice, and highlighting open problems for future research, the book is a valuable resource for senior undergraduate and graduate students specializing in mathematics or statistics.
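A canonical example of the periodic-data models this literature builds on (written here in standard notation as my own addition, not a quotation from the book) is the multiple sinusoidal model:

```latex
y(t) \;=\; \sum_{k=1}^{p} \bigl[ A_k \cos(\omega_k t) + B_k \sin(\omega_k t) \bigr] \;+\; e(t)
```

where the amplitudes A_k, B_k and frequencies omega_k are unknown and e(t) is random noise; estimating these parameters from noisy observations is the central statistical problem.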
A Tour of Data Science: Learn R and Python in Parallel covers the fundamentals of data science, including programming, statistics, optimization, and machine learning, in a single short book. It does not cover everything, but rather teaches the key concepts and topics in data science. It also covers two of the most popular programming languages used in data science, R and Python, in one source. Key features: * Allows you to learn R and Python in parallel * Covers statistics, programming, optimization and predictive modelling, and the popular data manipulation tools data.table and pandas * Provides a concise and accessible presentation * Includes machine learning algorithms implemented from scratch: linear regression, lasso, ridge, logistic regression, gradient boosting trees, etc. * Appeals to data scientists, statisticians, quantitative analysts, and others who want to learn programming with R and Python from a data science perspective.
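As a flavour of the "implemented from scratch" approach mentioned above, here is a minimal sketch of ordinary least-squares linear regression in Python; the function name and the simulated data are my own illustration, not taken from the book.

```python
import numpy as np

def fit_linear_regression(X, y):
    """Fit y on X by ordinary least squares using a numerically stable solver.

    X is an (n, p) feature matrix, y an (n,) response vector; the returned
    coefficient vector has the intercept as its first entry.
    """
    X1 = np.column_stack([np.ones(len(X)), X])     # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # minimizes ||X1 @ beta - y||^2
    return beta

# Small usage example on simulated data (true coefficients: 1.0, 2.0, -0.5).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=100)
print(fit_linear_regression(X, y))
```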
This book is a result of a workshop, the 8th in the successful TopoInVis workshop series, held in 2019 in Nyköping, Sweden. The workshop regularly gathers some of the world's leading experts in this field and thereby provides a forum for discussions on the latest advances, with a focus on finding practical solutions to open problems in topological data analysis for visualization. The contributions provide introductory and novel research articles, including new concepts for the analysis of multivariate and time-dependent data, robust computational approaches for the extraction and approximation of topological structures with theoretical guarantees, and applications of topological scalar and vector field analysis for visualization. The applications span a wide range of scientific areas, comprising climate science, material sciences, fluid dynamics, and astronomy. In addition, community efforts with respect to joint software development are reported and discussed.
A comprehensive guide to automated statistical data cleaning. The production of clean data is a complex and time-consuming process that requires both technical know-how and statistical expertise. Statistical Data Cleaning brings together a wide range of techniques for cleaning textual, numeric or categorical data. This book examines technical data cleaning methods relating to data representation and data structure. A prominent role is given to statistical data validation, data cleaning based on predefined restrictions, and data cleaning strategy. Key features: * Focuses on the automation of data cleaning methods, including both theory and applications written in R * Enables the reader to design data cleaning processes for either one-off analytical purposes or for setting up production systems that clean data on a regular basis * Explores statistical techniques for solving issues such as incompleteness, contradictions and outliers, integration of data cleaning components and quality monitoring * Supported by an accompanying website featuring data and R code. This book enables data scientists and statistical analysts working with data to deepen their understanding of data cleaning as well as to upgrade their practical data cleaning skills. It can also be used as material for a course in data cleaning and analysis.
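To make the idea of "data cleaning based on predefined restrictions" concrete, here is a small rule-based validation sketch in Python with pandas; the book's own tooling is R, and the column names and rules below are purely illustrative assumptions.

```python
import pandas as pd

# Hypothetical records; the column names and the edit rules are illustrative only.
df = pd.DataFrame({
    "age":    [34, -2, 51, 27],
    "income": [42000, 38000, None, 15000],
    "status": ["employed", "employed", "retired", "student"],
})

# Predefined restrictions, each expressed as a boolean check per record.
rules = {
    "age_nonnegative":      df["age"] >= 0,
    "income_present":       df["income"].notna(),
    "student_income_bound": ~((df["status"] == "student") & (df["income"] > 20000)),
}

checks = pd.DataFrame(rules)   # rule-by-record truth table (True = rule satisfied)
print(checks)
print(checks.all(axis=1))      # a record is clean only if it satisfies every rule
```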
This book presents a study of the COVID-19 pandemic using computational social scientific analysis that draws from, and employs, statistics and simulations. Combining approaches in crisis management, risk assessment and mathematical modelling, the work also draws from the philosophy of sacrifice and futurology. It makes an original contribution to the important issue of the stability of society by highlighting two significant factors: the COVID-19 crisis as a catalyst for change and the rise of AI and Big Data in managing society. It also emphasizes the nature and importance of sacrifices and the role of politics in the distribution of sacrifices. The book considers the treatment of AI and Big Data and their use to both "good" and "bad" ends, exposing the inevitability of these tools being used. Relevant to both policymakers and social scientists interested in the influence of AI and Big Data on the structure of society, the book re-evaluates the ways we think of lifestyles, economic systems and the balance of power in tandem with digital transformation.
Statistics is made simple with this award-winning guide to using R and applied statistical methods. With a clear step-by-step approach explained using real world examples, learn the practical skills you need to use statistical methods in your research from an expert with over 30 years of teaching experience. With a wealth of hands-on exercises and online resources created by the author, practice your skills using the data sets and R scripts from the book with detailed screencasts that accompany each script. This book is ideal for anyone looking to: * Complete an introductory course in statistics * Prepare for more advanced statistical courses * Gain the transferable analytical skills needed to interpret research from across the social sciences * Learn the technical skills needed to present data visually * Acquire a basic competence in the use of R and RStudio. This edition also includes a gentle introduction to Bayesian methods integrated throughout. The author has created a wide range of online resources, including: over 90 R scripts, 36 datasets, 37 screen casts, complete solutions for all exercises, and 130 multiple-choice questions to test your knowledge.
Bayes Factors for Forensic Decision Analyses with R provides a self-contained introduction to computational Bayesian statistics using R. With its primary focus on Bayes factors supported by data sets, this book features an operational perspective, practical relevance, and applicability, keeping theoretical and philosophical justifications limited. It offers a balanced approach to three naturally interrelated topics: * Probabilistic Inference - Relies on the core concept of Bayesian inferential statistics to help practicing forensic scientists in the logical and balanced evaluation of the weight of evidence. * Decision Making - Features how Bayes factors are interpreted in practical applications to help address questions of decision analysis involving the use of forensic science in the law. * Operational Relevance - Combines inference and decision, backed up with practical examples and complete sample code in R, including sensitivity analyses and discussion on how to interpret results in context. Over the past decades, probabilistic methods have established a firm position as a reference approach for the management of uncertainty in virtually all areas of science, including forensic science, with Bayes' theorem providing the fundamental logical tenet for assessing how new information (scientific evidence) ought to be weighed. Central to this approach is the Bayes factor, which clarifies the evidential meaning of new information by providing a measure of the change in the odds in favor of a proposition of interest when going from the prior to the posterior distribution. Bayes factors should guide the scientist's thinking about the value of scientific evidence and form the basis of logical and balanced reporting practices, thus representing essential foundations for rational decision making under uncertainty. This book would be relevant to students, practitioners, and applied statisticians interested in inference and decision analyses in the critical field of forensic science. It could be used to support practical courses on Bayesian statistics and decision theory at both undergraduate and graduate levels, and will be of equal interest to forensic scientists and practitioners of Bayesian statistics for driving their evaluations and the use of R for their purposes. This book is Open Access.
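The description of the Bayes factor as a change in odds corresponds to the standard identity below (standard notation, not quoted from the book), where E is the evidence and H1, H2 are the competing propositions:

```latex
\mathrm{BF} \;=\; \frac{\Pr(E \mid H_1)}{\Pr(E \mid H_2)}
            \;=\; \frac{\Pr(H_1 \mid E) \,/\, \Pr(H_2 \mid E)}{\Pr(H_1) \,/\, \Pr(H_2)}
```

That is, multiplying the prior odds by the Bayes factor yields the posterior odds.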
Understanding Regression Analysis unifies diverse regression applications, including the classical model, ANOVA models, generalized models (Poisson, negative binomial, logistic, and survival), neural networks, and decision trees, under a common umbrella, namely the conditional distribution model. It explains why the conditional distribution model is the correct model, and it also explains (proves) why the assumptions of the classical regression model are wrong. Unlike other regression books, this one from the outset takes a realistic approach that all models are just approximations. Hence, the emphasis is to model Nature's processes realistically, rather than to assume (incorrectly) that Nature works in particular, constrained ways. Key features of the book include: * Numerous worked examples using the R software * Key points and self-study questions displayed "just-in-time" within chapters * Simple mathematical explanations ("baby proofs") of key concepts * Clear explanations and applications of statistical significance (p-values), incorporating the American Statistical Association guidelines * Use of "data-generating process" terminology rather than "population" * Random-X framework assumed throughout (the fixed-X case is presented as a special case of the random-X case) * Clear explanations of probabilistic modelling, including likelihood-based methods * Use of simulations throughout to explain concepts and to perform data analyses. This book has a strong orientation towards science in general, as well as chapter-review and self-study questions, so it can be used as a textbook for research-oriented students in the social, biological and medical, and physical and engineering sciences. Its mathematical emphasis also makes it ideal as a text for mathematics and statistics courses. With its numerous worked examples, it is also ideally suited to serve as a reference book for all scientists.
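The unifying "conditional distribution model" can be summarized compactly (in generic notation of my choosing, not the book's):

```latex
Y \mid X = x \;\sim\; p(y \mid x, \theta)
```

Classical linear regression is the special case where p(y | x, theta) is a normal density with mean x'beta and variance sigma^2, while logistic, Poisson, negative binomial, survival, tree and neural-network models correspond to other choices of the conditional distribution or of how its parameters depend on x.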
Now in its second edition, this introductory statistics textbook conveys the essential concepts and tools needed to develop and nurture statistical thinking. It presents descriptive, inductive and explorative statistical methods and guides the reader through the process of quantitative data analysis. This revised and extended edition features new chapters on logistic regression, simple random sampling, including bootstrapping, and causal inference. The text is primarily intended for undergraduate students in disciplines such as business administration, the social sciences, medicine, politics, and macroeconomics. It features a wealth of examples, exercises and solutions with computer code in the statistical programming language R, as well as supplementary material that will enable the reader to quickly adapt the methods to their own applications.
This book gathers a selection of invited and contributed lectures from the European Conference on Numerical Mathematics and Advanced Applications (ENUMATH) held in Lausanne, Switzerland, August 26-30, 2013. It provides an overview of recent developments in numerical analysis, computational mathematics and applications from leading experts in the field. New results on finite element methods, multiscale methods, numerical linear algebra and discretization techniques for fluid mechanics and optics are presented. As such, the book offers a valuable resource for a wide range of readers looking for a state-of-the-art overview of advanced techniques, algorithms and results in numerical mathematics and scientific computing.
This book covers all the topics found in introductory descriptive statistics courses, including simple linear regression and time series analysis, the fundamentals of inferential statistics (probability theory, random sampling and estimation theory), and inferential statistics itself (confidence intervals, testing). Each chapter starts with the necessary theoretical background, which is followed by a variety of examples. The core examples are based on the content of the respective chapter, while the advanced examples, designed to deepen students' knowledge, also draw on information and material from previous chapters. The enhanced online version helps students grasp the complexity and the practical relevance of statistical analysis through interactive examples and is suitable for undergraduate and graduate students taking their first statistics courses, as well as for undergraduate students in non-mathematical fields, e.g. economics, the social sciences etc.
This advanced textbook explores small area estimation techniques, covers the underlying mathematical and statistical theory, and offers hands-on support with their implementation. It presents the theory in a rigorous way and compares and contrasts various statistical methodologies, helping readers understand how to develop new methodologies for small area estimation. It also includes numerous sample applications of small area estimation techniques. The underlying R code is provided in the text and applied to four datasets that mimic data from labor markets and living conditions surveys, where the socioeconomic indicators include the small area estimation of total unemployment, unemployment rates, average annual household incomes and poverty indicators. Given its scope, the book will be useful for master's and PhD students, and for official and other applied statisticians.
Straightforward, clear, and applied, this book will give you the theoretical and practical basis you need to apply data analysis techniques to real data. Combining key statistical concepts with detailed technical advice, it addresses common themes and problems presented by real research, and shows you how to adjust your techniques and apply your statistical knowledge to a range of datasets. It also embeds code and software output throughout and is supported by online resources to enable practice and safe experimentation. The book includes: * Original case studies and data sets * Practical exercises and lists of commands for each chapter * Downloadable Stata programmes created to work alongside chapters * A wide range of detailed applications using Stata * Step-by-step guidance on writing the relevant code. This is the perfect text for anyone doing statistical research in the social sciences who is getting started with Stata for data analysis.
This book highlights recent advances in natural computing, including biology and its theory, bio-inspired computing, computational aesthetics, computational models and theories, computing with natural media, philosophy of natural computing and educational technology. It presents extended versions of the best papers selected from the symposium "7th International Workshop on Natural Computing" (IWNC7), held in Tokyo, Japan, in 2013. The target audience is not limited to researchers working in natural computing but also those active in biological engineering, fine/media art design, aesthetics and philosophy.
This book explores missing data techniques and provides a detailed and easy-to-read introduction to multiple imputation, covering the theoretical aspects of the topic and offering hands-on help with the implementation. It discusses the pros and cons of various techniques and concepts, including multiple imputation quality diagnostics, an important topic for practitioners. It also presents current research and new, practically relevant developments in the field, and demonstrates the use of recent multiple imputation techniques designed for situations where distributional assumptions of the classical multiple imputation solutions are violated. In addition, the book features numerous practical tutorials for widely used R software packages to generate multiple imputations (norm, pan and mice). The provided R code and data sets allow readers to reproduce all the examples and enhance their understanding of the procedures. This book is intended for social and health scientists and other quantitative researchers who analyze incompletely observed data sets, as well as master's and PhD students with a sound basic knowledge of statistics.
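The multiple-imputation workflow the book implements with the R packages it names (norm, pan and mice) follows the generic impute-analyze-pool pattern. The Python sketch below illustrates that pattern with scikit-learn's IterativeImputer, purely as an analogue of my own and not the book's code.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

# Toy data with roughly 20% of the predictor values missing at random.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan

# Impute m times, analyze each completed data set, then pool the results.
m = 5
coef_draws = []
for seed in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    X_completed = imputer.fit_transform(X_missing)
    coef_draws.append(LinearRegression().fit(X_completed, y).coef_)

# Pooled point estimate; the between/within variance combination of Rubin's
# rules is omitted here for brevity.
print(np.mean(coef_draws, axis=0))
```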
Partial least squares structural equation modeling (PLS-SEM) has become a standard approach for analyzing complex inter-relationships between observed and latent variables. Researchers appreciate the many advantages of PLS-SEM such as the possibility to estimate very complex models and the method's flexibility in terms of data requirements and measurement specification. This practical open access guide provides a step-by-step treatment of the major choices in analyzing PLS path models using R, a free software environment for statistical computing, which runs on Windows, macOS, and UNIX computer platforms. Adopting the R software's SEMinR package, which brings a friendly syntax to creating and estimating structural equation models, each chapter offers a concise overview of relevant topics and metrics, followed by an in-depth description of a case study. Simple instructions give readers the "how-tos" of using SEMinR to obtain solutions and document their results. Rules of thumb in every chapter provide guidance on best practices in the application and interpretation of PLS-SEM.
Statistical Analysis of Questionnaires: A Unified Approach Based on R and Stata presents specialized statistical methods for analyzing data collected by questionnaires. The book takes an applied approach to testing and measurement tasks, mirroring the growing use of statistical methods and software in education, psychology, sociology, and other fields. It is suitable for graduate students in applied statistics and psychometrics and practitioners in education, health, and marketing. The book covers the foundations of classical test theory (CTT), test reliability, validity, and scaling as well as item response theory (IRT) fundamentals and IRT for dichotomous and polytomous items. The authors explore the latest IRT extensions, such as IRT models with covariates, multidimensional IRT models, IRT models for hierarchical and longitudinal data, and latent class IRT models. They also describe estimation methods and diagnostics, including graphical diagnostic tools, parametric and nonparametric tests, and differential item functioning. Stata and R code is included for each method. To enhance comprehension, the book employs real datasets in the examples and illustrates the software outputs in detail. The datasets are available on the authors' web page.
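For dichotomous items, the IRT models covered here build on the two-parameter logistic model, written below in standard notation as my addition rather than a quotation from the book:

```latex
\Pr(X_{ij} = 1 \mid \theta_i) \;=\; \frac{\exp\{a_j(\theta_i - b_j)\}}{1 + \exp\{a_j(\theta_i - b_j)\}}
```

Here theta_i is the latent trait of person i, b_j the difficulty and a_j the discrimination of item j; the Rasch model is the special case with all a_j equal to 1.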
This book highlights the rise of the Strauss-Corbin-Gioia (SCG) methodology as an important paradigm in qualitative research in the social sciences, and demonstrates how the SCG methodology can be operationalized and enhanced using RQDA. It also provides a technical and methodological review of RQDA as a new CAQDAS tool. Covering various techniques, it offers methodological guidance on how to connect CAQDAS tools with accepted paradigms, particularly the SCG methodology, to produce high-quality qualitative research and includes step-by-step instructions on using RQDA under the SCG qualitative research paradigm. Lastly, it comprehensively discusses methodological issues in qualitative research. This book is useful for qualitative scholars, PhD/postdoctoral students and students taking qualitative methodology courses in the broader social sciences, and those who are familiar with programming languages and wish to cross over to qualitative data analysis. "At long last! We now have a qualitative data-analysis approach that enhances the use of a systematic methodology for conducting qualitative research. Chandra and Shang should be applauded for making our research lives a lot easier. And to top it all off, it's free." Dennis Gioia, Robert & Judith Auritt Klein Professor of Management, Smeal College of Business at Penn State University, USA "While we have a growing library of books on qualitative data analysis, this new volume provides a much needed new perspective. By combining a sophisticated understanding of qualitative research with an impressive command of R, the authors provide an important new toolkit for qualitative researchers that will improve the depth and rigor of their data analysis. And given that R is open source and freely available, their approach solves the all too common problem of access that arises from the prohibitive cost of more traditional qualitative data analysis software. Students and seasoned researchers alike should take note!" Nelson Phillips, Abu Dhabi Chamber Chair in Strategy and Innovation, Imperial College Business School, United Kingdom "This helpful book does what it sets out to do: offers a guide for systematizing and building a trail of evidence by integrating RQDA with the Gioia approach to analyzing inductive data. The authors provide easy-to-follow yet detailed instructions underpinned by sound logic, explanations and examples. The book makes me want to go back to my old data and start over!" Nicole Coviello, Lazaridis Research Professor, Wilfrid Laurier University, Canada "Qualitative Research Using R: A Systematic Approach guides aspiring researchers through the process of conducting a qualitative study with the assistance of the R programming language. It is the only textbook that offers "click-by-click" instruction in how to use RQDA software to carry out analysis. This book will undoubtedly serve as a useful resource for those interested in learning more about R as applied to qualitative or mixed methods data analysis. Helpful as well is the six-step procedure for carrying out a grounded-theory type study (the "Gioia approach") with the support of RQDA software, making it a comprehensive resource for those interested in innovative qualitative methods and uses of CAQDAS tools." Trena M. Paulus, Professor of Education, University of Georgia, USA
This book brings together selected peer-reviewed contributions from various research fields in statistics, and highlights the diverse approaches and analyses related to real-life phenomena. Major topics covered in this volume include, but are not limited to, Bayesian inference, the likelihood approach, pseudo-likelihoods, regression, time series, and data analysis, as well as applications in the life and social sciences. The software packages used in the papers are made available by the authors. This book is a result of the 47th Scientific Meeting of the Italian Statistical Society, held at the University of Cagliari, Italy, in 2014.
Graphics are great for exploring data, but how can they be used for looking at the large datasets that are commonplace today? This book shows ways of visualizing large datasets, whether large in numbers of cases, large in numbers of variables, or large in both. Data visualization is useful for data cleaning, exploring data, identifying trends and clusters, spotting local patterns, evaluating modeling output, and presenting results. It is essential for exploratory data analysis and data mining. Data analysts, statisticians, computer scientists, and indeed anyone who has to explore a large dataset of their own, should benefit from reading this book. New approaches to graphics are needed to visualize the information in large datasets and most of the innovations described in this book are developments of standard graphics. There are considerable advantages in extending displays which are well-known and well-tried, both in understanding how best to make use of them in your work and in presenting results to others. This should also make the book readily accessible for readers who already have a little experience of drawing statistical graphics. All ideas are illustrated with displays from analyses of real datasets and the authors emphasize the importance of interpreting displays effectively. Graphics should be drawn to convey information and the book includes many insightful examples. From the reviews: "Anyone interested in modern techniques for visualizing data will be well rewarded by reading this book. There is a wealth of important plotting types and techniques." Paul Murrell for the Journal of Statistical Software, December 2006 "This fascinating book looks at the question of visualizing large datasets from many different perspectives. Different authors are responsible for different chapters and this approach works well in giving the reader alternative viewpoints of the same problem. Interestingly the authors have cleverly chosen a definition of 'large dataset'. Essentially they focus on datasets with the order of a million cases. As the authors point out there are now many examples of much larger datasets but by limiting to ones that can be loaded in their entirety in standard statistical software they end up with a book that has great utility to the practitioner rather than just the theorist. Another very attractive feature of the book is the many colour plates, showing clearly what can now routinely be seen on the computer screen. The interactive nature of data analysis with large datasets is hard to reproduce in a book but the authors make an excellent attempt to do just this." P. Marriott for the Short Book Reviews of the ISI