- includes MATLAB (R) fundamentals, matrices, arrays, general graphics and specialized plots for quality assurance problems, script files, and ordinary and partial differential equations
- gives calculations of six sigma, total quality management, time series forecasting, reliability, process improvement, metrology, quality control and assurance, and measurement and testing techniques
- provides tools for graphical presentation, basic and special statistics and testing, ordinary and partial differential equation solvers, and fitting tools
- includes comprehensive command information in tables

Many books are available on MATLAB (R) programming for engineers in general or in specific areas, but none in the highly topical field of quality assurance (QA). MATLAB (R) in Quality Assurance Sciences fills this gap as a compact guide for students, engineers, and scientists in this field. It concentrates on MATLAB (R) fundamentals with examples of application to a wide range of current problems, from general, nano- and bio-technology and statistical control to medicine and industrial management. Examples cover both introductory and advanced levels, comprising calculations of total quality management, six sigma, time series, process improvement, metrology, quality control, human factors in quality assurance, measurement and testing techniques, quality project and function management, and customer satisfaction. The book covers key topics, including: the basics of the software with examples; graphics and representations; numerical computation; scripts and functions for QA calculations; ODE and PDEPE solvers applied to QA problems; curve fitting and time series tool interfaces in calculations of quality; and statistics calculations applied to quality testing.
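The process-capability side of the six-sigma calculations mentioned above is simple enough to sketch outside MATLAB. The following Python fragment is not from the book; the function name and the specification limits are made up for illustration. It computes the standard capability indices Cp and Cpk from a process mean, standard deviation, and spec limits:

```python
# Illustrative sketch: process capability indices (Cp, Cpk), a standard
# quality-control calculation of the kind the book implements in MATLAB.

def process_capability(mean, std, lsl, usl):
    """Return (Cp, Cpk) for a process with the given mean and standard
    deviation against lower/upper specification limits."""
    cp = (usl - lsl) / (6 * std)                    # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * std)   # actual capability, penalizes off-center processes
    return cp, cpk

# A perfectly centered process: Cp and Cpk coincide.
cp, cpk = process_capability(mean=10.0, std=0.5, lsl=8.0, usl=12.0)
print(cp, cpk)
```

A Cpk of at least 1.33 is a common rule-of-thumb threshold for a capable process; here both indices come out to 4/3, exactly that boundary.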
This book provides insights into important new developments in the area of statistical quality control and critically discusses methods used in on-line and off-line statistical quality control. The book is divided into three parts: Part I covers statistical process control, Part II deals with design of experiments, while Part III focuses on fields such as reliability theory and data quality. The 12th International Workshop on Intelligent Statistical Quality Control (Hamburg, Germany, August 16-19, 2016) was jointly organized by Professors Sven Knoth and Wolfgang Schmid. The contributions presented in this volume were carefully selected and reviewed by the conference's scientific program committee. Taken together, they bridge the gap between theory and practice, making the book of interest to both practitioners and researchers in the field of quality control.
Applied Statistics for Environmental Science with R presents the theory and application of statistical techniques in environmental science and aids researchers in choosing the appropriate statistical technique for analyzing their data. Focusing on the use of univariate and multivariate statistical methods, this book acts as a step-by-step resource to facilitate understanding in the use of R statistical software for interpreting data in the field of environmental science. Researchers utilizing statistical analysis in environmental science and engineering will find this book to be essential in solving their day-to-day research problems.
- Immediately implementable code, with extensive and varied illustrations of graph variants and layouts.
- Examples and exercises across a variety of real-life contexts, including business, politics, education, social media and crime investigation.
- Dedicated chapter on graph visualization methods.
- Practical walkthroughs of common methodological uses: finding influential actors in groups, discovering hidden community structures, facilitating diverse interaction in organizations, detecting political alignment, and determining what influences connection and attachment.
- Various downloadable data sets for use both in class and in individual learning projects.
- Final chapter dedicated to individual or group project examples.
This advanced textbook explores small area estimation techniques, covers the underlying mathematical and statistical theory, and offers hands-on support with their implementation. It presents the theory in a rigorous way and compares and contrasts various statistical methodologies, helping readers understand how to develop new methodologies for small area estimation. It also includes numerous sample applications of small area estimation techniques. The underlying R code is provided in the text and applied to four datasets that mimic data from labor markets and living conditions surveys, where the socioeconomic indicators include the small area estimation of total unemployment, unemployment rates, average annual household incomes and poverty indicators. Given its scope, the book will be useful for master's and PhD students, and for official and other applied statisticians.
This contributed book focuses on major aspects of statistical quality control, shares insights into important new developments in the field, and adapts established statistical quality control methods for use in e.g. big data, network analysis and medical applications. The content is divided into two parts, the first of which mainly addresses statistical process control, also known as statistical process monitoring. In turn, the second part explores selected topics in statistical quality control, including measurement uncertainty analysis and data quality. The peer-reviewed contributions gathered here were originally presented at the 13th International Workshop on Intelligent Statistical Quality Control, ISQC 2019, held in Hong Kong on August 12-14, 2019. Taken together, they bridge the gap between theory and practice, making the book of interest to both practitioners and researchers in the field of statistical quality control.
Now in its second edition, this introductory statistics textbook conveys the essential concepts and tools needed to develop and nurture statistical thinking. It presents descriptive, inductive and explorative statistical methods and guides the reader through the process of quantitative data analysis. This revised and extended edition features new chapters on logistic regression, simple random sampling, including bootstrapping, and causal inference. The text is primarily intended for undergraduate students in disciplines such as business administration, the social sciences, medicine, politics, and macroeconomics. It features a wealth of examples, exercises and solutions with computer code in the statistical programming language R, as well as supplementary material that will enable the reader to quickly adapt the methods to their own applications.
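As a taste of the bootstrap methods the new edition covers, here is a sketch of a percentile bootstrap confidence interval. The book's own code is in R; the function name, data, and seed below are invented for this Python illustration of the resampling idea:

```python
# Minimal percentile-bootstrap sketch (illustration only; the book uses R).
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for statistic `stat`:
    resample the data with replacement, recompute the statistic each time,
    and read off the alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

sample = [4.1, 5.0, 5.5, 4.7, 6.2, 5.1, 4.9, 5.8, 5.3, 4.6]
lo, hi = bootstrap_ci(sample, mean)
print(lo, hi)  # an approximate 95% interval for the mean
```

The same recipe works for any plug-in statistic (median, trimmed mean, correlation) by swapping the `stat` argument.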
Multilevel and Longitudinal Modeling with IBM SPSS, Third Edition, demonstrates how to use the multilevel and longitudinal modeling techniques available in IBM SPSS Versions 25-27. Annotated screenshots with all relevant output provide readers with a step-by-step understanding of each technique as they are shown how to navigate the program. Throughout, diagnostic tools, data management issues, and related graphics are introduced. SPSS commands show the flow of the menu structure and how to facilitate model building, while annotated syntax is also available for those who prefer this approach. Extended examples illustrating the logic of model development and evaluation are included throughout the book, demonstrating the context and rationale of the research questions and the steps around which the analyses are structured. The book opens with the conceptual and methodological issues associated with multilevel and longitudinal modeling, followed by a discussion of SPSS data management techniques that facilitate working with multilevel, longitudinal, or cross-classified data sets. The next few chapters introduce the basics of multilevel modeling, developing a multilevel model, extensions of the basic two-level model (e.g., three-level models, models for binary and ordinal outcomes), and troubleshooting techniques for everyday programming and modeling problems along with potential solutions. Models for investigating individual and organizational change are developed next, followed by models with multivariate outcomes and, finally, models with cross-classified and multiple membership data structures. The book concludes with thoughts about ways to expand on the various multilevel and longitudinal modeling techniques introduced and issues (e.g., missing data, sample weights) to keep in mind in conducting multilevel analyses.

Key features of the third edition:
- Thoroughly updated throughout to reflect IBM SPSS Versions 26-27.
- Introduction to fixed-effects regression for examining change over time where random-effects modeling may not be an optimal choice.
- Additional treatment of key topics specifically aligned with multilevel modeling (e.g., models with binary and ordinal outcomes).
- Expanded coverage of models with cross-classified and multiple membership data structures.
- Added discussion of model checking for improvement (e.g., examining residuals, locating outliers).
- Further discussion of alternatives for dealing with missing data and the use of sample weights within multilevel data structures.

Supported by online data sets, the book's practical approach makes it an essential text for graduate-level courses on multilevel, longitudinal, or latent variable modeling, multivariate statistics, or advanced quantitative techniques taught in departments of business, education, health, psychology, and sociology. The book will also prove appealing to researchers in these fields. It is designed to provide an excellent supplement to Heck and Thomas's An Introduction to Multilevel Modeling Techniques, Fourth Edition; however, it can also be used with any multilevel or longitudinal modeling book or as a stand-alone text.
This book is published open access under a CC BY 4.0 license. It presents computer programming as a key method for solving mathematical problems. This second edition of the well-received book has been extensively revised: all code is now written in Python version 3.6 (no longer version 2.7). In addition, the first two chapters of the previous edition have been extended and split up into five new chapters, thus expanding the introduction to programming from 50 to 150 pages. Throughout the book, the explanations provided are now more detailed, previous examples have been modified, and new sections, examples and exercises have been added. Also, a number of small errors have been corrected. The book was inspired by the Springer book TCSE 6: A Primer on Scientific Programming with Python (by Langtangen), but the style employed is more accessible and concise, in keeping with the needs of engineering students. The book outlines the shortest possible path from no previous experience with programming to a set of skills that allows students to write simple programs for solving common mathematical problems with numerical methods in the context of engineering and science courses. The emphasis is on generic algorithms, clean program design, the use of functions, and automatic tests for verification.
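The combination the blurb describes, a numerical method plus an automatic check against a known answer, looks roughly like the following sketch (my own example, not taken from the book): the forward Euler method for an ordinary differential equation, verified against the exact solution.

```python
# Forward Euler for y' = f(t, y): a generic first-order ODE integrator
# with an automatic verification step, in the spirit of the book.
import math

def euler(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n steps; return y(t1)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # one explicit Euler step
        t += h
    return y

# Test problem: y' = -y, y(0) = 1, whose exact solution is exp(-t).
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 10000)
exact = math.exp(-1.0)
print(abs(approx - exact) < 1e-4)  # prints True: error shrinks linearly in h
```

Because Euler's method is first-order, halving the step size roughly halves the error, which is exactly the kind of property an automatic test can assert.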
This book offers postgraduate and early career researchers in accounting and information systems a guide to choosing, executing and reporting appropriate data analysis methods to answer their research questions. It provides readers with a basic understanding of the steps that each method involves, and of the facets of the analysis that require special attention. Rather than presenting an exhaustive overview of the methods or explaining them in detail, the book serves as a starting point for developing data analysis skills: it provides hands-on guidelines for conducting the most common analyses and reporting results, and includes pointers to more extensive resources. Comprehensive yet succinct, the book is brief and written in a language that everyone can understand - from students to those employed by organizations wanting to study the context in which they work. It also serves as a refresher for researchers who have learned data analysis techniques previously but who need a reminder for the specific study they are involved in.
R Visualizations: Derive Meaning from Data focuses on one of the two major topics of data analytics: data visualization, a.k.a. computer graphics. In the book, the major R systems for visualization are discussed, organized by topic rather than by system. Anyone doing data analysis will be shown how to use R to generate any of the basic visualizations with the R visualization systems. Further, this book introduces the author's lessR system, which can always accomplish a visualization with less coding than other systems, sometimes dramatically so, and also provides accompanying statistical analyses.

Key Features:
- Presents thorough coverage of the leading R visualization system, ggplot2.
- Gives specific guidance on using base R graphics to attain visualizations of the same quality as those provided by ggplot2.
- Shows how to create a wide range of data visualizations: distributions of categorical and continuous variables, many types of scatterplots including those with a third variable, time series, and maps.
- Organizes the various approaches to R graphics by topic instead of by system.
- Presents recent work on interactive visualization in R.

David W. Gerbing received his PhD from Michigan State University in 1979 in quantitative analysis and is currently a professor of quantitative analysis in the School of Business at Portland State University. He has published extensively in the social and behavioral sciences with a focus on quantitative methods. His lessR package has been in development since 2009.
Graphics are great for exploring data, but how can they be used for looking at the large datasets that are commonplace today? This book shows ways of visualizing large datasets, whether large in numbers of cases, large in numbers of variables, or large in both. Data visualization is useful for data cleaning, exploring data, identifying trends and clusters, spotting local patterns, evaluating modeling output, and presenting results. It is essential for exploratory data analysis and data mining. Data analysts, statisticians, computer scientists, and indeed anyone who has to explore a large dataset of their own, should benefit from reading this book. New approaches to graphics are needed to visualize the information in large datasets, and most of the innovations described in this book are developments of standard graphics. There are considerable advantages in extending displays which are well-known and well-tried, both in understanding how best to make use of them in your work and in presenting results to others. It should also make the book readily accessible for readers who already have a little experience of drawing statistical graphics. All ideas are illustrated with displays from analyses of real datasets, and the authors emphasize the importance of interpreting displays effectively. Graphics should be drawn to convey information, and the book includes many insightful examples. From the reviews: "Anyone interested in modern techniques for visualizing data will be well rewarded by reading this book. There is a wealth of important plotting types and techniques." Paul Murrell for the Journal of Statistical Software, December 2006 "This fascinating book looks at the question of visualizing large datasets from many different perspectives. Different authors are responsible for different chapters, and this approach works well in giving the reader alternative viewpoints of the same problem.
Interestingly the authors have cleverly chosen a definition of 'large dataset'. Essentially they focus on datasets with the order of a million cases. As the authors point out there are now many examples of much larger datasets but by limiting to ones that can be loaded in their entirety in standard statistical software they end up with a book that has great utility to the practitioner rather than just the theorist. Another very attractive feature of the book is the many colour plates, showing clearly what can now routinely be seen on the computer screen. The interactive nature of data analysis with large datasets is hard to reproduce in a book but the authors make an excellent attempt to do just this." P. Marriott for the Short Book Reviews of the ISI
Easy Statistics for Food Science with R presents the application of statistical techniques to assist students and researchers who work in food science and food engineering in choosing the appropriate statistical technique. The book focuses on the use of univariate and multivariate statistical methods in the field of food science. The techniques are presented in a simplified form without relying on complex mathematical proofs. This book was written to help researchers from different fields to analyze their data and make valid decisions. The development of modern statistical packages makes the analysis of data easier than before. The book focuses on the application of statistics and correct methods for the analysis and interpretation of data. R statistical software is used throughout the book to analyze the data.
This book highlights the rise of the Strauss-Corbin-Gioia (SCG) methodology as an important paradigm in qualitative research in the social sciences, and demonstrates how the SCG methodology can be operationalized and enhanced using RQDA. It also provides a technical and methodological review of RQDA as a new CAQDAS tool. Covering various techniques, it offers methodological guidance on how to connect a CAQDAS tool with accepted paradigms, particularly the SCG methodology, to produce high-quality qualitative research, and includes step-by-step instructions on using RQDA under the SCG qualitative research paradigm. Lastly, it comprehensively discusses methodological issues in qualitative research. This book is useful for qualitative scholars, PhD/postdoctoral students and students taking qualitative methodology courses in the broader social sciences, and those who are familiar with programming languages and wish to cross over to qualitative data analysis. "At long last! We now have a qualitative data-analysis approach that enhances the use of a systematic methodology for conducting qualitative research. Chandra and Shang should be applauded for making our research lives a lot easier. And to top it all off, it's free." Dennis Gioia, Robert & Judith Auritt Klein Professor of Management, Smeal College of Business at Penn State University, USA "While we have a growing library of books on qualitative data analysis, this new volume provides a much needed new perspective. By combining a sophisticated understanding of qualitative research with an impressive command of R, the authors provide an important new toolkit for qualitative researchers that will improve the depth and rigor of their data analysis. And given that R is open source and freely available, their approach solves the all too common problem of access that arises from the prohibitive cost of more traditional qualitative data analysis software. Students and seasoned researchers alike should take note!"
Nelson Phillips, Abu Dhabi Chamber Chair in Strategy and Innovation, Imperial College Business School, United Kingdom "This helpful book does what it sets out to do: offers a guide for systematizing and building a trail of evidence by integrating RQDA with the Gioia approach to analyzing inductive data. The authors provide easy-to-follow yet detailed instructions underpinned by sound logic, explanations and examples. The book makes me want to go back to my old data and start over!" Nicole Coviello, Lazaridis Research Professor, Wilfrid Laurier University, Canada "Qualitative Research Using R: A Systematic Approach guides aspiring researchers through the process of conducting a qualitative study with the assistance of the R programming language. It is the only textbook that offers "click-by-click" instruction in how to use RQDA software to carry out analysis. This book will undoubtedly serve as a useful resource for those interested in learning more about R as applied to qualitative or mixed methods data analysis. Helpful as well is the six-step procedure for carrying out a grounded-theory type study (the "Gioia approach") with the support of RQDA software, making it a comprehensive resource for those interested in innovative qualitative methods and uses of CAQDAS tools." Trena M. Paulus, Professor of Education, University of Georgia, USA
This book brings together selected peer-reviewed contributions from various research fields in statistics, and highlights the diverse approaches and analyses related to real-life phenomena. Major topics covered in this volume include, but are not limited to, Bayesian inference, the likelihood approach, pseudo-likelihoods, regression, time series, and data analysis, as well as applications in the life and social sciences. The software packages used in the papers are made available by the authors. This book is a result of the 47th Scientific Meeting of the Italian Statistical Society, held at the University of Cagliari, Italy, in 2014.
With a strong practical focus on applications and algorithms, Computational Statistics Handbook with MATLAB (R), Third Edition covers today's most commonly used techniques in computational statistics while maintaining the same philosophy and writing style of the bestselling previous editions. The text keeps theoretical concepts to a minimum, emphasizing the implementation of the methods. New to the third edition: it is updated with the latest version of MATLAB and the corresponding version of the Statistics and Machine Learning Toolbox, and incorporates new sections on the nearest neighbor classifier, support vector machines, model checking and regularization, partial least squares regression, and multivariate adaptive regression splines. As a web resource, the authors include algorithmic descriptions of the procedures as well as examples that illustrate the use of algorithms in data analysis. The MATLAB code, examples, and data sets are available online.
- Focused on practical matters: this book will not cover Shiny concepts, but practical tools and methodologies to use for production.
- Based on experience: this book will be a formalization of several years of experience building Shiny applications.
- Original content: this book will present new methodology and tooling, not just a review of what already exists.
Partial least squares structural equation modeling (PLS-SEM) has become a standard approach for analyzing complex inter-relationships between observed and latent variables. Researchers appreciate the many advantages of PLS-SEM such as the possibility to estimate very complex models and the method's flexibility in terms of data requirements and measurement specification. This practical open access guide provides a step-by-step treatment of the major choices in analyzing PLS path models using R, a free software environment for statistical computing, which runs on Windows, macOS, and UNIX computer platforms. Adopting the R software's SEMinR package, which brings a friendly syntax to creating and estimating structural equation models, each chapter offers a concise overview of relevant topics and metrics, followed by an in-depth description of a case study. Simple instructions give readers the "how-tos" of using SEMinR to obtain solutions and document their results. Rules of thumb in every chapter provide guidance on best practices in the application and interpretation of PLS-SEM.
A unique point of this book is its low threshold: it is textually simple and at the same time full of self-assessment opportunities. Other unique points are the succinctness of the chapters, at 3 to 6 pages each; the presence of the complete command texts of the statistical methodologies reviewed; and the fact that dull scientific texts imposing an unnecessary burden on busy and jaded professionals have been left out. For readers requesting more background, theoretical and mathematical information, a note section with references is included in each chapter. The first edition in 2010 was the first publication of a complete overview of SPSS methodologies for medical and health statistics. Well over 100,000 copies of various chapters were sold within the first year of publication. There were four reasons for a rewrite. First, many important comments from readers urged a rewrite. Second, SPSS has produced many updates and upgrades, with relevant novel and improved methodologies. Third, the authors felt that the chapter texts needed some improvements for better readability: chapters have now been classified according to the outcome data, helpful for choosing your analysis rapidly, and a schematic overview of data and explanatory graphs have been added. Fourth, current data are increasingly complex, and many important methods for analysis were missing from the first edition. For that latter purpose, some more advanced methods seemed unavoidable, such as hierarchical loglinear methods, gamma and Tweedie regressions, and random intercept analyses. In order for the contents of the book to remain covered by the title, the authors renamed the book SPSS for Starters and 2nd Levelers. Special care was, nonetheless, taken to keep things as simple as possible, and simple menu commands are given. The arithmetic is still of a no-more-than high-school level. Step-by-step analyses of different statistical methodologies are given with the help of 60 SPSS data files available through the internet.
Because of the lack of time of this busy group of people, the authors have made every effort to produce a text as succinct as possible.
This book presents the latest research on the statistical analysis of functional, high-dimensional and other complex data, addressing methodological and computational aspects, as well as real-world applications. It covers topics like classification, confidence bands, density estimation, depth, diagnostic tests, dimension reduction, estimation on manifolds, high- and infinite-dimensional statistics, inference on functional data, networks, operatorial statistics, prediction, regression, robustness, sequential learning, small-ball probability, smoothing, spatial data, testing, and topological object data analysis, and includes applications in automobile engineering, criminology, drawing recognition, economics, environmetrics, medicine, mobile phone data, spectrometrics and urban environments. The book gathers selected, refereed contributions presented at the Fifth International Workshop on Functional and Operatorial Statistics (IWFOS) in Brno, Czech Republic. The workshop was originally to be held on June 24-26, 2020, but had to be postponed as a consequence of the COVID-19 pandemic. Initiated by the Working Group on Functional and Operatorial Statistics at the University of Toulouse in 2008, the IWFOS workshops provide a forum to discuss the latest trends and advances in functional statistics and related fields, and foster the exchange of ideas and international collaboration in the field.
The purpose of this book is to thoroughly prepare the reader for applied research in clustering. Cluster analysis comprises a class of statistical techniques for classifying multivariate data into groups or clusters based on their similar features. Clustering is nowadays widely used in several domains of research, such as the social sciences, psychology, and marketing, highlighting its multidisciplinary nature. This book provides an accessible and comprehensive introduction to clustering and offers practical guidelines for applying clustering tools through carefully chosen real-life datasets and extensive data analyses. The procedures addressed in this book include traditional hard clustering methods and up-to-date developments in soft clustering. Attention is paid to practical examples and applications through the open source statistical software R. Commented R code and output for conducting, step by step, complete cluster analyses are available. The book is intended for researchers interested in applying clustering methods. Basic notions on theoretical issues and on R are provided, so that professionals as well as novices with little or no background in the subject will benefit from the book.
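Hard clustering of the kind described above can be sketched in a few lines. The book's own examples use R; the following toy (Lloyd's k-means algorithm in one dimension, with invented data and starting centers) is a Python illustration of the assign-and-update loop at the heart of such methods:

```python
# Toy 1-D k-means (Lloyd's algorithm): alternate between assigning each
# point to its nearest center and moving each center to its cluster mean.
def kmeans(points, centers, iters=20):
    """Return (centers, labels) after `iters` assign/update rounds."""
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: label each point with its nearest center.
        labels = [min(range(len(centers)), key=lambda j: abs(p - centers[j]))
                  for p in points]
        # Update step: each center becomes the mean of its members
        # (kept unchanged if its cluster happens to be empty).
        new_centers = []
        for j in range(len(centers)):
            members = [p for p, l in zip(points, labels) if l == j]
            new_centers.append(sum(members) / len(members) if members else centers[j])
        centers = new_centers
    return centers, labels

pts = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
centers, labels = kmeans(pts, centers=[0.0, 10.0])
print(centers, labels)  # two clusters, around 1.0 and 9.0
```

Soft clustering methods replace the all-or-nothing labels here with membership degrees, which is the extension the book develops.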
This book provides a general framework for learning sparse graphical models with conditional independence tests. It includes complete treatments for Gaussian, Poisson, multinomial, and mixed data; unified treatments for covariate adjustments, data integration, and network comparison; unified treatments for missing data and heterogeneous data; efficient methods for joint estimation of multiple graphical models; effective methods of high-dimensional variable selection; and effective methods of high-dimensional inference. The methods possess an embarrassingly parallel structure in performing conditional independence tests, and the computation can be significantly accelerated by running in parallel on a multi-core computer or a parallel architecture. This book is intended to serve researchers and scientists interested in high-dimensional statistics, and graduate students in broad data science disciplines.

Key Features:
- A general framework for learning sparse graphical models with conditional independence tests
- Complete treatments for different types of data: Gaussian, Poisson, multinomial, and mixed data
- Unified treatments for data integration, network comparison, and covariate adjustment
- Unified treatments for missing data and heterogeneous data
- Efficient methods for joint estimation of multiple graphical models
- Effective methods of high-dimensional variable selection
- Effective methods of high-dimensional inference