Problem Solving and Data Analysis using Minitab presents example-based learning to help readers understand how to use MINITAB 16 for statistical analysis and problem solving, with a focus on Six Sigma statistical methodology. Each example and exercise is broken down into the exact steps that must be followed in order to take the reader through key learning points and work through complex analyses. Exercises at the end of each example allow readers to check that they have understood the key learning points. Key features:
- Provides a step-by-step guide to problem solving and statistical analysis using Minitab 16, which is also compatible with version 15.
- Includes fully worked examples with graphics showing menu selections and Minitab outputs.
- Uses example-based learning that the reader can work through at their own pace.
- Contains hundreds of screenshots to aid the reader, along with explanations of the statistics being performed and interpretation of results.
- Presents the core statistical techniques used by Six Sigma Black Belts.
- Contains examples, exercises and solutions throughout, and is supported by an accompanying website featuring the example data sets.
Making Six Sigma statistical methodology accessible to beginners, this book is aimed at numerical professionals, students and academics who wish to learn and apply statistical techniques for problem solving, process improvement or data analysis while keeping mathematical theory to a minimum.
Growth curve models in longitudinal studies are widely used to model population size, body height, biomass, fungal growth, and other variables in the biological sciences, but these statistical methods for modeling growth curves and analyzing longitudinal data also extend to general statistics, economics, public health, demographics, epidemiology, statistical quality control, sociology, nano-biotechnology, fluid mechanics, and other applied areas. There is no one-size-fits-all approach to growth measurement. The selected papers in this volume build on presentations from the GCM workshop held at the Indian Statistical Institute, Giridih, on March 28-29, 2016. They represent recent trends in GCM research across different subject areas, both theoretical and applied. The book offers tools and possibilities for further work, through both new techniques and modifications of existing ones. The volume includes original studies, theoretical findings and case studies from a wide range of applied work, all externally refereed to the high quality standards of leading journals in the field.
This uniquely accessible book helps readers use CABology to solve real-world business problems and gain real competitive advantage. It provides reliable, concise information on the real benefits, usage and operationalization aspects of the "Trio Wave" of cloud, analytics and big data. Anyone who thinks this game-changing technology is slow-paced needs to think again: the dynamics of global technology and business are changing, and the book argues that businesses must transform themselves in alignment with the Trio Wave if they want to survive and excel in the future. CABology focuses on the art and science of optimizing business goals to deliver true value and benefits to the customer through cloud, analytics and big data, and it offers businesses of all sizes a structured and comprehensive way of discovering the real benefits, usage and operationalization aspects of utilizing the Trio Wave.
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in "big data" situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on future research directions, the contributions will benefit graduate students and researchers in computational biology, statistics and the machine learning community.
This book presents a comprehensive study of multivariate time series with a linear state space structure. The emphasis is on both the clarity of the theoretical concepts and efficient algorithms for implementing the theory. In particular, it investigates the relationship between VARMA and state space models, including canonical forms, and highlights the relationship between Wiener-Kolmogorov and Kalman filtering, both with an infinite and a finite sample. A further strength of the book lies in the numerous algorithms included for state space models that take advantage of the recursive nature of the models; many of these algorithms can be made robust, fast, reliable and efficient. The book is accompanied by a MATLAB package called SSMMATLAB and a webpage presenting implemented algorithms with many examples and case studies. Though it lays a solid theoretical foundation, the book also focuses on practical application and includes exercises in each chapter. It is intended for researchers and students who work with linear state space models, are familiar with linear algebra and possess some knowledge of statistics.
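The recursive structure of state space algorithms that the description emphasizes can be illustrated with a minimal scalar Kalman filter. This sketch is not taken from the book (whose algorithms are implemented in MATLAB via SSMMATLAB); the local-level model, noise variances and data below are invented purely for illustration.

```python
# Minimal scalar Kalman filter for a hypothetical local-level model:
#   state:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
#   observation: y_t = x_t + v_t,      v_t ~ N(0, r)
# Each step uses only the previous estimate, which is the recursion
# that state space algorithms exploit.

def kalman_filter(ys, q=0.1, r=1.0, x0=0.0, p0=1.0):
    x, p = x0, p0                 # state estimate and its variance
    estimates = []
    for y in ys:
        p = p + q                 # predict: variance grows by state noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (y - x)       # update with the innovation y - x
        p = (1 - k) * p           # filtered variance shrinks
        estimates.append(x)
    return estimates

print(kalman_filter([1.0, 1.2, 0.9, 1.1]))
```

With these invented observations the filtered estimates climb from the prior toward the level of the data, one recursion per observation.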
This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, held in Florence, Italy, on June 19-21, 2016. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs working in Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative (e.g., big data and networks).
This book offers an original and broad exploration of the fundamental methods of Clustering and Combinatorial Data Analysis, presenting new formulations and ideas within this very active field. With extensive introductions, formal and mathematical developments and real case studies, it gives readers a deeper understanding of the mutual relationships between these methods, which are clearly expressed with respect to three facets: logical, combinatorial and statistical. Using a relational mathematical representation, all types of data structures can be handled in precise and unified ways, which the author highlights in three stages:
- clustering a set of descriptive attributes;
- clustering a set of objects or a set of object categories;
- establishing the correspondence between these two dual clusterings.
Tools for interpreting the reasons for a given cluster or clustering are also included. Foundations and Methods in Combinatorial and Statistical Data Analysis and Clustering will be a valuable resource for students and researchers interested in Data Analysis, Clustering, Data Mining and Knowledge Discovery.
The advancement of computing and communication technologies has profoundly accelerated the development and deployment of complex enterprise systems, making their implementation important across corporate and industrial organizations worldwide. The Handbook of Research on Enterprise Systems addresses the field of enterprise systems with more breadth and depth than any other resource, covering progressive technologies, leading theories, and advanced applications. Comprising 27 authoritative contributions by over 45 of the world's leading experts on enterprise systems from 16 countries, this collection of highly developed research extends the field and offers libraries an unrivaled reference. The title features comprehensive coverage of each specific topic, highlighting recent trends and describing the latest advances in the field; more than 800 references to existing literature and research on enterprise systems; and a compendium of over 200 key terms with detailed definitions. It is organized by topic and indexed, making it a convenient reference for IT/IS scholars and professionals, and it features cross-referencing of key terms, figures, and information pertinent to enterprise systems.
This textbook examines empirical linguistics from a theoretical linguist's perspective. It provides both a theoretical discussion of what quantitative corpus linguistics entails and detailed, hands-on, step-by-step instructions for implementing the techniques in the field. The statistical methodology and R-based coding in this book teach readers basic and then more advanced skills for working with large data sets in their linguistics research and studies. Massive data sets are now more than ever the basis for work that ranges from usage-based linguistics to the far reaches of applied linguistics. This book presents much of the methodology in a corpus-based approach; however, the corpus-based methods covered here are also essential components of recent developments in sociolinguistics, historical linguistics, computational linguistics, and psycholinguistics. Material from the book will also appeal to researchers in digital humanities and the many non-linguistic fields that use textual data analysis and text-based sensorimetrics. Chapters cover topics including corpus processing, frequency data, and clustering methods. Case studies illustrate each chapter with accompanying data sets, R code, and exercises for readers. This book may be used in advanced undergraduate courses, graduate courses, and self-study.
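The corpus processing and frequency data that the description mentions typically begin with a simple token-frequency table. The book itself works in R; the following Python sketch, with an invented one-line mini-corpus, only illustrates the general idea.

```python
# Token frequencies for a tiny hypothetical corpus. Real corpus work
# (and the book's own examples) would use much larger data and R.
from collections import Counter

corpus = "the cat sat on the mat the cat slept"
tokens = corpus.lower().split()       # naive whitespace tokenization
freq = Counter(tokens)                # raw frequency table

# Relative frequencies: a common first step before clustering or modelling
total = sum(freq.values())
rel = {w: n / total for w, n in freq.most_common()}

print(freq.most_common(2))            # the two most frequent word types
```

Even this toy table shows the skewed distributions (a few very frequent function words, a long tail of rare ones) that corpus-statistical methods have to handle.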
This book covers original research and the latest advances in symbolic, algebraic and geometric computation; computational methods for differential and difference equations; symbolic-numerical computation; mathematics software design and implementation; and scientific and engineering applications. It is based on invited talks, special sessions and contributed papers presented at the 9th (Fukuoka, Japan, 2009) and 10th (Beijing, China, 2012) Asian Symposium on Computer Mathematics (ASCM). Thirty selected and refereed articles present the conference participants' ideas and views on researching mathematics using computers.
This text presents a wide-ranging and rigorous overview of nearest neighbor methods, one of the most important paradigms in machine learning. Now in one self-contained volume, the book systematically covers key statistical, probabilistic, combinatorial and geometric ideas for understanding, analyzing and developing nearest neighbor methods. Gérard Biau is a professor at Université Pierre et Marie Curie (Paris). Luc Devroye is a professor at the School of Computer Science at McGill University (Montreal).
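The nearest neighbor paradigm the book analyzes can be sketched in a few lines: classify a query point with the label of its closest training point. The toy data below are invented for illustration; the book's treatment is theoretical rather than code-based.

```python
# Minimal 1-nearest-neighbour classifier over invented 2-D points.
import math

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`."""
    best_label, best_dist = None, math.inf
    for point, label in train:
        d = math.dist(point, query)       # Euclidean distance
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Hypothetical labelled training set
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((1.0, 1.0), "b")]
print(nearest_neighbor(train, (0.9, 0.8)))   # closest training point is (1.0, 1.0)
```

The statistical questions the book studies start exactly here: how well does this rule (and its k-neighbour generalizations) perform as the training set grows?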
This book is a comprehensive guide to qualitative comparative analysis (QCA) using R. Using Boolean algebra to implement principles of comparison used by scholars engaged in the qualitative study of macro social phenomena, QCA acts as a bridge between the quantitative and the qualitative traditions. The QCA package for R, created by the author, facilitates QCA within a graphical user interface. This book provides the most current information on the latest version of the QCA package, which combines written commands with a cross-platform interface. Beginning with a brief introduction to the concept of QCA, this book moves from theory to calibration, from analysis to factorization, and hits on all the key areas of QCA in between. Chapters one through three are introductory, familiarizing the reader with R, the QCA package, and elementary set theory. The next few chapters introduce important applications of the package beginning with calibration, analysis of necessity, analysis of sufficiency, parameters of fit, negation and factorization, and the construction of Venn diagrams. The book concludes with extensions to the classical package, including temporal applications and panel data. Providing a practical introduction to an increasingly important research tool for the social sciences, this book will be indispensable for students, scholars, and practitioners interested in conducting qualitative research in political science, sociology, business and management, and evaluation studies.
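One of the "parameters of fit" mentioned in the description, the consistency of a sufficiency claim, can be sketched directly. In fuzzy-set QCA the consistency of "X is sufficient for Y" is commonly computed as sum(min(x, y)) / sum(x) over the cases; the membership scores below are invented, and the book itself works with the QCA package for R rather than Python.

```python
# Consistency of the sufficiency claim X -> Y in fuzzy-set QCA:
#   sum over cases of min(x, y), divided by the sum of x.
# A value near 1 means cases' membership in X is (almost) always
# matched or exceeded by their membership in Y.

def sufficiency_consistency(xs, ys):
    """Consistency of 'X is sufficient for Y' over paired set memberships."""
    return sum(min(x, y) for x, y in zip(xs, ys)) / sum(xs)

# Four hypothetical cases with fuzzy-set membership scores in X and Y
x = [1.0, 0.8, 0.6, 0.2]
y = [1.0, 0.9, 0.4, 0.9]
print(round(sufficiency_consistency(x, y), 3))
```

With crisp (0/1) memberships the same formula reduces to the share of X-cases that are also Y-cases, which is the bridge between the Boolean and fuzzy-set sides of QCA.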
This volume collects selected, peer-reviewed contributions from the 2nd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Cádiz, Spain, on June 11-16, 2014, and sponsored by the American Statistical Association, the Institute of Mathematical Statistics, the Bernoulli Society for Mathematical Statistics and Probability, the Journal of Nonparametric Statistics, and Universidad Carlos III de Madrid. The 15 articles are a representative sample of the 336 contributed papers presented at the conference. They cover topics such as high-dimensional data modelling, inference for stochastic processes and for dependent data, nonparametric and goodness-of-fit testing, nonparametric curve estimation, object-oriented data analysis, and semiparametric inference. The aim of ISNPS 2014 was to bring together recent advances and trends in several areas of nonparametric statistics in order to facilitate the exchange of research ideas, promote collaboration among researchers from around the globe, and contribute to the further development of the field.
This volume conveys some of the surprises, puzzles and success stories in high-dimensional and complex data analysis and related fields. Its peer-reviewed contributions showcase recent advances in variable selection, estimation and prediction strategies for a host of useful models, as well as essential new developments in the field. The continued and rapid advancement of modern technology now allows scientists to collect data of unprecedented size and complexity. Examples include epigenomic data, genomic data, proteomic data, high-resolution image data, high-frequency financial data, functional and longitudinal data, and network data. Simultaneous variable selection and estimation is one of the key statistical problems involved in analyzing such big and complex data. The purpose of this book is to stimulate research and foster interaction between researchers in the area of high-dimensional data analysis. More concretely, its goals are to: 1) highlight and expand the breadth of existing methods in big data and high-dimensional data analysis and their potential for the advancement of both the mathematical and statistical sciences; 2) identify important directions for future research in the theory of regularization methods, in algorithmic development, and in methodologies for different application areas; and 3) facilitate collaboration between theoretical and subject-specific researchers.
Pulsar timing is a promising method for detecting gravitational waves in the nano-Hertz band. In his prize-winning Ph.D. thesis, presented in this volume, Rutger van Haasteren deals with how one takes thousands of seemingly random timing residuals, as measured by pulsar observers, and extracts information about the presence and character of the gravitational waves in the nano-Hertz band that are washing over our Galaxy. The author presents a sophisticated mathematical algorithm that addresses this issue; his algorithm is probably the best developed of those currently in use in the Pulsar Timing Array community. Chapter 3 describes the gravitational-wave memory effect, one of the first treatments of this interesting effect in relation to pulsar timing, which may become observable in future Pulsar Timing Array projects. The last part of the work is dedicated to combining the European pulsar timing data sets in order to search for gravitational waves. This study has placed the most stringent limit to date on the intensity of gravitational waves produced by pairs of supermassive black holes orbiting each other in distant galaxies, as well as those that may be produced by vibrating cosmic strings. Rutger van Haasteren won the 2011 GWIC Thesis Prize of the Gravitational Wave International Community for this innovative work in various directions of the search for gravitational waves by pulsar timing.