The main focus of this book is on presenting advances in fuzzy statistics, and on proposing a methodology for testing hypotheses in the fuzzy environment based on the estimation of fuzzy confidence intervals, a context in which not only the data but also the hypotheses are considered to be fuzzy. The proposed method for estimating these intervals is based on the likelihood method and employs the bootstrap technique. A new metric generalizing the signed distance measure is also developed. In turn, the book presents two conceptually diverse applications in which these intervals play a role: one is a novel methodology for evaluating linguistic questionnaires at the global and individual levels; the other is an extension of multi-way analysis of variance to the space of fuzzy sets. To illustrate these approaches, the book presents several empirical and simulation-based studies with synthetic and real data sets. In closing, it presents a coherent R package called "FuzzySTs", which covers all the previously mentioned concepts with full documentation and selected use cases. Given its scope, the book will be of interest to all researchers whose work involves advanced fuzzy statistical methods.
Discover what you can do with R! Introducing the R system, covering standard regression methods, then tackling more advanced topics, this book guides users through the practical, powerful tools that the R system provides. The emphasis is on hands-on analysis, graphical display, and interpretation of data. The many worked examples, from real-world research, are accompanied by commentary on what is done and why. The companion website has code and datasets, allowing readers to reproduce all analyses, along with solutions to selected exercises and updates. Assuming basic statistical knowledge and some experience with data analysis (but not R), the book is ideal for research scientists, final-year undergraduate or graduate-level students of applied statistics, and practising statisticians. It is both for learning and for reference. This third edition expands upon topics such as Bayesian inference for regression, errors in variables, generalized linear mixed models, and random forests.
The nonequilibrium behavior of nanoscopic and biological systems, which are typically strongly fluctuating, is a major focus of current research. Lately, much progress has been made in understanding such systems from a thermodynamic perspective. However, new theoretical challenges emerge when the fluctuating system is additionally subject to time delay, e.g. due to the presence of feedback loops. This thesis advances this young and vibrant research field in several directions. The first main contribution concerns the probabilistic description of time-delayed systems; e.g. by introducing a versatile approximation scheme for nonlinear delay systems. Second, it reveals that delay can induce intriguing thermodynamic properties such as anomalous (reversed) heat flow. More generally, the thesis shows how to treat the thermodynamics of non-Markovian systems by introducing auxiliary variables. It turns out that delayed feedback is inextricably linked to nonreciprocal coupling, information flow, and to net energy input on the fluctuating level.
This monograph uses the Julia language to guide the reader through an exploration of the fundamental concepts of probability and statistics, all with a view of mastering machine learning, data science, and artificial intelligence. The text does not require any prior statistical knowledge and only assumes a basic understanding of programming and mathematical notation. It is accessible to practitioners and researchers in data science, machine learning, bio-statistics, finance, or engineering who may wish to solidify their knowledge of probability and statistics. The book progresses through ten independent chapters starting with an introduction of Julia, and moving through basic probability, distributions, statistical inference, regression analysis, machine learning methods, and the use of Monte Carlo simulation for dynamic stochastic models. Ultimately this text introduces the Julia programming language as a computational tool, uniquely addressing end-users rather than developers. It makes heavy use of over 200 code examples to illustrate dozens of key statistical concepts. The Julia code, written in a simple format with parameters that can be easily modified, is also available for download from the book's associated GitHub repository online. See what co-creators of the Julia language are saying about the book: Professor Alan Edelman, MIT: With "Statistics with Julia", Yoni and Hayden have written an easy to read, well organized, modern introduction to statistics. The code may be looked at, and understood on the static pages of a book, or even better, when running live on a computer. Everything you need is here in one nicely written self-contained reference. Dr. Viral Shah, CEO of Julia Computing: Yoni and Hayden provide a modern way to learn statistics with the Julia programming language. This book has been perfected through iteration over several semesters in the classroom. 
It prepares the reader with two complementary skills: statistical reasoning with hands-on experience, and working with large datasets through training in Julia.
The book covers computational statistics and its methodologies and applications for IoT devices. It details computational arithmetic and its influence on computational statistics, numerical algorithms in statistical application software, the basics of computer systems, statistical techniques, linear algebra and its role in optimization techniques, the evolution of optimization techniques, optimal utilization of computer resources, and the role of statistical graphics in data analysis. It also explores computational inference and the role of computer models in design of experiments, Bayesian analysis, survival analysis, and data mining in computational statistics.
This textbook presents the essential tools and core concepts of data science to public officials, policy analysts, and economists among others in order to further their application in the public sector. An expansion of the quantitative economics frameworks presented in policy and business schools, this book emphasizes the process of asking relevant questions to inform public policy. Its techniques and approaches emphasize data-driven practices, beginning with the basic programming paradigms that occupy the majority of an analyst's time and advancing to the practical applications of statistical learning and machine learning. The text considers two divergent, competing perspectives to support its applications, incorporating techniques from both causal inference and prediction. Additionally, the book includes open-sourced data as well as live code, written in R and presented in notebook form, which readers can use and modify to practice working with data.
Go from total MATLAB newbie to plotting graphs and solving equations in a flash! MATLAB is one of the most powerful and commonly used tools in the STEM field. But did you know it doesn't take an advanced degree or a ton of computer experience to learn it? MATLAB For Dummies is the roadmap you've been looking for to simplify and explain this feature-filled tool. This handy reference walks you through every step as you learn the MATLAB language and environment inside and out. Starting with straightforward basics before moving on to more advanced material like Live Functions and Live Scripts, this easy-to-read guide shows you how to make your way around MATLAB with screenshots and newly updated procedures. It includes: a comprehensive introduction to installing MATLAB, using its interface, and creating and saving your first file; full coverage of the 2020 and 2021 updates to MATLAB, with all-new screenshots and up-to-date procedures; enhanced debugging procedures and use of the Symbolic Math Toolbox; brand-new instruction on working with Live Scripts and Live Functions, designing classes, creating apps, and building projects; and intuitive walkthroughs for MATLAB's advanced features, including importing and exporting data and publishing your work. Perfect for STEM students and new professionals ready to master one of the most powerful tools in the fields of engineering, mathematics, and computing, MATLAB For Dummies is the simplest way to go from complete newbie to power user faster than you would have thought possible.
This book provides an accessible introduction and practical guidelines for applying asymmetric multidimensional scaling, cluster analysis, and related methods to one-mode two-way and three-way asymmetric data. A major objective of this book is to present to applied researchers a set of methods and algorithms for the graphical representation and clustering of asymmetric relationships. Data frequently concern measurements of asymmetric relationships between pairs of objects from a given set (e.g., subjects, variables, attributes, ...), collected in one or more matrices. Examples abound in many different fields such as psychology, sociology, marketing research, and linguistics, and more recently several applications have appeared in technological areas including cybernetics, air traffic control, robotics, and network analysis. The capabilities of the presented algorithms are illustrated by carefully chosen examples and supported by extensive data analyses. A review of the specialized statistical software available for the applications is also provided. This monograph is highly recommended to readers who need a complete and up-to-date reference on methods for the analysis of asymmetric proximity data.
These lecture notes provide a rapid, accessible introduction to Bayesian statistical methods. The course covers the fundamental philosophy and principles of Bayesian inference, including the reasoning behind the prior/likelihood model construction characteristic of Bayesian methods, through to advanced topics such as nonparametrics, Gaussian processes, and latent factor models. These advanced modelling techniques can easily be applied using computer code samples written in Python and Stan, which are integrated into the main text. Importantly, the reader will learn methods for assessing model fit and how to choose between rival modelling approaches.
Refer to the practical guidance provided in this book to develop Salesforce custom applications in a more agile, collaborative, and resilient way using Salesforce Developer Experience (DX). You will learn how to use the Salesforce Command Line Interface (CLI) to simplify working with projects, metadata, data, and orgs. The CLI integrates with your development tools of choice, such as Visual Studio Code, and with CI/CD tools to implement DevOps pipelines. Readers will also gain an understanding of the package development model, which improves application quality and maintainability by grouping metadata into highly cohesive, loosely coupled containers. Salesforce DX supports application development throughout the entire development lifecycle, where a version control system, rather than a Salesforce org, is the source of truth. It became generally available in late 2017 and has now reached a stage of feature richness and stability at which it is becoming more widely adopted. Beginning Salesforce DX provides development teams with practical, how-to examples of using Salesforce DX that go beyond the Salesforce documentation. Commands and their parameters are described, including any gotchas, and the outcome of the commands on a Salesforce org is explained. What You Will Learn * Set up a Salesforce DX development environment * Understand the key Salesforce DX concepts and the Salesforce CLI * Work with Dev Hubs, projects, orgs, metadata, and version control systems * Improve quality with test users and test data * Bootstrap pro-code development with templates * Apply Salesforce DX to an end-to-end package development project Who This Book Is For Internal teams developing custom Salesforce applications for an individual customer, or those creating commercial applications for distribution via the Salesforce AppExchange enterprise marketplace.
All team disciplines will benefit from understanding and applying Salesforce DX, including pro-code, low-code and no-code developers, testers, release managers, DevOps engineers and administrators. A secondary audience includes those needing to understand key concepts when establishing or evolving an organisation's application lifecycle management capability, such as capability leaders, architects, consultants and business analysts.
Agile may be the best-kept management secret on the planet and if you want a quickstart introduction, then Agile NOW is essential reading. Agile is a different way of thinking that’s steeped in common sense and produces immediate results. That’s why there’s a quiet revolution going on. Agile will help you design better products, get faster results, cut down costs, and keep improving as you go. With a simple system called The Golden Triangle - Prioritising, Time Boxing and Change Management - you can hit the ground running and get started immediately. Agile NOW is slim, accessible and easy to dip into - yet covers all the essential theory and provides practical advice. Agile is for everyone - from one-person start-ups to multinationals – the promise of quicker, cheaper, better has universal appeal. Agile NOW shows you how to get going fast at minimal cost.
Modeling spatial and spatio-temporal continuous processes is an important and challenging problem in spatial statistics. Advanced Spatial Modeling with Stochastic Partial Differential Equations Using R and INLA describes in detail the stochastic partial differential equations (SPDE) approach for modeling continuous spatial processes with a Matern covariance, which has been implemented using the integrated nested Laplace approximation (INLA) in the R-INLA package. Key concepts about modeling spatial processes and the SPDE approach are explained with examples using simulated data and real applications. This book has been authored by leading experts in spatial statistics, including the main developers of the INLA and SPDE methodologies and the R-INLA package. It also includes a wide range of applications: * Spatial and spatio-temporal models for continuous outcomes * Analysis of spatial and spatio-temporal point patterns * Coregionalization spatial and spatio-temporal models * Measurement error spatial models * Modeling preferential sampling * Spatial and spatio-temporal models with physical barriers * Survival analysis with spatial effects * Dynamic space-time regression * Spatial and spatio-temporal models for extremes * Hurdle models with spatial effects * Penalized Complexity priors for spatial models All the examples in the book are fully reproducible. Further information about this book, as well as the R code and datasets used, is available from the book website at http://www.r-inla.org/spde-book. The tools described in this book will be useful to researchers in many fields such as biostatistics, spatial statistics, environmental sciences, epidemiology, ecology and others. Graduate and Ph.D. students will also find this book and associated files a valuable resource to learn INLA and the SPDE approach for spatial modeling.
This book illustrates the potential for computer simulation in the study of modern slavery and worker abuse, and by extension in all social issues. It lays out a philosophy of how agent-based modelling can be used in the social sciences. In addressing modern slavery, Chesney considers precarious work that is vulnerable to abuse, like sweatshop labour and prostitution, and shows how agent modelling can be used to study, understand, and fight abuse in these areas. He explores the philosophy, application, and practice of agent modelling through the popular and free software NetLogo. This topical book is grounded in the technology needed to address the messy, chaotic, real-world problems that humanity faces, in this case the serious problem of abuse at work, but equally in the social sciences, which are needed to avoid the unintended consequences inherent in human responses. It includes a short but extensive NetLogo guide which readers can use to quickly learn this software and go on to develop complex models. This is an important book for students and researchers of computational social science and others interested in agent-based modelling.
Genstat 5 Release 3 is a version of the statistical system developed by practising statisticians at Rothamsted Experimental Station. It provides statistical summary, analysis, data handling, and graphics for interactive or batch users, and includes a customizable menu-based interface. Genstat is used worldwide on personal computers, workstations, and mainframe computers by statisticians, research workers, and students in all fields of application of statistics. Release 3 contains many new facilities: the analysis of ordered categorical data; generalized additive models; combination of information in multi-stratum experimental designs; extensions to the REML (residual maximum likelihood) algorithm for testing fixed effects and for catering for correlation structures between random effects; estimation of parameters of statistical distributions; further probability functions; simplified data input; and many more extensions, in high-resolution graphics, for calculations, and for manipulation. The manual has been rewritten for this release, including new chapters on Basic Statistics and REML, with extensive examples and illustrations. The text is suitable for all users of Genstat 5.
This book brings together two major trends: data science and blockchains. It is one of the first books to systematically cover the analytics aspects of blockchains, with the goal of linking traditional data mining research communities with novel data sources. Data science and big data technologies can be considered cornerstones of the data-driven digital transformation of organizations and society. The concept of blockchain is predicted to enable and spark transformation on a par with that associated with the invention of the Internet. Cryptocurrencies are the first successful use case of highly distributed blockchains, much as the World Wide Web was for the Internet. The book takes the reader through basic data exploration topics, proceeding systematically, method by method, through supervised and unsupervised learning approaches and information visualization techniques, all the way to understanding blockchain data from the network science perspective. Chapters introduce the cryptocurrency blockchain data model and methods to explore it using structured query language, association rules, clustering, classification, visualization, and network science. Each chapter introduces basic concepts, presents examples with real cryptocurrency blockchain data, and offers exercises and questions for further discussion. This approach is intended to serve as a good starting point for undergraduate and graduate students learning data science topics through cryptocurrency blockchain examples. It is also aimed at researchers and analysts who already possess good analytical and data skills, but who do not yet have the specific knowledge required to tackle analytic questions about blockchain transactions. Readers will improve their knowledge of the essential data science techniques needed to turn mere transactional information into social, economic, and business insights.
In statistics, fitting linear models to data is a general theme. This manual describes how GLIM 4--the popular software package--may be used for statistical analysis, including data manipulation and display, model fitting, and prediction. The manual has been divided into three distinct guides. The User Guide introduces and illustrates all the facilities in GLIM 4. Each chapter describes the directives relevant to a particular type of activity involved in the statistical modelling of data. The Modelling Guide presents a broad array of examples which comprise an effective introduction for new users. The Reference Guide contains a formal description of the syntax and semantics of the GLIM 4 language, of the data structures it handles, and of the directives provided, constituting a reference manual for the experienced user. This book is sure to be useful to research statisticians wherever GLIM is used.
The new edition of this book provides an easily accessible introduction to the statistical analysis of network data using R. It has been fully revised and can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This new edition has been overhauled to reflect recent changes in igraph. The material in this book is organized to flow from descriptive statistical methods to topics centered on modeling and inference with networks, with the latter separated into two sub-areas: first the modeling and inference of networks themselves, and then processes on networks. The book begins by covering tools for the manipulation of network data. Next, it addresses visualization and characterization of networks. The book then examines mathematical and statistical network modeling. This is followed by a special case of network modeling wherein the network topology must be inferred. Network processes, both static and dynamic, are addressed in the subsequent chapters. The book concludes with chapters on network flows, dynamic networks, and networked experiments. Statistical Analysis of Network Data with R, 2nd Ed. is written at a level aimed at graduate students and researchers in quantitative disciplines engaged in the statistical analysis of network data, although advanced undergraduates already comfortable with R should find the book fairly accessible as well.
This contributed book focuses on major aspects of statistical quality control, shares insights into important new developments in the field, and adapts established statistical quality control methods for use in e.g. big data, network analysis and medical applications. The content is divided into two parts, the first of which mainly addresses statistical process control, also known as statistical process monitoring. In turn, the second part explores selected topics in statistical quality control, including measurement uncertainty analysis and data quality. The peer-reviewed contributions gathered here were originally presented at the 13th International Workshop on Intelligent Statistical Quality Control, ISQC 2019, held in Hong Kong on August 12-14, 2019. Taken together, they bridge the gap between theory and practice, making the book of interest to both practitioners and researchers in the field of statistical quality control.
This book discusses all major topics in survey sampling and estimation. It covers traditional as well as advanced sampling methods for spatial populations. The book presents real-world applications of major sampling methods and illustrates them with the R software. As a large sample size is not cost-efficient, this book introduces a new method that uses domain knowledge of the negative correlation between the variable of interest and the auxiliary variable in order to control the size of a sample. In addition, the book focuses on adaptive cluster sampling, ranked-set sampling, and their applications in real life. The advanced methods discussed in the book have tremendous applications in ecology, environmental science, health science, forestry, the biosciences, and the humanities. This book is targeted as a text for undergraduate and graduate students of statistics, as well as researchers in various disciplines.
The most crucial ability for machine learning and data science is the mathematical reasoning needed to grasp their essence, rather than reliance on accumulated knowledge or experience. This textbook addresses the fundamentals of kernel methods for machine learning by considering relevant math problems and building R programs. The book's main features are as follows: The content is written in an easy-to-follow and self-contained style. The book includes 100 exercises, which have been carefully selected and refined; as their solutions are provided in the main text, readers can solve all of the exercises by reading the book. The mathematical premises of kernels are proven and the correct conclusions are provided, helping readers to understand the nature of kernels. Source programs and running examples are presented to help readers acquire a deeper understanding of the mathematics used. Once readers have a basic understanding of the functional analysis topics covered in Chapter 2, the applications are discussed in the subsequent chapters; beyond that, no prior knowledge of mathematics is assumed. This book considers both the kernel for a reproducing kernel Hilbert space (RKHS) and the kernel for a Gaussian process; a clear distinction is made between the two.
This introductory textbook presents research methods and data analysis tools in non-technical language. It explains the research process and the basics of qualitative and quantitative data analysis, including procedures and methods, analysis, interpretation, and applications, using hands-on data examples in QDA Miner Lite and IBM SPSS Statistics software. The book is divided into four parts that address study and research design; data collection, qualitative methods, and surveys; statistical methods, including hypothesis testing, regression, cluster and factor analysis; and reporting. The intended audience is business and social science students learning scientific research methods; however, given its business context, the book will be equally useful for decision-makers in businesses and organizations.
This book chronicles a 10-year introduction of blended learning at a leading technological university with a longstanding tradition of technology-enabled teaching and learning and state-of-the-art infrastructure. Hence, both teachers and students were familiar with the idea of online courses. Despite this, the longitudinal experiment did not proceed as expected. Though there were few technical problems, it required behavioural changes from teachers and learners, thus unearthing a host of socio-technical issues, challenges, and conundrums. With the undercurrent of design ideals such as "tech for good", any industrial sector must examine whether digital platforms are credible substitutes or at best complementary. In this era of Industry 4.0, higher education, like any other industry, should not be about the creative destruction of what we value in universities, but about their digital transformation. The book concludes with an agenda for large, repeatable Randomised Controlled Trials (RCTs) to validate digital platforms that could fulfil the aspirations of the key stakeholder groups - students, faculty, and regulators - as well as delving into the role of Massive Open Online Courses (MOOCs) as surrogates for "fees-free" higher education, and whether the design of such a HiEd 4.0 platform is even a credible proposition. Specifically, the book examines data-driven evidence within a design-based research methodology to present the outcomes of two alternative instructional designs evaluated: traditional lecturing and blended learning. Based on the research findings and statistical analysis, it concludes that the inexorable shift to online delivery of education must be guided by informed educational management and innovation.
This book provides a concise point of reference for the most commonly used regression methods. It begins with linear and nonlinear regression for normally distributed data, logistic regression for binomially distributed data, and Poisson regression and negative-binomial regression for count data. It then progresses to regression models for longitudinal and multi-level data structures. The volume is designed to guide the transition from classical to more advanced regression modeling, as well as to contribute to the rapid development of statistics and data science. With data and computing programs available to facilitate readers' learning experience, Statistical Regression Modeling promotes the applications of R in linear, nonlinear, longitudinal, and multi-level regression. All included datasets, as well as the associated R programs in the packages nlme and lme4 for multi-level regression, are detailed in Appendix A. This book will be valuable in graduate courses on applied regression, as well as for practitioners and researchers in the fields of data science, statistical analytics, public health, and related fields.
This book shows how information theory, probability, statistics, mathematics, and personal computers can be applied to the exploration of numbers and proportions in music. It brings the methods of scientific and quantitative thinking to questions like: What are the ways of encoding a message in music, and how can we be sure of the correct decoding? How do claims of names hidden in the notes of a score stand up to scientific analysis? How many ways are there of obtaining proportions, and are they due to chance? After thoroughly exploring the ways of encoding information in music, the ambiguities of numerical alphabets, and the words to be found "hidden" in a score, the book presents a novel way of exploring the proportions in a composition with a purpose-built computer program and gives example results from the application of the techniques. These include information theory, combinatorics, probability, hypothesis testing, Monte Carlo simulation, and Bayesian networks, presented in an easily understandable form, including their development from ancient history through the life and times of J. S. Bach, making connections between science, philosophy, art, architecture, particle physics, calculating machines, and artificial intelligence. For the practitioner the book points out the pitfalls of various psychological fallacies and biases, and includes succinct points of guidance for anyone involved in this type of research. This book will be useful to anyone who intends to use a scientific approach to the humanities, particularly music, and will appeal to anyone who is interested in the intersection between the arts and science. With a foreword by Ruth Tatlow (Uppsala University), award-winning author of Bach's Numbers: Compositional Proportion and Significance and Bach and the Riddle of the Number Alphabet. "With this study Alan Shepherd opens a much-needed examination of the wide range of mathematical claims that have been made about J. S. Bach's music, offering both tools and methodological cautions with the potential to help clarify old problems." Daniel R. Melamed, Professor of Music in Musicology, Indiana University