Using the same accessible, hands-on approach as its best-selling predecessor, the Handbook of Univariate and Multivariate Data Analysis with IBM SPSS, Second Edition explains how to apply statistical tests to experimental findings, identify the assumptions underlying the tests, and interpret the findings. This second edition now covers more topics and has been updated with the SPSS statistical package for Windows.

New to the Second Edition:
- Three new chapters on multiple discriminant analysis, logistic regression, and canonical correlation
- New section on how to deal with missing data
- Coverage of tests of assumptions, such as linearity, outliers, normality, homogeneity of variance-covariance matrices, and multicollinearity
- Discussions of the calculation of Type I error and the procedure for testing statistical significance between two correlation coefficients obtained from two samples (illustrated in the sketch below)
- Expanded coverage of factor analysis, path analysis (test of the mediation hypothesis), and structural equation modeling

Suitable for both newcomers and seasoned researchers in the social sciences, the handbook offers a clear guide to selecting the right statistical test, executing a wide range of univariate and multivariate statistical tests via the Windows and syntax methods, and interpreting the output results. The SPSS syntax files used for executing the statistical tests can be found in the appendix. Data sets employed in the examples are available on the book's CRC Press web page.
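The test of significance between two correlation coefficients from independent samples mentioned in the list above is conventionally done with Fisher's z-transformation. A minimal Python sketch of that conventional procedure, not the book's SPSS workflow; the sample sizes and correlations are made up:

```python
import math
from scipy.stats import norm

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed test of H0: rho1 == rho2 for two independent samples,
    using Fisher's z-transformation."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher z-transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 2 * (1 - norm.cdf(abs(z)))               # two-tailed p-value
    return z, p

# Hypothetical example: r = .62 (n = 85) vs r = .41 (n = 112)
z, p = compare_correlations(0.62, 85, 0.41, 112)
print(f"z = {z:.3f}, p = {p:.4f}")
```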
This book on statistical disclosure control presents the theory, applications and software implementation of the traditional approach to (micro)data anonymization, including data perturbation methods, disclosure risk, data utility, information loss and methods for simulating synthetic data. Introducing readers to the R packages sdcMicro and simPop, the book also features numerous examples and exercises with solutions, as well as case studies with real-world data, accompanied by the underlying R code so that readers can reproduce all results. The demand for and volume of data from surveys, registers or other sources containing sensitive information on persons or enterprises have increased significantly over the last several years. At the same time, privacy protection principles and regulations have imposed restrictions on the access and use of individual data. Proper and secure microdata dissemination calls for the application of statistical disclosure control methods to the data before release. This book is intended for practitioners at statistical agencies and other national and international organizations that deal with confidential data. It will also be of interest to researchers working in statistical disclosure control and the health sciences.
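One classic perturbation method from this literature is microaggregation: records are grouped into small clusters of at least k similar records, and each value is replaced by its cluster average, trading a little data utility for lower disclosure risk. A minimal single-variable sketch in Python (illustrative only; the book itself works with the R packages sdcMicro and simPop):

```python
import numpy as np

def microaggregate(values, k=3):
    """Replace each value by the mean of its size-k group of nearest
    neighbours in sorted order (the last group absorbs the remainder)."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)
    out = np.empty_like(values)
    n = len(values)
    for start in range(0, n - n % k, k):
        idx = order[start:start + k]
        # extend the final full group to cover leftover records
        if start + 2 * k > n:
            idx = order[start:]
        out[idx] = values[idx].mean()
    return out

incomes = [12_000, 13_500, 14_000, 52_000, 55_000, 58_000, 300_000]
print(microaggregate(incomes, k=3))  # extreme value is averaged away
```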
With recent advances in IT in areas such as AI and IoT, collaboration systems such as business chat, cloud services, conferencing systems, and unified communications are rapidly becoming widely used as new IT applications in global corporations' strategic activities. Through in-depth longitudinal studies of global corporations, the book presents a new theoretical framework and implications for IT-enabled dynamic capabilities using collaboration systems from the perspective of micro strategy theory and organization theory. The content of the book is based on longitudinal analyses that employ various qualitative research methods including ethnography, participant observation, action research and in-depth case studies of global corporations in Europe, the United States and Asia that actively use collaboration systems. It presents a new concept of micro dynamism whereby dynamic "IT-enabled knowledge communities" such as "IT-enabled communities of practice" and "IT-enabled strategic communities" create "IT-enabled dynamic capabilities" through the integration of four research streams - an information systems view, micro strategy view, micro organization view and knowledge-based view. The book demonstrates that collaboration systems create, maintain and develop "IT-enabled knowledge communities" within companies and are strategic IT applications for enhancing the competitiveness of companies in the ongoing creation of new innovation and the realization of sustainable growth in a 21st century knowledge-based society. This book is primarily written for academics, researchers and graduate students, but will also offer practical implications for business leaders and managers. Its use is anticipated not only in business and management schools, graduate schools and university education environments around the world but also in the broad business environment including management and leadership development training.
Matrix Algorithms in MATLAB focuses on the MATLAB code implementations of matrix algorithms. The MATLAB codes presented in the book are tested with thousands of runs on randomly generated matrices; the notation follows the MATLAB style to ensure a smooth transition from formulation to code, and each code discussed in the book is kept to within 100 lines for the sake of clarity. The book provides an overview and classification of the interrelations of various algorithms, as well as numerous examples to demonstrate code usage and the properties of the presented algorithms. Despite the wide availability of computer programs for matrix computations, the field continues to be an active area of research and development: new applications, new algorithms, and improvements to old algorithms are constantly emerging.
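The testing methodology described (thousands of runs on randomly generated matrices) is easy to emulate. A small Python/NumPy sketch of the idea, checking a QR factorization against its defining properties over many random inputs (illustrative only; the book's implementations are in MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)

def test_qr(runs=1000, n=20, tol=1e-10):
    """Check A = QR and orthonormality of Q over many random matrices."""
    worst = 0.0
    for _ in range(runs):
        A = rng.standard_normal((n, n))
        Q, R = np.linalg.qr(A)
        err_fact = np.linalg.norm(A - Q @ R) / np.linalg.norm(A)
        err_orth = np.linalg.norm(Q.T @ Q - np.eye(n))
        worst = max(worst, err_fact, err_orth)
    assert worst < tol, f"worst-case error {worst:.2e} exceeds tolerance"
    return worst

print(f"worst-case error over 1000 runs: {test_qr():.2e}")
```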
Since 1984, Geophysical Data Analysis has filled the need for a short, concise reference on inverse theory for individuals who have an intermediate background in science and mathematics. The new edition maintains the accessible and succinct manner for which it is known, with the addition of:
- MATLAB examples and problem sets
- Advanced color graphics
- Coverage of new topics, including adjoint methods; inversion by steepest descent, Monte Carlo and simulated annealing methods; and the bootstrap algorithm for determining empirical confidence intervals (sketched below)
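The bootstrap algorithm mentioned in the list above resamples the data with replacement and recomputes the estimate many times; the spread of the recomputed estimates yields an empirical confidence interval. A minimal Python sketch (the book's examples are in MATLAB, and the data here are simulated):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic."""
    data = np.asarray(data)
    stats = np.array([
        estimator(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Hypothetical travel-time residuals (seconds); 95% CI for their mean
residuals = rng.normal(loc=0.3, scale=1.2, size=50)
print(bootstrap_ci(residuals, np.mean))
```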
This title describes the basics of computer algebra and the language of Mathematica, leading toward an understanding of Mathematica that allows the reader to solve problems in physics, mathematics, and chemistry. Mathematica is the most widely used system for doing mathematical calculations by computer, including symbolic and numeric calculations and graphics. It is used in physics and other branches of science, in mathematics, in education, and in many other areas. Many important results in physics would never have been obtained without wide use of computer algebra.
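For readers unfamiliar with computer algebra: the point is that the system manipulates expressions symbolically rather than numerically. Mathematica is the book's system; purely for consistency with the other sketches here, an analogous calculation in Python with SymPy:

```python
import sympy as sp

x = sp.symbols('x')

# Symbolic differentiation, integration, and series expansion
f = sp.exp(-x**2)
print(sp.diff(f, x))                        # -2*x*exp(-x**2)
print(sp.integrate(f, (x, -sp.oo, sp.oo)))  # sqrt(pi), an exact result
print(sp.series(sp.sin(x), x, 0, 8))        # Taylor series of sin(x) at 0
```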
A Criminologist's Guide to R: Crime by the Numbers introduces the programming language R and covers the skills necessary to conduct quantitative research in criminology. By the end of this book, a person without any prior programming experience can take raw crime data, clean it, visualize it, present it using R Markdown, and change it to a format ready for analysis. A Criminologist's Guide to R focuses on skills specifically for criminology, such as spatial joins, mapping, and scraping data from PDFs; however, any social scientist looking for an introduction to R for data analysis will find it useful.

Key features:
- Introduction to RStudio, including how to change user preference settings
- Basic data exploration and cleaning: subsetting, loading data, regular expressions, aggregating data
- Graphing with ggplot2
- How to make maps (hotspot maps, choropleth maps, interactive maps)
- Webscraping and PDF scraping
- Project management: how to prepare for a project, how to decide which projects to do, the best ways to collaborate with people, how to store your code (using git), and how to test your code
OCEB 2 Certification Guide, Second Edition has been updated to cover version 2 of the BPMN standard and delivers expert insight into BPM from one of the developers of the OCEB Fundamental exam, offering full coverage of the Fundamental exam material for both the business and technical tracks to further certification. This first study guide prepares candidates to take, and pass, the OCEB Fundamental exam, explaining and building on basic concepts, focusing on key areas, and testing knowledge of all critical topics with sample questions and detailed answers. Suitable for practitioners and those newer to the field, this book provides a solid grounding in business process management based on the authors' own extensive BPM consulting experience.
This proceedings volume features top contributions in modern statistical methods from Statistics 2021 Canada, the 6th Annual Canadian Conference in Applied Statistics, held virtually on July 15-18, 2021. Papers are contributed by established and emerging scholars, covering cutting-edge and contemporary innovative techniques in statistics and data science. Major areas of contribution include Bayesian statistics; computational statistics; data science; semi-parametric regression; and stochastic methods in biology, crop science, ecology and engineering. It will be a valuable edited collection for graduate students, researchers, and practitioners working across a wide array of applied statistical and data science methods.
An accessible introduction to the theoretical and computational aspects of linear algebra using Maple(TM). Many topics in linear algebra can be computationally intensive, and software programs often serve as important tools for understanding challenging concepts and visualizing the geometric aspects of the subject. Principles of Linear Algebra with Maple uniquely addresses the quickly growing intersection between subject theory and numerical computation, providing all of the commands required to solve complex and computationally challenging linear algebra problems using Maple. The authors supply an informal, accessible, and easy-to-follow treatment of key topics often found in a first course in linear algebra. Requiring no prior knowledge of the software, the book begins with an introduction to the commands and programming guidelines for working with Maple. Next, the book explores linear systems of equations and matrices, applications of linear systems and matrices, determinants, inverses, and Cramer's rule. Basic linear algebra topics such as vectors, dot product, cross product, and vector projection are explained, as well as the more advanced topics of rotations in space, rolling a circle along a curve, and the TNB frame. Subsequent chapters feature coverage of linear transformations from R^n to R^m, the geometry of linear and affine transformations, least squares fits and pseudoinverses, and eigenvalues and eigenvectors. The authors explore several topics that are not often found in introductory linear algebra books, including sensitivity to error and the effects of linear and affine maps on the geometry of objects. The Maple software highlights the topic's visual nature, as the book is complete with numerous graphics in two and three dimensions, animations, symbolic manipulations, numerical computations, and programming. In addition, a related Web site features supplemental material, including Maple code for each chapter's problems, solutions, and color versions of the book's figures. Extensively class-tested to ensure an accessible presentation, Principles of Linear Algebra with Maple is an excellent book for courses on linear algebra at the undergraduate level. It is also an ideal reference for students and professionals who would like to gain a further understanding of the use of Maple to solve linear algebra problems.
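As one example of pairing theory with computation in the spirit of this book (though in Python/NumPy here rather than Maple), Cramer's rule solves Ax = b by ratios of determinants, which can be checked against a direct solver:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("matrix is (numerically) singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print(cramer_solve(A, b))     # [ 2.  3. -1.]
print(np.linalg.solve(A, b))  # agrees with the direct solver
```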
Now in its second edition, Text Analysis with R provides a practical introduction to computational text analysis using the open source programming language R. R is an extremely popular programming language, used throughout the sciences; due to its accessibility, it is now used increasingly in other research areas as well. In this volume, readers immediately begin working with text, and each chapter examines a new technique or process, allowing readers to obtain a broad exposure to core R procedures and a fundamental understanding of the possibilities of computational text analysis at both the micro and the macro scale. Each chapter builds on its predecessor as readers move from small-scale "microanalysis" of single texts to large-scale "macroanalysis" of text corpora, and each concludes with a set of practice exercises that reinforce and expand upon the chapter lessons. The book's focus is on making the technical palatable, useful, and immediately gratifying. Text Analysis with R is written with students and scholars of literature in mind but will be applicable to other humanists and social scientists wishing to extend their methodological toolkit to include quantitative and computational approaches to the study of text. Computation provides access to information in text that readers simply cannot gather using traditional qualitative methods of close reading and human synthesis. This new edition features two new chapters: one that introduces dplyr and tidyr in the context of parsing and analyzing dramatic texts to extract speaker and receiver data, and one on sentiment analysis using the syuzhet package. It is also filled with updated material in every chapter to integrate new developments in the field, current practices in R style, and the use of more efficient algorithms.
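The "microanalysis" of a single text typically begins with tokenization and word-frequency counts. The book works in R; the following is merely an analogous sketch of that first step in Python, on a tiny inline sample so the snippet stays self-contained:

```python
import re
from collections import Counter

def word_frequencies(text, top=5):
    """Tokenize a text and return its most frequent words."""
    tokens = re.findall(r"[a-z']+", text.lower())  # crude word tokenizer
    return Counter(tokens).most_common(top)

# In practice the text would be read from a file or corpus.
sample = ("Call me Ishmael. Some years ago, never mind how long precisely, "
          "having little or no money in my purse, I thought I would sail.")
for word, count in word_frequencies(sample):
    print(f"{word:>10} {count}")
```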
The ability to preserve electronic evidence is critical to presenting a solid case for civil litigation, as well as in criminal and regulatory investigations. Preserving Electronic Evidence for Trial provides everyone connected with digital forensics investigation and litigation with a clear and practical hands-on guide to the best practices in preserving electronic evidence. Corporate management personnel (legal & IT) and outside counsel need reliable processes for the litigation hold - identifying, locating, and preserving electronic evidence. Preserving Electronic Evidence for Trial provides the road map, showing you how to organize the digital evidence team before the crisis, not in the middle of litigation. This practice handbook by an internationally known digital forensics expert and an experienced litigator focuses on what corporate and litigation counsel as well as IT managers and forensic consultants need to know to communicate effectively about electronic evidence. You will find tips on how all your team members can get up to speed on each other's areas of specialization before a crisis arises. The result is a plan to effectively identify and pre-train the critical electronic-evidence team members. You will be ready to lead the team to success when a triggering event indicates that litigation is likely, by knowing what to ask in coordinating effectively with litigation counsel and forensic consultants throughout the litigation progress. Your team can also be ready for action in various business strategies, such as merger evaluation and non-litigation conflict resolution.
This book illustrates the potential for computer simulation in the study of modern slavery and worker abuse, and by extension in all social issues. It lays out a philosophy of how agent-based modelling can be used in the social sciences. In addressing modern slavery, Chesney considers precarious work that is vulnerable to abuse, like sweatshop labour and prostitution, and shows how agent modelling can be used to study, understand and fight abuse in these areas. He explores the philosophy, application and practice of agent modelling through the popular and free software NetLogo. This topical book is grounded in the technology needed to address the messy, chaotic, real-world problems that humanity faces, in this case the serious problem of abuse at work, but equally in the social sciences, which are needed to avoid the unintended consequences inherent in human responses. It includes a short but extensive NetLogo guide which readers can use to quickly learn this software and go on to develop complex models. This is an important book for students and researchers of computational social science and others interested in agent-based modelling.
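Agent-based models of the kind described run as a loop over discrete time steps ("ticks" in NetLogo), with each agent updating its own state according to local rules. A skeletal Python sketch of that pattern; the attributes and the update rule here are invented placeholders, not Chesney's model:

```python
import random

random.seed(1)

class Worker:
    def __init__(self):
        self.vulnerability = random.random()  # placeholder attribute
        self.abused = False

    def step(self, inspection_rate):
        # Invented toy rule: abuse risk rises with vulnerability,
        # and inspections reduce it.
        risk = self.vulnerability * (1 - inspection_rate)
        self.abused = random.random() < risk

workers = [Worker() for _ in range(1000)]
for tick in range(50):                        # NetLogo-style tick loop
    for w in workers:
        w.step(inspection_rate=0.4)

print(sum(w.abused for w in workers), "workers in an abusive situation")
```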
This Bayesian modeling book provides a self-contained entry to computational Bayesian statistics. Focusing on the most standard statistical models and backed up by real datasets and an all-inclusive R (CRAN) package called bayess, the book provides an operational methodology for conducting Bayesian inference, rather than focusing on its theoretical and philosophical justifications. Readers are empowered to participate in the real-life data analysis situations depicted here from the beginning: the stakes are high and the reader determines the outcome. Special attention is paid to the derivation of prior distributions in each case, and specific reference solutions are given for each of the models. Similarly, computational details are worked out to lead the reader towards an effective programming of the methods given in the book; in particular, all R codes are discussed with enough detail to make them readily understandable and expandable, working in conjunction with the bayess package. Bayesian Essentials with R can be used as a textbook at both undergraduate and graduate levels, as exemplified by courses given at Universite Paris Dauphine (France), the University of Canterbury (New Zealand), and the University of British Columbia (Canada). It is particularly useful for students in professional degree programs and for scientists who want to analyze data the Bayesian way, and it will also enhance introductory courses on Bayesian statistics. Prerequisites for the book are an undergraduate background in probability and statistics, if not in Bayesian statistics. A strength of the text is its noteworthy emphasis on the role of models in statistical analysis. This is the new, fully revised edition of Bayesian Core: A Practical Approach to Computational Bayesian Statistics. Jean-Michel Marin is Professor of Statistics at Universite Montpellier 2, France, and Head of the Mathematics and Modelling research unit. He has written over 40 papers on Bayesian methodology and computing and has worked closely with population geneticists over the past ten years. Christian Robert is Professor of Statistics at Universite Paris-Dauphine, France. He has written over 150 papers on Bayesian statistics and computational methods and is the author or co-author of seven books on those topics, including The Bayesian Choice (Springer, 2001), winner of the ISBA DeGroot Prize in 2004. He is a Fellow of the Institute of Mathematical Statistics, the Royal Statistical Society and the American Statistical Association. He has been co-editor of the Journal of the Royal Statistical Society, Series B, and has served on the editorial boards of the Journal of the American Statistical Association, the Annals of Statistics, Statistical Science, and Bayesian Analysis. He received an Erskine Fellowship from the University of Canterbury (NZ) in 2006 and is a senior member of the Institut Universitaire de France (2010-2015).
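The operational methodology such books teach pairs a model with a simulation algorithm. As a generic illustration of the computational side, a minimal Python sketch of a random-walk Metropolis sampler for the posterior of a normal mean; it is not drawn from the bayess package, and all numbers are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: n observations assumed N(mu, 1); prior on mu is N(0, 10^2)
data = rng.normal(loc=2.0, scale=1.0, size=30)

def log_posterior(mu):
    log_prior = -0.5 * (mu / 10.0) ** 2
    log_lik = -0.5 * np.sum((data - mu) ** 2)
    return log_prior + log_lik

mu, samples = 0.0, []
for _ in range(10_000):                 # random-walk Metropolis loop
    prop = mu + rng.normal(scale=0.5)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(mu):
        mu = prop                       # accept the proposal
    samples.append(mu)

print("posterior mean of mu ~", np.mean(samples[2000:]))  # discard burn-in
```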
With this practical guide, you'll learn how to understand the needs of external customers without requirements elicitation or sign-offs, the difference between customer and business value, and why you need to create both. You'll discover how to respond to changes in the market and to the actions of competitors. You'll learn how to develop new products, launch them into the market, and deliver business outcomes through the maturity and eventual retirement of your product.
Applied Computing in Medicine and Health is a comprehensive presentation of ongoing investigations into current applied computing challenges and advances, with a focus on a particular class of applications, primarily artificial intelligence methods and techniques in medicine and health. Applied computing is the use of practical computer science knowledge to bring the latest technology and techniques to bear in fields ranging from business to scientific research. One of the most important and relevant areas in applied computing is the use of artificial intelligence (AI) in health and medicine. Artificial intelligence in health and medicine (AIHM) is taking up the challenge of creating and distributing tools that can support medical doctors and specialists in new endeavors. The material included covers a wide variety of interdisciplinary perspectives concerning the theory and practice of applied computing in medicine, human biology, and health care. Particular attention is given to AI-based clinical decision-making, medical knowledge engineering, knowledge-based systems in medical education and research, intelligent medical information systems, intelligent databases, intelligent devices and instruments, medical AI tools, reasoning and metareasoning in medicine, and methodological, philosophical, ethical, and intelligent medical data analysis.
This book discusses a variety of methods for outlier ensembles and organizes them by the specific principles with which accuracy improvements are achieved. In addition, it covers the techniques with which such methods can be made more effective. A formal classification of these methods is provided, and the circumstances in which they work well are examined. The authors cover how outlier ensembles relate (both theoretically and practically) to the ensemble techniques used commonly for other data mining problems like classification. The similarities and (subtle) differences in the ensemble techniques for the classification and outlier detection problems are explored, and these subtle differences do impact the design of ensemble algorithms for the latter problem. This book can be used for courses in data mining and related curricula, and many illustrative examples and exercises are provided to facilitate classroom teaching. Familiarity with the outlier detection problem and with the generic problem of ensemble analysis in classification is assumed, because many of the ensemble methods discussed in this book are adaptations of their counterparts in the classification domain. Some techniques explained in this book, such as wagging, randomized feature weighting, and geometric subsampling, provide new insights that are not available elsewhere. Also included is an analysis of the performance of various types of base detectors and their relative effectiveness. The book is valuable for researchers and practitioners who want to leverage ensemble methods for optimal algorithmic design.
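To make the ensemble idea concrete: one common scheme scores each point with a base detector trained on a random subset of features, repeats this many times, and averages the standardized scores. A minimal Python sketch using the k-nearest-neighbour distance as the base detector (illustrative only; the book covers many variants, such as wagging and geometric subsampling):

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_scores(X, k=5):
    """Outlier score = distance to the k-th nearest neighbour."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]        # column 0 is distance to self

def ensemble_scores(X, rounds=25, k=5):
    n, p = X.shape
    total = np.zeros(n)
    for _ in range(rounds):
        feats = rng.choice(p, size=max(2, p // 2), replace=False)
        s = knn_scores(X[:, feats], k)
        total += (s - s.mean()) / s.std()  # standardize before averaging
    return total / rounds

# 200 inliers plus 3 planted outliers at indices 200-202
X = np.vstack([rng.normal(size=(200, 6)), rng.normal(loc=6, size=(3, 6))])
print(np.argsort(ensemble_scores(X))[-3:])  # indices of the top-3 outliers
```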
This book describes Python3 programming resources for implementing decision aiding algorithms in the context of a bipolar-valued outranking approach. These computing resources, made available under the name Digraph3, are useful in the field of Algorithmic Decision Theory and more specifically in outranking-based Multiple-Criteria Decision Aiding (MCDA). The first part of the book presents a set of tutorials introducing the Digraph3 collection of Python3 modules and its main objects, such as bipolar-valued digraphs and outranking digraphs. In eight methodological chapters, the second part illustrates multiple-criteria evaluation models and decision algorithms. These chapters are largely problem-oriented and demonstrate how to edit a new multiple-criteria performance tableau, how to build a best choice recommendation, how to compute the winner of an election and how to make rankings or ratings using incommensurable criteria. The book's third part presents three real-world decision case studies, while the fourth part addresses more advanced topics, such as computing ordinal correlations between bipolar-valued outranking digraphs, computing kernels in bipolar-valued digraphs, testing for confidence or stability of outranking statements when facing uncertain or solely ordinal criteria significance weights, and tempering plurality tyranny effects in social choice problems. The fifth and last part is more specifically focused on working with undirected graphs, tree graphs and forests. The closing chapter explores comparability, split, interval and permutation graphs. The book is primarily intended for graduate students in management sciences, computational statistics and operations research. The chapters presenting algorithms for ranking multicriteria performance records will be of computational interest for designers of web recommender systems. Similarly, the relative and absolute quantile-rating algorithms, discussed and illustrated in several chapters, will be of practical interest to public and private performance auditors.
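The central object, a bipolar-valued outranking relation, can be illustrated without the Digraph3 modules themselves. A conceptual Python sketch: for each ordered pair of alternatives, the weighted share of criteria on which one performs at least as well as the other is rescaled to [-1, 1]. This simplified concordance index ignores Digraph3's discrimination thresholds and veto mechanisms, and the tableau and weights are invented:

```python
# Hypothetical performance tableau: alternatives x criteria (higher is better)
performance = {
    "a1": [70, 40, 90],
    "a2": [60, 80, 70],
    "a3": [50, 60, 80],
}
weights = [2, 1, 1]  # assumed criteria significance weights

def outranking_index(x, y):
    """Bipolar concordance in [-1, 1]: +1 means x performs at least as
    well as y on all criteria weight, -1 means it does on none."""
    w_for = sum(w for gx, gy, w in zip(performance[x], performance[y], weights)
                if gx >= gy)
    return 2 * w_for / sum(weights) - 1

for x in performance:
    for y in performance:
        if x != y:
            print(f"r({x} outranks {y}) = {outranking_index(x, y):+.2f}")
```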
This book features selected papers presented at the 2nd International Conference on Advanced Computing Technologies and Applications, held at SVKM's Dwarkadas J. Sanghvi College of Engineering, Mumbai, India, from 28 to 29 February 2020. Covering recent advances in next-generation computing, the book focuses on recent developments in intelligent computing, such as linguistic computing, statistical computing, data computing and ambient applications.
By the end of this book, the reader will understand: the difficulties of finding a needle in a haystack; creative solutions to address the problem; unique ways of engineering features and solving the problem of the lack of data (e.g. synthetic data). Additionally, the reader will be able to: avoid mistakes resulting from a lack of understanding; search for appropriate methods of feature engineering; locate the relevant technological solutions within the general context of decision-making.
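One standard remedy for the lack-of-data problem mentioned above is to synthesize new minority-class examples by interpolating between real ones, the idea behind SMOTE. A minimal Python sketch of the interpolation step (illustrative only, not drawn from the book; the data here are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(minority, n_new, k=3):
    """Generate n_new points by interpolating each sampled point
    toward one of its k nearest minority-class neighbours."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        d = np.linalg.norm(minority - x, axis=1)
        j = rng.choice(np.argsort(d)[1:k + 1])  # skip the point itself
        out.append(x + rng.uniform() * (minority[j] - x))
    return np.array(out)

rare_cases = rng.normal(loc=5.0, size=(20, 4))  # hypothetical minority class
print(synthesize(rare_cases, n_new=5).shape)    # (5, 4)
```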
Praise for the first edition:

"[This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework."
- Statistics in Medicine

"What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters."
- MAA Reviews

Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce the examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems.

The book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation of state-space models. It also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models.

About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
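The Kalman filter at the heart of the state-space approach alternates a predict step and an update step. A minimal Python sketch for the simplest case, the local level (random walk plus noise) model x_t = x_{t-1} + w_t, y_t = x_t + v_t (illustrative only; the book's computations use the R package TSSS, and the data here are simulated):

```python
import numpy as np

def kalman_local_level(y, q=0.1, r=1.0):
    """Filtered state estimates for the local level model.
    q, r: variances of the system noise w_t and observation noise v_t."""
    x, p = 0.0, 1e6                  # diffuse initial state and variance
    filtered = []
    for obs in y:
        p = p + q                    # predict: variance grows by q
        k = p / (p + r)              # Kalman gain
        x = x + k * (obs - x)        # update with the innovation
        p = (1 - k) * p
        filtered.append(x)
    return np.array(filtered)

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(scale=0.3, size=200))  # hidden random walk
y = level + rng.normal(scale=1.0, size=200)         # noisy observations
est = kalman_local_level(y, q=0.09, r=1.0)
print(f"RMSE of filtered estimate: {np.sqrt(np.mean((est - level)**2)):.3f}")
```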
Nearly every large corporation and governmental agency is taking a fresh look at its current enterprise-scale business intelligence (BI) and data warehousing implementations at the dawn of the "Big Data Era" - and most see a critical need to revitalize their current capabilities. Whether the problem is the frustrating, business-impeding persistence of a long-standing "silos of data" problem; an over-reliance on static production reports at the expense of predictive analytics and other true business intelligence capabilities; a lack of progress in achieving the long-sought-after enterprise-wide "single version of the truth"; or all of the above, IT directors, strategists, and architects find that they need to go back to the drawing board and produce a brand-new BI/data warehousing roadmap to help move their enterprises from their current state to one where the promises of emerging technologies and a generation's worth of best practices can finally deliver high-impact, architecturally evolvable enterprise-scale business intelligence and data warehousing. Author Alan Simon, whose BI and data warehousing experience dates back to the late 1970s and who has personally delivered or led more than thirty enterprise-wide BI/data warehousing roadmap engagements since the mid-1990s, details a comprehensive step-by-step approach to building a best-practices-driven, multi-year roadmap in the quest for architecturally evolvable BI and data warehousing at the enterprise scale. Simon addresses the triad of technology, work processes, and organizational/human factors considerations in a manner that blends the visionary and the pragmatic.
Improve Your Analytical Skills

Incorporating the latest R packages as well as new case studies and applications, Using R and RStudio for Data Management, Statistical Analysis, and Graphics, Second Edition covers the aspects of R most often used by statistical analysts. New users of R will find the book's simple approach easy to understand, while more sophisticated users will appreciate the invaluable source of task-oriented information.

New to the Second Edition:
- The use of RStudio, which increases the productivity of R users and helps users avoid error-prone cut-and-paste workflows
- New chapter of case studies illustrating examples of useful data management tasks: reading complex files, making and annotating maps, "scraping" data from the web, mining text files, and generating dynamic graphics
- New chapter on special topics that describes key features, such as processing by group, and explores important areas of statistics, including Bayesian methods, propensity scores, and bootstrapping
- New chapter on simulation that includes examples of data generated from complex models and distributions
- A detailed discussion of the philosophy and use of the knitr and markdown packages for R
- New packages that extend the functionality of R and facilitate sophisticated analyses
- Reorganized and enhanced chapters on data input and output, data management, statistical and mathematical functions, programming, high-level graphics plots, and the customization of plots

Easily Find Your Desired Task

Conveniently organized by short, clear descriptive entries, this edition continues to show users how to easily perform an analytical task in R. Users can quickly find and implement the material they need through the extensive indexing, cross-referencing, and worked examples in the text. Datasets and code are available for download on a supplementary website.