Building the Agile Enterprise with Capabilities, Collaborations and Values, Second Edition covers advances that make technology more powerful and pervasive while, at the same time, improving the alignment of technology with business. Using numerous examples, illustrations, and case studies, Fred Cummins, an industry expert, author and former fellow with EDS and Hewlett-Packard, updates his first edition to incorporate the following industry developments:
- The ubiquitous use of the Internet along with intelligent, mobile devices, which have enabled everyone and everything to be connected anytime, anywhere
- The emergence of a "business architecture" discipline that has driven improvements in business design and transformation practices
- The development of CMMN (Case Management Model and Notation), which will provide automation to support the collaboration of knowledge workers and managers
- The development of VDML (Value Delivery Modeling Language), which supports modeling of business design from a management perspective
- The importance of "big data" management and analysis as a new source of insight into the evolution of the business and its ecosystem
- How the architecture of the agile enterprise and business modeling change enterprise governance, management and innovation
Building the Agile Enterprise with Capabilities, Collaborations and Values, Second Edition is a must-have reference for business leaders, CTOs, business architects, information systems architects and business process modeling professionals who wish to close the gap between strategic planning and business operations, as well as the gap between business and IT, and enhance the creation and delivery of business value.
The book covers computational statistics and its methodologies and applications for IoT devices. It details computational arithmetic and its influence on computational statistics, numerical algorithms in statistical application software, the basics of computer systems, statistical techniques, linear algebra and its role in optimization techniques, the evolution of optimization techniques, optimal utilization of computer resources, and the role of statistical graphics in data analysis. It also explores computational inference and the role of computer models in the design of experiments, Bayesian analysis, survival analysis and data mining in computational statistics.
This book provides a general introduction to Sequential Monte Carlo (SMC) methods, also known as particle filters. These methods have become a staple for the sequential analysis of data in such diverse fields as signal processing, epidemiology, machine learning, population ecology, quantitative finance, and robotics. The coverage is comprehensive, ranging from the underlying theory to computational implementation, methodology, and diverse applications in various areas of science. This is achieved by describing SMC algorithms as particular cases of a general framework, which involves concepts such as Feynman-Kac distributions, and tools such as importance sampling and resampling. This general framework is used consistently throughout the book. Extensive coverage is provided on sequential learning (filtering, smoothing) of state-space (hidden Markov) models, as this remains an important application of SMC methods. More recent applications, such as parameter estimation of these models (through e.g. particle Markov chain Monte Carlo techniques) and the simulation of challenging probability distributions (in e.g. Bayesian inference or rare-event problems), are also discussed. The book may be used either as a graduate text on Sequential Monte Carlo methods and state-space modeling, or as a general reference work on the area. Each chapter includes a set of exercises for self-study, a comprehensive bibliography, and a "Python corner," which discusses the practical implementation of the methods covered. In addition, the book comes with an open source Python library, which implements all the algorithms described in the book, and contains all the programs that were used to perform the numerical experiments.
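To give a flavour of the methods covered, the following is a minimal bootstrap particle filter for a toy linear-Gaussian state-space model, written from scratch in NumPy. It is a sketch of the general propagate-weight-resample pattern only; the model, its parameters, and the variable names are illustrative, not taken from the book or its accompanying library.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy state-space model (illustrative parameters):
    #   x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 1)
    T, N = 100, 1000
    x_true = np.zeros(T)
    for t in range(1, T):
        x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
    y = x_true + rng.normal(size=T)

    # Bootstrap particle filter: propagate, weight by the likelihood, resample.
    particles = rng.normal(size=N)
    filtering_means = []
    for t in range(T):
        particles = 0.9 * particles + rng.normal(size=N)   # propagate
        logw = -0.5 * (y[t] - particles) ** 2              # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()                                       # normalized importance weights
        filtering_means.append(np.sum(w * particles))
        particles = particles[rng.choice(N, size=N, p=w)]  # multinomial resampling

    print(filtering_means[-1], x_true[-1])                 # estimate vs. true state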
This book on statistical disclosure control presents the theory, applications and software implementation of the traditional approach to (micro)data anonymization, including data perturbation methods, disclosure risk, data utility, information loss and methods for simulating synthetic data. Introducing readers to the R packages sdcMicro and simPop, the book also features numerous examples and exercises with solutions, as well as case studies with real-world data, accompanied by the underlying R code to allow readers to reproduce all results. The demand for and volume of data from surveys, registers and other sources containing sensitive information on persons or enterprises have increased significantly over the last several years. At the same time, privacy protection principles and regulations have imposed restrictions on the access and use of individual data. Proper and secure microdata dissemination calls for the application of statistical disclosure control methods to the data before release. This book is intended for practitioners at statistical agencies and other national and international organizations that deal with confidential data. It will also be of interest to researchers working in statistical disclosure control and the health sciences.
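As a rough illustration of the kind of perturbation and risk checks the book formalizes, here is a hedged Python sketch (the book itself works with the R packages sdcMicro and simPop; the data, noise level, and quasi-identifier choice below are all hypothetical):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)

    # Hypothetical microdata: age is a quasi-identifier, income is sensitive.
    df = pd.DataFrame({
        "age": rng.integers(20, 65, size=200),
        "income": rng.lognormal(10, 0.5, size=200),
    })

    # Additive-noise perturbation: mask income with noise scaled to its spread.
    df["income_masked"] = df["income"] + rng.normal(0, 0.1 * df["income"].std(), size=len(df))

    # Crude utility check: how far has the mean drifted?
    print(df["income"].mean(), df["income_masked"].mean())

    # Crude disclosure-risk proxy: records whose quasi-identifier is sample-unique (k = 1).
    k = df.groupby("age")["age"].transform("size")
    print("sample-unique records:", int((k == 1).sum()))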
Stress Testing and Risk Integration in Banks provides a comprehensive view of risk management by means of the stress testing process. An introduction to multivariate time series modeling paves the way to scenario analysis, used to assess a bank's resilience against adverse macroeconomic conditions. Assets and liabilities are jointly studied to highlight the key issues that a risk manager needs to face. A multinational bank prototype is used throughout the book to dive into market, credit, and operational stress testing. Interest rate, liquidity and other major risks are also studied alongside these to outline how to implement a fully integrated risk management toolkit. Examples, business cases, and exercises worked in Matlab and R help readers develop their own models and methodologies.
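The scenario-analysis idea can be sketched in a few lines; the Python example below (the book's own examples are in Matlab and R) propagates an adverse macro shock through a hypothetical VAR(1) model and maps the path into a stressed loss rate via an assumed linear sensitivity. Every coefficient here is illustrative.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical VAR(1) for two macro factors (GDP growth, unemployment gap):
    #   z_t = A z_{t-1} + e_t
    A = np.array([[0.7, -0.2],
                  [0.1,  0.8]])
    z = np.array([-3.0, 2.0])       # adverse one-off shock (the stress scenario)

    paths = [z.copy()]
    for _ in range(12):             # propagate the scenario over 12 quarters
        z = A @ z + rng.normal(0, 0.1, size=2)
        paths.append(z.copy())

    # Assumed linear sensitivity: losses rise as GDP falls / unemployment rises.
    beta = np.array([-0.5, 0.8])
    losses = [max(0.0, float(beta @ p)) for p in paths]
    print("peak stressed loss rate:", max(losses))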
This book describes the basics of computer algebra and the Mathematica language, leading the reader toward an understanding of Mathematica that allows them to solve problems in physics, mathematics, and chemistry. Mathematica is the most widely used system for doing mathematical calculations by computer, including symbolic and numeric calculations and graphics. It is used in physics and other branches of science, in mathematics, in education, and in many other areas. Many important results in physics would never have been obtained without the wide use of computer algebra.
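For readers without Mathematica at hand, the flavour of computer algebra can be conveyed with Python's sympy library as a stand-in (the book's own examples use Mathematica syntax; D[...] and Integrate[...] play the roles that diff and integrate play here):

    import sympy as sp

    x = sp.symbols("x")

    # Symbolic differentiation and integration, the bread and butter of computer algebra.
    expr = sp.sin(x) * sp.exp(-x**2)
    print(sp.diff(expr, x))                                  # exact derivative
    print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))   # Gaussian integral: sqrt(pi)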
Matrix Algorithms in MATLAB focuses on MATLAB implementations of matrix algorithms. The MATLAB codes presented in the book have been tested on thousands of randomly generated matrices, and the notation follows the MATLAB style to ensure a smooth transition from formulation to code, with each code kept to within 100 lines for the sake of clarity. The book provides an overview and classification of the interrelations of various algorithms, as well as numerous examples demonstrating code usage and the properties of the presented algorithms. Despite the wide availability of computer programs for matrix computations, this continues to be an active area of research and development. New applications, new algorithms, and improvements to old algorithms are constantly emerging.
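As an illustration of the kind of compact matrix algorithm the book presents, here is a textbook Householder QR factorization, sketched in Python/NumPy rather than MATLAB and checked against the reconstruction Q R = M; it is a generic teaching version, not code from the book.

    import numpy as np

    def householder_qr(A):
        """QR factorization via Householder reflections (textbook version)."""
        A = A.astype(float).copy()
        m, n = A.shape
        Q = np.eye(m)
        for k in range(min(m, n)):
            x = A[k:, k]
            v = x.copy()
            sign = np.sign(x[0]) if x[0] != 0 else 1.0
            v[0] += sign * np.linalg.norm(x)
            v /= np.linalg.norm(v)
            H = np.eye(m)
            H[k:, k:] -= 2.0 * np.outer(v, v)   # reflector zeroing column k below the diagonal
            A = H @ A
            Q = Q @ H                           # accumulate the orthogonal factor
        return Q, A                             # A now holds R

    M = np.random.default_rng(3).normal(size=(5, 3))
    Q, R = householder_qr(M)
    print(np.allclose(Q @ R, M), np.allclose(Q.T @ Q, np.eye(5)))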
Since 1984, Geophysical Data Analysis has filled the need for a short, concise reference on inverse theory for individuals who have an intermediate background in science and mathematics. The new edition maintains the accessible and succinct manner for which it is known, with the addition of:
- MATLAB examples and problem sets
- Advanced color graphics
- Coverage of new topics, including adjoint methods; inversion by steepest descent, Monte Carlo and simulated annealing methods; and the bootstrap algorithm for determining empirical confidence intervals (see the sketch below)
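The bootstrap algorithm is simple enough to sketch in a few lines. Here is an illustrative Python version on hypothetical data (the book's examples are in MATLAB): resample with replacement, recompute the statistic, and read the empirical confidence interval off the percentiles.

    import numpy as np

    rng = np.random.default_rng(4)
    data = rng.lognormal(0, 1, size=200)        # hypothetical skewed sample

    # Bootstrap: resample with replacement and recompute the statistic each time.
    boot_means = np.array([
        rng.choice(data, size=data.size, replace=True).mean()
        for _ in range(5000)
    ])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean = {data.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")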
OCEB 2 Certification Guide, Second Edition has been updated to cover version 2 of the BPMN standard and delivers expert insight into BPM from one of the developers of the OCEB Fundamental exam, offering full coverage of the fundamental exam material for both the business and technical tracks to further certification. The first study guide prepares candidates to take, and pass, the OCEB Fundamental exam, explaining and building on basic concepts, focusing on key areas, and testing knowledge of all critical topics with sample questions and detailed answers. Suitable for practitioners and those newer to the field, this book provides a solid grounding in business process management based on the authors' own extensive BPM consulting experiences.
Genstat 5 Release 3 is a version of the statistical system developed by practising statisticians at Rothamsted Experimental Station. It provides statistical summary, analysis, data handling, and graphics for interactive or batch users, and includes a customizable menu-based interface. Genstat is used worldwide on personal computers, workstations, and mainframe computers by statisticians, research workers, and students in all fields of application of statistics. Release 3 contains many new facilities: the analysis of ordered categorical data; generalized additive models; combination of information in multi-stratum experimental designs; extensions to the REML (residual maximum-likelihood) algorithm for testing fixed effects and catering for correlation structures between random effects; estimation of parameters of statistical distributions; further probability functions; simplified data input; and many more extensions in high-resolution graphics, calculations, and manipulation. The manual has been rewritten for this release, including new chapters on basic statistics and REML, with extensive examples and illustrations. The text is suitable for users of Genstat 5, i.e. statisticians, research workers, and students.
This proceedings volume features top contributions in modern statistical methods from Statistics 2021 Canada, the 6th Annual Canadian Conference in Applied Statistics, held virtually on July 15-18, 2021. Papers are contributed by established and emerging scholars, covering cutting-edge and contemporary innovative techniques in statistics and data science. Major areas of contribution include Bayesian statistics; computational statistics; data science; semi-parametric regression; and stochastic methods in biology, crop science, ecology and engineering. It will be a valuable edited collection for graduate students, researchers, and practitioners across a wide array of applied statistical and data science methods.
With this practical guide, you'll learn how to understand the needs of external customers without requirements elicitation or sign-offs, the difference between customer and business value, and why you need to create both. You'll discover how to respond to changes in the market and the actions of competitors. You'll understand how to develop new products, launch them into the market, and deliver business outcomes through the maturity and eventual retirement of your product.
The ability to preserve electronic evidence is critical to presenting a solid case for civil litigation, as well as in criminal and regulatory investigations. Preserving Electronic Evidence for Trial provides everyone connected with digital forensics investigation and litigation with a clear and practical hands-on guide to best practices in preserving electronic evidence. Corporate management personnel (legal and IT) and outside counsel need reliable processes for the litigation hold: identifying, locating, and preserving electronic evidence. Preserving Electronic Evidence for Trial provides the road map, showing you how to organize the digital evidence team before the crisis, not in the middle of litigation. This practice handbook, by an internationally known digital forensics expert and an experienced litigator, focuses on what corporate and litigation counsel, as well as IT managers and forensic consultants, need to know to communicate effectively about electronic evidence. You will find tips on how all your team members can get up to speed on each other's areas of specialization before a crisis arises. The result is a plan to effectively identify and pre-train the critical electronic-evidence team members. You will be ready to lead the team to success when a triggering event indicates that litigation is likely, by knowing what to ask in coordinating effectively with litigation counsel and forensic consultants throughout the litigation process. Your team can also be ready for action in various business strategies, such as merger evaluation and non-litigation conflict resolution.
Praise for the first edition: "[This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework." (Statistics in Medicine) "What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters." (MAA Reviews) Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce the examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. The book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation in state-space models. It also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models. About the author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
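To illustrate the recursive estimation at the heart of the book, here is a minimal scalar Kalman filter for a local-level model, sketched in Python with illustrative parameters (the book's own computations use the R package TSSS):

    import numpy as np

    rng = np.random.default_rng(5)

    # Local-level model (illustrative parameters):
    #   state:       x_t = x_{t-1} + N(0, q)
    #   observation: y_t = x_t     + N(0, r)
    q, r = 0.1, 1.0
    x = np.cumsum(rng.normal(0, np.sqrt(q), size=100))
    y = x + rng.normal(0, np.sqrt(r), size=100)

    # Kalman filter recursions: predict, then update with each new observation.
    m, P = 0.0, 10.0            # diffuse-ish prior mean and variance
    filtered = []
    for yt in y:
        P = P + q               # predict
        K = P / (P + r)         # Kalman gain
        m = m + K * (yt - m)    # update mean
        P = (1 - K) * P         # update variance
        filtered.append(m)

    print(filtered[-1], x[-1])  # filtered estimate vs. true state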
This Bayesian modeling book provides a self-contained entry to computational Bayesian statistics. Focusing on the most standard statistical models and backed up by real datasets and an all-inclusive R (CRAN) package called bayess, the book provides an operational methodology for conducting Bayesian inference, rather than focusing on its theoretical and philosophical justifications. Readers are empowered to participate in the real-life data analysis situations depicted here from the beginning: the stakes are high and the reader determines the outcome. Special attention is paid to the derivation of prior distributions in each case, and specific reference solutions are given for each of the models. Similarly, computational details are worked out to lead the reader towards an effective programming of the methods given in the book, in conjunction with the bayess package. In particular, all R codes are discussed with enough detail to make them readily understandable and expandable. Bayesian Essentials with R can be used as a textbook at both undergraduate and graduate levels, as exemplified by courses given at Universite Paris Dauphine (France), the University of Canterbury (New Zealand), and the University of British Columbia (Canada). It is particularly useful for students in professional degree programs and for scientists who want to analyze data the Bayesian way. The text will also enhance introductory courses on Bayesian statistics. Prerequisites for the book are an undergraduate background in probability and statistics, if not in Bayesian statistics. A strength of the text is its noteworthy emphasis on the role of models in statistical analysis. This is the new, fully revised edition of the book Bayesian Core: A Practical Approach to Computational Bayesian Statistics. Jean-Michel Marin is Professor of Statistics at Universite Montpellier 2, France, and Head of the Mathematics and Modelling research unit. He has written over 40 papers on Bayesian methodology and computing, and has worked closely with population geneticists over the past ten years. Christian Robert is Professor of Statistics at Universite Paris-Dauphine, France. He has written over 150 papers on Bayesian statistics and computational methods and is the author or co-author of seven books on those topics, including The Bayesian Choice (Springer, 2001), winner of the ISBA DeGroot Prize in 2004. He is a Fellow of the Institute of Mathematical Statistics, the Royal Statistical Society and the American Statistical Association. He has been co-editor of the Journal of the Royal Statistical Society, Series B, and has served on the editorial boards of the Journal of the American Statistical Association, the Annals of Statistics, Statistical Science, and Bayesian Analysis. He is also a recipient of an Erskine Fellowship from the University of Canterbury (NZ) in 2006 and a senior member of the Institut Universitaire de France (2010-2015).
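As a small taste of operational Bayesian inference, the conjugate Beta-Binomial case can be computed in closed form. The sketch below uses Python and illustrative numbers rather than the book's bayess package:

    from scipy import stats

    # Prior Beta(1, 1); observe 13 successes in 50 trials (illustrative numbers).
    a, b = 1, 1
    successes, trials = 13, 50

    # Conjugacy: the posterior is Beta(a + successes, b + failures).
    posterior = stats.beta(a + successes, b + trials - successes)
    print("posterior mean:", posterior.mean())
    print("95% credible interval:", posterior.interval(0.95))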
Now in its second edition, Text Analysis with R provides a practical introduction to computational text analysis using the open source programming language R. R is an extremely popular programming language, used throughout the sciences; due to its accessibility, R is now used increasingly in other research areas. In this volume, readers immediately begin working with text, and each chapter examines a new technique or process, allowing readers to obtain broad exposure to core R procedures and a fundamental understanding of the possibilities of computational text analysis at both the micro and the macro scale. Each chapter builds on its predecessor as readers move from small-scale "microanalysis" of single texts to large-scale "macroanalysis" of text corpora, and each concludes with a set of practice exercises that reinforce and expand upon the chapter lessons. The book's focus is on making the technical palatable, useful, and immediately gratifying. Text Analysis with R is written with students and scholars of literature in mind but will be applicable to other humanists and social scientists wishing to extend their methodological toolkit to include quantitative and computational approaches to the study of text. Computation provides access to information in text that readers simply cannot gather using traditional qualitative methods of close reading and human synthesis. This new edition features two new chapters: one that introduces dplyr and tidyr in the context of parsing and analyzing dramatic texts to extract speaker and receiver data, and one on sentiment analysis using the syuzhet package. It is also filled with updated material in every chapter to integrate new developments in the field, current practices in R style, and the use of more efficient algorithms.
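The core microanalysis loop (tokenize, count, rank) fits in a few lines; here is an analogous sketch in Python, though the book itself works in R with dplyr, tidyr and syuzhet:

    import re
    from collections import Counter

    text = """It was the best of times, it was the worst of times,
    it was the age of wisdom, it was the age of foolishness"""   # any raw text

    # Tokenize, count, and rank by relative frequency.
    tokens = re.findall(r"[a-z']+", text.lower())
    freqs = Counter(tokens)
    for word, n in freqs.most_common(5):
        print(word, round(n / len(tokens), 3))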
This book discusses a variety of methods for outlier ensembles and organizes them by the specific principles with which accuracy improvements are achieved. In addition, it covers the techniques with which such methods can be made more effective. A formal classification of these methods is provided, and the circumstances in which they work well are examined. The authors cover how outlier ensembles relate (both theoretically and practically) to the ensemble techniques used commonly for other data mining problems like classification. The similarities and (subtle) differences in the ensemble techniques for the classification and outlier detection problems are explored, as these subtle differences do impact the design of ensemble algorithms for the latter problem. This book can be used for courses in data mining and related curricula, and many illustrative examples and exercises are provided to facilitate classroom teaching. Familiarity with the outlier detection problem and with the generic problem of ensemble analysis in classification is assumed, because many of the ensemble methods discussed in this book are adaptations of their counterparts in the classification domain. Some techniques explained in this book, such as wagging, randomized feature weighting, and geometric subsampling, provide new insights that are not available elsewhere. Also included is an analysis of the performance of various types of base detectors and their relative effectiveness. The book is valuable for researchers and practitioners who want to leverage ensemble methods for optimal algorithmic design.
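To make the ensemble idea concrete, here is a minimal Python sketch of score combination by feature subsampling: a simple k-nearest-neighbour distance detector is run on random feature subsets and the standardized scores are averaged. The data, subset size, and base detector are illustrative choices, not the book's prescribed method.

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.normal(size=(200, 10))
    X[:5] += 6.0                             # plant five obvious outliers

    def knn_score(Z, k=5):
        """Outlier score = distance to the k-th nearest neighbour."""
        D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        return np.sort(D, axis=1)[:, k]      # column 0 is the point itself

    # Ensemble by feature subsampling: no single noisy feature dominates.
    scores = np.zeros(len(X))
    for _ in range(20):
        cols = rng.choice(X.shape[1], size=5, replace=False)
        s = knn_score(X[:, cols])
        scores += (s - s.mean()) / s.std()   # standardize before combining
    print(np.argsort(scores)[-5:])           # indices of the top outliers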
This book illustrates the potential of computer simulation in the study of modern slavery and worker abuse, and by extension in all social issues. It lays out a philosophy of how agent-based modelling can be used in the social sciences. In addressing modern slavery, Chesney considers precarious work that is vulnerable to abuse, like sweatshop labour and prostitution, and shows how agent modelling can be used to study, understand and fight abuse in these areas. He explores the philosophy, application and practice of agent modelling through the popular and free software NetLogo. This topical book is grounded in the technology needed to address the messy, chaotic, real-world problems that humanity faces, in this case the serious problem of abuse at work, but equally in the social sciences, which are needed to avoid the unintended consequences inherent in human responses. It includes a short but extensive NetLogo guide which readers can use to quickly learn the software and go on to develop complex models. This is an important book for students and researchers of computational social science and others interested in agent-based modelling.
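Although the book teaches NetLogo, the basic shape of an agent-based model can be sketched in any language. The hypothetical Python model below spreads a rumour between neighbouring agents each tick, the kind of logic NetLogo expresses with turtles and ask blocks:

    import random

    random.seed(7)

    # 100 agents on a line; an informed agent tells each neighbour with probability p.
    N, p = 100, 0.3
    informed = [False] * N
    informed[0] = True

    for tick in range(50):
        snapshot = informed[:]               # synchronous update: read the old state
        for i in range(N):
            if snapshot[i]:
                for j in (i - 1, i + 1):
                    if 0 <= j < N and random.random() < p:
                        informed[j] = True

    print(sum(informed), "of", N, "agents informed after 50 ticks")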
Highly recommended by JASA, Technometrics, and other leading statistical journals, the first two editions of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Third Edition continues to lead readers step-by-step through the process of fitting LMMs. The third edition provides a comprehensive update of the available tools for fitting linear mixed-effects models in the newest versions of SAS, SPSS, R, Stata, and HLM. All examples have been updated, with a focus on new tools for visualization of results and interpretation. New conceptual and theoretical developments in mixed-effects modeling have been included, and there is a new chapter on power analysis for mixed-effects models. Features:
- Dedicates an entire chapter to the key theories underlying LMMs for clustered, longitudinal, and repeated measures data
- Provides descriptions, explanations, and examples of software code necessary to fit LMMs in SAS, SPSS, R, Stata, and HLM
- Contains detailed tables of estimates and results, allowing for easy comparisons across software procedures
- Presents step-by-step analyses of real-world data sets that arise from a variety of research settings and study designs, including hypothesis testing, interpretation of results, and model diagnostics
- Integrates software code in each chapter to compare the relative advantages and disadvantages of each package
- Supplemented by a website with software code, datasets, additional documents, and updates
Ideal for anyone who uses software for statistical modeling, this book eliminates the need to read multiple software-specific texts by covering the most popular software programs for fitting LMMs in one handy guide. The authors illustrate the models and methods through real-world examples that enable comparisons of model-fitting options and results across the software procedures.
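As a pointer to how such models look in code, here is a minimal random-intercept LMM fitted with Python's statsmodels on simulated clustered data; it is a stand-in illustration, since the book's own code covers SAS, SPSS, R, Stata, and HLM.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)

    # Simulated clustered data: 30 groups of 10, with a random intercept per group.
    g = np.repeat(np.arange(30), 10)
    u = rng.normal(0, 1.0, size=30)[g]              # group-level random effects
    x = rng.normal(size=g.size)
    y = 2.0 + 0.5 * x + u + rng.normal(0, 0.5, size=g.size)
    data = pd.DataFrame({"y": y, "x": x, "g": g})

    # Random-intercept LMM: fixed effect of x, random intercept for each group.
    result = smf.mixedlm("y ~ x", data, groups=data["g"]).fit()
    print(result.summary())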
You may like...
Elementary Theory of Groups and Group… | Paul Baginski, Benjamin Fine, … | Hardcover | R3,963 (Discovery Miles 39 630)
Understanding Group Behavior - Volume 1… | Erich H Witte, James H. Davis | Hardcover | R4,228 (Discovery Miles 42 280)