This innovative textbook presents material for a course on modern statistics that incorporates Python as a pedagogical and practical resource. Drawing on many years of teaching and conducting research in various applied and industrial settings, the authors have carefully tailored the text to provide an ideal balance of theory and practical applications. Numerous examples and case studies are incorporated throughout, and comprehensive Python applications are illustrated in detail. A custom Python package is available for download, allowing students to reproduce these examples and explore others. The first chapters of the text focus on analyzing variability, probability models, and distribution functions. Next, the authors introduce statistical inference and bootstrapping, followed by variability in several dimensions and regression models. The text then goes on to cover sampling for estimation of finite population quantities and time series analysis and prediction, concluding with two chapters on modern data analytic methods. Each chapter includes exercises, data sets, and applications to supplement learning. Modern Statistics: A Computer-Based Approach with Python is intended for a one- or two-semester advanced undergraduate or graduate course. Because of the foundational nature of the text, it can be combined with any program requiring data analysis in its curriculum, such as courses on data science, industrial statistics, physical and social sciences, and engineering. Researchers, practitioners, and data scientists will also find it to be a useful resource with the numerous applications and case studies that are included. A second, closely related textbook is titled Industrial Statistics: A Computer-Based Approach with Python. It covers topics such as statistical process control (including multivariate methods), the design of experiments (including computer experiments), and reliability methods (including Bayesian reliability).
These texts can be used independently or for consecutive courses. The mistat Python package can be accessed at https://gedeck.github.io/mistat-code-solutions/ModernStatistics/

"In this book on Modern Statistics, the last two chapters on modern analytic methods contain what is very popular at the moment, especially in Machine Learning, such as classifiers, clustering methods and text analytics. But I also appreciate the previous chapters since I believe that people using machine learning methods should be aware that they rely heavily on statistical ones. I very much appreciate the many worked out cases, based on the longstanding experience of the authors. They are very useful to better understand, and then apply, the methods presented in the book. The use of Python corresponds to the best programming experience nowadays. For all these reasons, I think the book has also a brilliant and impactful future and I commend the authors for that." Professor Fabrizio Ruggeri, Research Director at the National Research Council, Italy; President of the International Society for Business and Industrial Statistics (ISBIS); Editor-in-Chief of Applied Stochastic Models in Business and Industry (ASMBI)
This is the first textbook that allows readers who may be unfamiliar with matrices to understand a variety of multivariate analysis procedures in matrix forms. By explaining which models underlie particular procedures and what objective function is optimized to fit the model to the data, it enables readers to rapidly comprehend multivariate data analysis. Arranged so that readers can intuitively grasp the purposes for which multivariate analysis procedures are used, the book also offers clear explanations of those purposes, with numerical examples preceding the mathematical descriptions. Supporting the modern matrix formulations by highlighting singular value decomposition among theorems in matrix algebra, this book is useful for undergraduate students who have already learned introductory statistics, as well as for graduate students and researchers who are not familiar with matrix-intensive formulations of multivariate data analysis. The book begins by explaining fundamental matrix operations and the matrix expressions of elementary statistics. Then, it offers an introduction to popular multivariate procedures, with each chapter featuring increasingly advanced levels of matrix algebra. Further, the book includes six chapters on advanced procedures, covering advanced matrix operations and recently proposed multivariate procedures, such as sparse estimation, together with a clear explication of the differences between principal component and factor analysis solutions. In a nutshell, this book allows readers to gain an understanding of the latest developments in multivariate data science.
Studies of evolution at the molecular level have experienced phenomenal growth in the last few decades, due to rapid accumulation of genetic sequence data, improved computer hardware and software, and the development of sophisticated analytical methods. The flood of genomic data has generated an acute need for powerful statistical methods and efficient computational algorithms to enable their effective analysis and interpretation. Molecular Evolution: A Statistical Approach presents and explains modern statistical methods and computational algorithms for the comparative analysis of genetic sequence data in the fields of molecular evolution, molecular phylogenetics, statistical phylogeography, and comparative genomics. Written by an expert in the field, the book emphasizes conceptual understanding rather than mathematical proofs. The text is enlivened with numerous examples of real data analysis and numerical calculations to illustrate the theory, in addition to the working problems at the end of each chapter. The coverage of maximum likelihood and Bayesian methods is in particular up-to-date, comprehensive, and authoritative. This advanced textbook is aimed at graduate-level students and professional researchers (both empiricists and theoreticians) in the fields of bioinformatics and computational biology, statistical genomics, evolutionary biology, molecular systematics, and population genetics. It will also be of relevance and use to a wider audience of applied statisticians, mathematicians, and computer scientists working in computational biology.
This book presents theoretical modeling and numerical simulations applied to drive several applications towards Industrial Revolution 4.0 (IR 4.0). The topics discussed range from theoretical foundations to extensive simulations involving many efficient algorithms as well as various statistical techniques. This book is suitable for postgraduate students, researchers, and other scientists working in mathematics, statistics, and numerical modeling and simulation.
This book offers a systematic and rigorous treatment of continuous-time Markov decision processes, covering both theory and possible applications to queueing systems, epidemiology, finance, and other fields. Unlike most books on the subject, much attention is paid to problems with functional constraints and the realizability of strategies. Three major methods of investigation are presented, based on dynamic programming, linear programming, and reduction to discrete-time problems. Although the main focus is on models with total (discounted or undiscounted) cost criteria, models with average cost criteria and with impulsive controls are also discussed in depth. The book is self-contained. A separate chapter is devoted to Markov pure jump processes and the appendices collect the requisite background on real analysis and applied probability. All the statements in the main text are proved in detail. Researchers and graduate students in applied probability, operational research, statistics and engineering will find this monograph interesting, useful and valuable.
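Of the three methods of investigation named above, dynamic programming is the easiest to sketch in a few lines. As a purely illustrative example, not taken from the book (which treats the far richer continuous-time case), here is a minimal discounted-cost value iteration over a hypothetical two-state, two-action model; all names and numbers are invented for illustration:

```python
def value_iteration(states, actions, P, cost, discount=0.9, tol=1e-8):
    """Minimal discounted-cost value iteration for a finite MDP.

    P[s][a] is a dict {next_state: probability}; cost[s][a] is the
    one-step cost of taking action a in state s.  Returns the optimal
    value function as a dict mapping state -> value."""
    V = {s: 0.0 for s in states}
    while True:
        # Bellman update: best action minimizes cost-to-go
        V_new = {
            s: min(cost[s][a] + discount * sum(p * V[t] for t, p in P[s][a].items())
                   for a in actions)
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

# Toy two-state example with hypothetical costs and transitions
states, actions = [0, 1], ["stay", "switch"]
P = {0: {"stay": {0: 1.0}, "switch": {1: 1.0}},
     1: {"stay": {1: 1.0}, "switch": {0: 1.0}}}
cost = {0: {"stay": 1.0, "switch": 2.0},
        1: {"stay": 0.0, "switch": 0.5}}
V = value_iteration(states, actions, P, cost)
```

In this toy model the optimal policy is to switch out of state 0 (paying 2 once) rather than pay 1 forever, giving V[0] = 2 and V[1] = 0.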
This book chronicles the 10-year introduction of blended learning at a leading technological university with a longstanding tradition of technology-enabled teaching and learning and state-of-the-art infrastructure. Hence, both teachers and students were familiar with the idea of online courses. Despite this, the longitudinal experiment did not proceed as expected. Though there were few technical problems, it required behavioural changes from teachers and learners, thus unearthing a host of socio-technical issues, challenges, and conundrums. With the undercurrent of design ideals such as "tech for good", any industrial sector must examine whether digital platforms are credible substitutes or at best complementary. In this era of Industry 4.0, higher education, like any other industry, should not be about the creative destruction of what we value in universities, but their digital transformation. The book concludes with an agenda for large, repeatable Randomised Controlled Trials (RCTs) to validate digital platforms that could fulfil the aspirations of the key stakeholder groups (students, faculty, and regulators), as well as delving into the role of Massive Open Online Courses (MOOCs) as surrogates for "fees-free" higher education and whether the design of such a HiEd 4.0 platform is even a credible proposition. Specifically, the book examines data-driven evidence within a design-based research methodology to present the outcomes of the two alternative instructional designs evaluated: traditional lecturing and blended learning. Based on the research findings and statistical analysis, it concludes that the inexorable shift to online delivery of education must be guided by informed educational management and innovation.
Optimal State Estimation for Process Monitoring, Fault Diagnosis and Control presents various mechanistic model-based state estimators and data-driven model-based state estimators, with a special emphasis on their development and applications to process monitoring, fault diagnosis and control. The design and analysis of different state estimators are highlighted with a number of applications and case studies concerning various real chemical and biochemical processes. The book starts with the introduction of basic concepts, extending to classical methods and successively leading to advances in the field. The design and implementation of various classical and advanced state estimation methods to solve a wide variety of problems makes this book immensely useful for audiences working in different disciplines in academics, research and industry, in areas related to process monitoring, fault diagnosis, control and related disciplines.
Inverse problems such as imaging or parameter identification deal with the recovery of unknown quantities from indirect observations, connected via a model describing the underlying context. While traditionally inverse problems are formulated and investigated in a static setting, interest in time-dependence has grown significantly in a number of important applications over the last few years. Here, time-dependence affects a) the unknown function to be recovered, and/or b) the observed data, and/or c) the underlying process. Challenging applications in the field of imaging and parameter identification include techniques such as photoacoustic tomography, elastography, dynamic computerized or emission tomography, dynamic magnetic resonance imaging, super-resolution in image sequences and videos, health monitoring of elastic structures, optical flow problems and magnetic particle imaging, to name only a few. Such problems demand innovation in their mathematical description and analysis, as well as computational approaches for their solution.
This book provides a concise point of reference for the most commonly used regression methods. It begins with linear and nonlinear regression for normally distributed data, logistic regression for binomially distributed data, and Poisson regression and negative-binomial regression for count data. It then progresses to regression models that work with longitudinal and multi-level data structures. The volume is designed to guide the transition from classical to more advanced regression modeling, as well as to contribute to the rapid development of statistics and data science. With data and computing programs available to facilitate readers' learning experience, Statistical Regression Modeling promotes the applications of R in linear, nonlinear, longitudinal and multi-level regression. All included datasets, as well as the associated R programs using packages nlme and lme4 for multi-level regression, are detailed in Appendix A. This book will be valuable in graduate courses on applied regression, as well as for practitioners and researchers in the fields of data science, statistical analytics, public health, and related fields.
Quantitative Analysis and Modeling of Earth and Environmental Data: Space-Time and Spacetime Data Considerations introduces the notion of chronotopologic data analysis that offers a systematic, quantitative analysis of multi-sourced data and provides information about the spatial distribution and temporal dynamics of natural attributes (physical, biological, health, social). It includes models and techniques for handling data that may vary by space and/or time, and aims to improve understanding of the physical laws of change underlying the available numerical datasets, while taking into consideration the in-situ uncertainties and relevant measurement errors (conceptual, technical, computational). It considers the synthesis of scientific theory-based methods (stochastic modeling, modern geostatistics) and data-driven techniques (machine learning, artificial neural networks) so that their individual strengths are combined by acting symbiotically and complementing each other. The notions and methods presented in Quantitative Analysis and Modeling of Earth and Environmental Data: Space-Time and Spacetime Data Considerations cover a wide range of data in various forms and sources, including hard measurements, soft observations, secondary information and auxiliary variables (ground-level measurements, satellite observations, scientific instruments and records, protocols and surveys, empirical models and charts). Including real-world practical applications as well as practice exercises, this book is a comprehensive step-by-step tutorial of theory-based and data-driven techniques that will help students and researchers master data analysis and modeling in earth and environmental sciences (including environmental health and human exposure applications).
This book provides an accessible introduction and practical guidelines to apply asymmetric multidimensional scaling, cluster analysis, and related methods to asymmetric one-mode two-way and three-way asymmetric data. A major objective of this book is to present to applied researchers a set of methods and algorithms for graphical representation and clustering of asymmetric relationships. Data frequently concern measurements of asymmetric relationships between pairs of objects from a given set (e.g., subjects, variables, attributes,...), collected in one or more matrices. Examples abound in many different fields such as psychology, sociology, marketing research, and linguistics and more recently several applications have appeared in technological areas including cybernetics, air traffic control, robotics, and network analysis. The capabilities of the presented algorithms are illustrated by carefully chosen examples and supported by extensive data analyses. A review of the specialized statistical software available for the applications is also provided. This monograph is highly recommended to readers who need a complete and up-to-date reference on methods for asymmetric proximity data analysis.
Most transformations and large-scale change programs fail, but in a rapidly changing world change is becoming more and more critical for survival. The HERO Transformation Playbook is your step-by-step playbook of EXACTLY how to deliver successful transformations and large-scale change programs with the best chance of success using the HERO Transformation Framework: a clear method to help you design transformation for maximum enterprise value creation and then deliver the outcome in a repeatable fashion. We built our framework through trial and error, learning from our mistakes and successes and solving common issues we came across and pitfalls that we have seen time and again. We then spent many years honing the framework, removing the fluff, distilling the concepts until it contained everything you need to succeed in the challenging world of change. In this book we teach you everything we've learned - including all of the roles, processes, meetings, governance, and templates for you to follow and apply to your transformation today - so that you can crack the code of change and lead successful transformations on your own. The more successful transformations that are delivered, the better the world will be for everyone!
The Definitive Guide to Interwoven TeamSite is the first comprehensive book on this enterprise-level content management system, guiding the reader through the product architecture, key features, and a detailed implementation of the product by way of a case study involving a hypothetical financial services firm. Along the way, the authors share key development and deployment principles gained throughout their several years of working in enterprise CMS environments. Divided into five parts, the material is presented in much the same way one might expect when considering a large-scale CMS project: Introduction, Inception, Elaboration, Construction, and Transition. Each part introduces the concepts and TeamSite features readers will need to understand in order to carry out that aspect of the project. Complete with a working implementation and numerous visual guides, the authors painstakingly lay out the project process. Finally, an epilogue discusses future product releases.
Confidently shepherd your organization's implementation of Microsoft Dynamics 365 to a successful conclusion. In Mastering Microsoft Dynamics 365 Implementations, accomplished executive, project manager, and author Eric Newell delivers a holistic, step-by-step reference to implementing Microsoft's cloud-based ERP and CRM business applications. You'll find the detailed and concrete instructions you need to take your implementation project all the way to the finish line, on time and on budget. You'll learn: the precise steps to take, in the correct order, to bring your Dynamics 365 implementation to life; what to do before you begin the project, including identifying stakeholders and building your business case; how to deal with change management throughout the lifecycle of your project; and how to manage conference room pilots (CRPs) and what to expect during the sessions. Perfect for CIOs, technology VPs, CFOs, operations leaders, application directors, business analysts, ERP/CRM specialists, and project managers, Mastering Microsoft Dynamics 365 Implementations is an indispensable and practical reference for guiding your real-world Dynamics 365 implementation from planning to completion.
This adapted version of CBSD for the Fundamentals Series explores the characteristics of IT-driven business services, their requirements and how to gather the right requirements to improve the service lifecycle throughout design, development and maintenance until decommissioning. By understanding IT-driven business services and anchoring them in a service design statement (SDS), you will be able to accelerate the translation of the needs of the business to the delivery of IT-intensive business services.

Product overview
CBSD supports portfolio, programme and project management by identifying key questions and structuring the creative process of designing services. Insight into the CBSD approach to deriving an SDS is therefore a practical and powerful tool to help you:
- Promote a coherent design so that fundamental issues and requirements of needs are mapped, based on different perspectives between demand and supply;
- Gain insight into the dynamics between stakeholders within an enterprise;
- Reflect on and formulate a practical and realistic roadmap; and
- Guide the development, build, programme management and maintenance of IT-driven business services.
CBSD complements existing frameworks such as TOGAF(R), IT4IT, BiSL(R) Next and ITIL(R) by focusing on business architecture, a subject rarely discussed before designing an IT-intensive, complex business service.

Who should read this book
This book is intended for anyone responsible for designing and implementing IT-driven services or involved in their operation.
This includes:
- Internal and external service providers, such as service managers, contract managers, bid managers, lead architects and requirement analysts;
- Business, financial, sales, marketing and operations managers who are responsible for output and outcome;
- Sales and product managers who need to present and improve service offerings;
- Developers who need to develop new and improved services;
- Contract managers and those responsible for purchasing; and
- Consultants, strategists, business managers, business process owners, business architects, business information managers, chief information officers, information systems owners and information architects.
Collaborative Business Design: The Fundamentals is part of the Fundamentals Series.

Authors
Brian Johnson has published more than 30 books, including a dozen official titles in the IT Infrastructure Library (ITIL), all of which are used worldwide. He designed and led the programme for ITIL version 2. He has fulfilled many roles during his career, including vice president, chief architect, senior director and executive consultant. One of his current roles is chief architect at the ASL BiSL Foundation, which provides guidance on business information management to a wide range of public and private-sector businesses in the Benelux region. Brian is chief architect for the redesign of all guidance and is the author of new strategic publications. Leon-Paul de Rouw studied technical management and organisation sociology. He worked for several years as a consultant and researcher in the private sector. Since 2003, he has been a programme manager with the central government in the Netherlands. He is responsible for all types of projects and programmes that focus on business enabled by IT.
AI, Edge, and IoT Smart Agriculture integrates applications of IoT, edge computing, and data analytics for sustainable agricultural development and introduces Edge of Things-based data analytics and IoT for predicting crop, soil, and plant disease occurrence for improved sustainability and increased profitability. The book also addresses precision irrigation, precision horticulture, greenhouse IoT, livestock monitoring, IoT ecosystems for agriculture, mobile robots for precision agriculture, energy monitoring, storage management, and smart farming. The book provides an overarching focus on sustainable environment and sustainable economic development through smart and e-agriculture. Providing a medium for the exchange of expertise and inspiration, contributions from both smart agriculture and data mining researchers around the world provide foundational insights. The book provides practical application opportunities for the resolution of real-world problems, including contributions from the data mining, data analytics, Edge of Things, and cloud research communities working in the farming production sector. The book offers broad coverage of the concepts, themes, and instruments of this important and evolving area: IoT-based agriculture, Edge of Things and cloud-based farming, greenhouse IoT, mobile agriculture, sustainable agriculture, and big data analytics in agriculture toward smart farming.
The nonequilibrium behavior of nanoscopic and biological systems, which are typically strongly fluctuating, is a major focus of current research. Lately, much progress has been made in understanding such systems from a thermodynamic perspective. However, new theoretical challenges emerge when the fluctuating system is additionally subject to time delay, e.g. due to the presence of feedback loops. This thesis advances this young and vibrant research field in several directions. The first main contribution concerns the probabilistic description of time-delayed systems; e.g. by introducing a versatile approximation scheme for nonlinear delay systems. Second, it reveals that delay can induce intriguing thermodynamic properties such as anomalous (reversed) heat flow. More generally, the thesis shows how to treat the thermodynamics of non-Markovian systems by introducing auxiliary variables. It turns out that delayed feedback is inextricably linked to nonreciprocal coupling, information flow, and to net energy input on the fluctuating level.
This book provides a reference for people working in the design, development, and manufacturing of medical devices. While there are no statistical methods specifically intended for medical devices, there are methods that are commonly applied to various problems in the design, manufacturing, and quality control of medical devices. The aim of this book is not to turn everyone working in the medical device industries into mathematical statisticians; rather, the goal is to provide some help in thinking statistically, and knowing where to go to answer some fundamental questions, such as justifying a method used to qualify/validate equipment, or what information is necessary to support the choice of sample sizes. Some methods appear regularly in relation to medical devices: the assessment of receiver operating characteristic curves is fundamental to the development of diagnostic tests, and accelerated life testing is often critical for assessing the shelf life of medical device products. Likewise, sensitivity/specificity computations are necessary for in-vitro diagnostics, and Taguchi methods can be very useful for designing devices. Even notions of equivalence and noninferiority have different interpretations in the medical device field compared to pharmacokinetics. The book covers topics such as dynamic modeling, machine learning methods, equivalence testing, and experimental design. It is intended for those with no statistical experience, as well as those who are statistically knowledgeable, with the hope of providing some insight into what methods are likely to help provide a rationale for choices relating to data-gathering and analysis activities for medical devices.
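The sensitivity/specificity computations mentioned in the blurb above are straightforward to state. As a hedged illustration, not drawn from the book, here is a minimal Python sketch using hypothetical counts for an in-vitro diagnostic evaluation:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute diagnostic test sensitivity and specificity from counts.

    tp: true positives, fn: false negatives,
    tn: true negatives, fp: false positives.
    """
    sensitivity = tp / (tp + fn)  # fraction of diseased cases detected
    specificity = tn / (tn + fp)  # fraction of healthy cases correctly cleared
    return sensitivity, specificity

# Hypothetical counts: 100 diseased and 100 healthy samples
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=85, fp=15)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.90, specificity=0.85
```

The same two ratios underlie each point on a receiver operating characteristic curve, which plots sensitivity against 1 - specificity as the test threshold varies.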
This book introduces readers to various signal processing models that have been used in analyzing periodic data, and discusses the statistical and computational methods involved. Signal processing can broadly be considered to be the recovery of information from physical observations. The received signals are usually disturbed by thermal, electrical, atmospheric or intentional interferences, and due to their random nature, statistical techniques play an important role in their analysis. Statistics is also used in the formulation of appropriate models to describe the behavior of systems, the development of appropriate techniques for estimation of model parameters, and the assessment of model performance. Analyzing different real-world data sets to illustrate how different models can be used in practice, and highlighting open problems for future research, the book is a valuable resource for senior undergraduate and graduate students specializing in mathematics or statistics.
Uncertainties in Numerical Weather Prediction is a comprehensive work on the most current understandings of uncertainties and predictability in numerical simulations of the atmosphere. It provides general knowledge on all aspects of uncertainties in weather prediction models in a single, easy-to-use reference. The book illustrates particular uncertainties in observations and data assimilation, as well as the errors associated with numerical integration methods. Stochastic methods in parameterization of subgrid processes are also assessed, as are uncertainties associated with surface-atmosphere exchange, orographic flows and processes in the atmospheric boundary layer. Through a better understanding of the uncertainties to watch for, readers will be able to produce more precise and accurate forecasts. This is an essential work for anyone who wants to improve the accuracy of weather and climate forecasting, as well as for interested parties developing tools to enhance the quality of such forecasts.
This book shows how information theory, probability, statistics, mathematics and personal computers can be applied to the exploration of numbers and proportions in music. It brings the methods of scientific and quantitative thinking to questions like: What are the ways of encoding a message in music and how can we be sure of the correct decoding? How do claims of names hidden in the notes of a score stand up to scientific analysis? How many ways are there of obtaining proportions and are they due to chance? After thoroughly exploring the ways of encoding information in music, the ambiguities of numerical alphabets and the words to be found "hidden" in a score, the book presents a novel way of exploring the proportions in a composition with a purpose-built computer program and gives example results from the application of the techniques. These include information theory, combinatorics, probability, hypothesis testing, Monte Carlo simulation and Bayesian networks, presented in an easily understandable form including their development from ancient history through the life and times of J. S. Bach, making connections between science, philosophy, art, architecture, particle physics, calculating machines and artificial intelligence. For the practitioner the book points out the pitfalls of various psychological fallacies and biases and includes succinct points of guidance for anyone involved in this type of research. This book will be useful to anyone who intends to use a scientific approach to the humanities, particularly music, and will appeal to anyone who is interested in the intersection between the arts and science. With a foreword by Ruth Tatlow (Uppsala University), award-winning author of Bach's Numbers: Compositional Proportion and Significance and Bach and the Riddle of the Number Alphabet. "With this study Alan Shepherd opens a much-needed examination of the wide range of mathematical claims that have been made about J. S. Bach's music, offering both tools and methodological cautions with the potential to help clarify old problems." Daniel R. Melamed, Professor of Music in Musicology, Indiana University
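One of the questions the blurb above raises is whether observed proportions could be due to chance, and Monte Carlo simulation is one of the techniques it lists for answering it. As a purely illustrative sketch, not the book's purpose-built program (the function name and parameters are hypothetical), one can estimate how often a 1:2 proportion arises by chance among random movement lengths:

```python
import random

def chance_of_proportion(n_items=10, max_len=100, ratio=2, trials=10_000, seed=1):
    """Monte Carlo estimate of how often at least one pair among n_items
    random integer lengths (uniform on 1..max_len) stands in an exact
    1:ratio proportion purely by chance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        lengths = [rng.randint(1, max_len) for _ in range(n_items)]
        # A pair matches if the larger length is exactly ratio times the smaller
        if any(min(a, b) * ratio == max(a, b)
               for i, a in enumerate(lengths)
               for b in lengths[i + 1:]):
            hits += 1
    return hits / trials

p = chance_of_proportion()
print(f"Estimated probability of a chance 1:2 proportion: {p:.3f}")
```

A result well above zero would illustrate the book's caution: finding a "significant" proportion in a score is only meaningful once the baseline rate of chance occurrences is known.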
The rapid development of artificial intelligence technology in medical data analysis has led to the concept of radiomics. This book introduces the essential and latest technologies in radiomics, such as imaging segmentation, quantitative imaging feature extraction, and machine learning methods for model construction and performance evaluation, providing invaluable guidance for the researcher entering the field. It fully describes three key aspects of radiomic clinical practice: precision diagnosis, the therapeutic effect, and prognostic evaluation, which make radiomics a powerful tool in the clinical setting. This book is a very useful resource for scientists and computer engineers in machine learning and medical image analysis, scientists focusing on antineoplastic drugs, and radiologists, pathologists, oncologists, as well as surgeons wanting to understand radiomics and its potential in clinical practice.
An Introduction to Stata for Health Researchers, Fifth Edition updates this classic book that has become a standard reference for health researchers. As with previous editions, readers will learn to work effectively in Stata to perform data management, compute descriptive statistics, create meaningful graphs, fit regression models, and perform survival analysis. The fifth edition adds examples of performing power, precision, and sample-size analysis; working with Unicode characters; managing data with ICD-9 and ICD-10 codes; and creating customized tables. With many worked examples and downloadable datasets, this text is the ideal resource for hands-on learning, whether for students in a statistics course or for researchers in fields such as epidemiology, biostatistics, and public health who are learning to use Stata's tools for health research.
30th European Symposium on Computer Aided Chemical Engineering, Volume 47 contains the papers presented at the 30th European Symposium of Computer Aided Process Engineering (ESCAPE) event held in Milan, Italy, May 24-27, 2020. It is a valuable resource for chemical engineers, chemical process engineers, researchers in industry and academia, students, and consultants for chemical industries.