Statistical methods provide a logical, coherent framework in which data from experimental science can be analyzed. However, many researchers lack the statistical skills or resources that would allow them to explore their data to its full potential. Introduction to Data Analysis with R for Forensic Sciences minimizes theory and mathematics and focuses on the application and practice of statistics, giving researchers the skills to analyze the data arising from their research systematically. Using traditional techniques, and employing examples and tutorials with real experimental data, this book presents the critical information researchers need.
Focusing on forensic examples but useful for anyone working in a laboratory, this volume enables researchers to get the most out of their experiments by allowing them to cogently analyze the data they have collected, saving valuable time and effort.
Design for Health: Applications of Human Factors delves into critical and emergent issues in healthcare and patient safety and how the field of human factors and ergonomics play a role in this domain. The book uses the Design for X (DfX) methodology to discuss a wide range of contexts, technologies, and population dependent criteria (X's) that must be considered in the design of a safe and usable healthcare ecosystem. Each chapter discusses a specific topic (e.g., mHealth, medical devices, emergency response, global health, etc.), reviews the concept, and presents a case study that demonstrates how human factors techniques and principles are utilized for the design, evaluation or improvements to specific tools, devices, and technologies (Section 1), healthcare systems and environments (Section 2), and applications to special populations (Section 3). The book represents an essential resource for researchers in academia as well as practitioners in medical device industries, consumer IT, and hospital settings. It covers a range of topics from medication reconciliation to self-care to the artificial heart.
This second edition is an extensively revised and updated version of the book MATLAB (R) and Design Recipes for Earth Sciences. It aims to introduce students to the typical course followed by a data analysis project in earth sciences. A project usually involves searching relevant literature, reviewing and ranking published books and journal articles, extracting relevant information from the literature in the form of text, data, or graphs, searching and processing the relevant original data using MATLAB, and compiling and presenting the results as posters, abstracts, and oral presentations using graphics design software. The text of this book includes numerous examples on the use of internet resources, on the visualization of data with MATLAB, and on preparing scientific presentations. As with the book MATLAB Recipes for Earth Sciences - 4th Edition (2015), which demonstrates the use of statistical and numerical methods on earth science data, this book uses state-of-the-art software packages, including MATLAB and the Adobe Creative Suite, to process and present geoscientific information collected during the course of an earth science project. The book's supplementary electronic material (available online through the publisher's website) includes color versions of all figures, recipes with all the MATLAB commands featured in the book, the example data, exported MATLAB graphics, and screenshots of the most important steps involved in processing the graphics.
This book illustrates the potential for computer simulation in the study of modern slavery and worker abuse, and by extension in all social issues. It lays out a philosophy of how agent-based modelling can be used in the social sciences. In addressing modern slavery, Chesney considers precarious work that is vulnerable to abuse, such as sweatshop labour and prostitution, and shows how agent modelling can be used to study, understand and fight abuse in these areas. He explores the philosophy, application and practice of agent modelling through the popular and free software NetLogo. This topical book is grounded in the technology needed to address the messy, chaotic, real-world problems that humanity faces (in this case the serious problem of abuse at work), but equally in the social sciences, which are needed to avoid the unintended consequences inherent in human responses. It includes a short but extensive NetLogo guide which readers can use to quickly learn this software and go on to develop complex models. This is an important book for students and researchers of computational social science and others interested in agent-based modelling.
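NetLogo models are written in NetLogo's own language; as a language-neutral sketch of the agent-based idea the blurb describes, here is a minimal Python model of workers exposed to workplace abuse. Every name, rule, and parameter (vulnerability scores, the abuse threshold) is invented for illustration and is not taken from Chesney's book.

```python
import random

class Worker:
    """A single agent with a vulnerability score in [0, 1)."""
    def __init__(self, rng):
        self.vulnerability = rng.random()
        self.abused = False

def step(workers, abuse_threshold, rng):
    """One model tick: workers above the threshold risk abuse with
    probability equal to their vulnerability."""
    for w in workers:
        if w.vulnerability > abuse_threshold and rng.random() < w.vulnerability:
            w.abused = True

def run_model(n_workers=100, ticks=50, abuse_threshold=0.7, seed=42):
    """Run the toy model and return the share of workers ever abused."""
    rng = random.Random(seed)
    workers = [Worker(rng) for _ in range(n_workers)]
    for _ in range(ticks):
        step(workers, abuse_threshold, rng)
    return sum(w.abused for w in workers) / n_workers

if __name__ == "__main__":
    print(f"share of workers abused: {run_model():.2f}")
```

The point of such a model is not the numbers it emits but the experiments it enables: varying the threshold (an intervention) and observing the aggregate outcome, which is exactly the workflow NetLogo supports interactively.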
Interested in what SAP S/4HANA has to offer? Find out with this big-picture guide! Take a tour of SAP S/4HANA functionality for your key lines of business: finance, manufacturing, supply chain, sales, and more. Preview SAP S/4HANA's architecture, and discover your options for reporting, extensions, and adoption. With insights into the latest intelligent technologies, this is your all-in-one SAP S/4HANA starting point! In this book, you'll learn about: a. Functionality: See how SAP S/4HANA runs your business processes, from finance to supply chain management. Learn about new features for predictive accounting, manufacturing analytics, central procurement, and more. b. Key Capabilities: Make the most of your SAP S/4HANA system. Discover operational and enterprise-wide reporting, extensibility with SAP Cloud Platform and SAP API Business Hub, and new technologies for machine learning and AI. c. Deployment and Implementation: Set yourself up for a successful deployment. Evaluate your adoption options and consider your approach: new implementation, system conversion, or a selective data transition. Then take your first steps toward implementation with SAP Activate. Highlights include: 1) Finance 2) Manufacturing 3) Supply chain management 4) Sales, marketing, and commerce 5) Sourcing and procurement 6) Analytics and reporting 7) Industry solutions 8) Intelligent technologies 9) Architecture 10) Extensibility 11) Deployment
This book discusses enterprise hierarchies, which view a target system with varying degrees of abstraction. These requirement refinement hierarchies can be represented by goal models. It is important to verify that such hierarchies capture the same set of rationales and intentions and are in mutual agreement with the requirements of the system being designed. The book also explores how hierarchies manifest themselves in the real world by undertaking a data mining exercise and observing the interactions within an enterprise. The inherent sequence-agnostic property of goal models prevents requirement analysts from performing compliance checks in this phase as compliance rules are generally embedded with temporal information. The studies discussed here seek to extract finite state models corresponding to goal models with the help of model transformation. The i*ToNuSMV tool implements one such algorithm to perform model checking on i* models. In turn, the AFSR framework provides a new goal model nomenclature that associates semantics with individual goals. It also provides a reconciliation machinery that detects entailment or consistency conflicts within goal models and suggests corrective measures to resolve such conflicts. The authors also discuss how the goal maintenance problem can be mapped to the state-space search problem, and how A* search can be used to identify an optimal goal model configuration that is free from all conflicts. In conclusion, the authors discuss how the proposed research frameworks can be extended and applied in new research directions. The GRL2APK framework presents an initiative to develop mobile applications from goal models using reusable code component repositories.
This book brings together two major trends: data science and blockchains. It is one of the first books to systematically cover the analytics aspects of blockchains, with the goal of linking traditional data mining research communities with novel data sources. Data science and big data technologies can be considered cornerstones of the data-driven digital transformation of organizations and society. The concept of blockchain is predicted to enable and spark transformation on par with that associated with the invention of the Internet. Cryptocurrencies are the first successful use case of highly distributed blockchains, much as the World Wide Web was for the Internet. The book takes the reader through basic data exploration topics, proceeding systematically, method by method, through supervised and unsupervised learning approaches and information visualization techniques, all the way to understanding the blockchain data from the network science perspective. Chapters introduce the cryptocurrency blockchain data model and methods to explore it using structured query language, association rules, clustering, classification, visualization, and network science. Each chapter introduces basic concepts, presents examples with real cryptocurrency blockchain data and offers exercises and questions for further discussion. This approach is intended to serve as a good starting point for undergraduate and graduate students to learn data science topics using cryptocurrency blockchain examples. It is also aimed at researchers and analysts who already possess good analytical and data skills, but who do not yet have the specific knowledge to tackle analytic questions about blockchain transactions. Readers will deepen their knowledge of essential data science techniques in order to turn mere transactional information into social, economic, and business insights.
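The book applies these methods (SQL, association rules, clustering, and so on) to real chain data. As a stand-in for one of them, here is a toy one-dimensional k-means written in plain Python, clustering synthetic transaction amounts into "small" and "large" groups; the data and function are invented for illustration, not taken from the book.

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means: cluster numeric values (e.g. transaction amounts)."""
    # Spread the initial centers across the sorted data.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            # Assign each value to its nearest center.
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        # Move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

amounts = [0.1, 0.2, 0.15, 50.0, 52.0, 49.5]  # synthetic, not real chain data
print(kmeans_1d(amounts))
```

On this synthetic input the two centers converge near 0.15 and 50.5, separating micro-payments from large transfers — the same kind of behavioural segmentation the book pursues on genuine blockchain transactions.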
This book discusses quantum theory as the theory of random (Brownian) motion of small particles (electrons etc.) under external forces. Treating the Schrödinger equation as a complex-valued evolution equation and the Schrödinger function as a complex-valued evolution function, it presents important applications. Readers will learn about new mathematical methods (the theory of stochastic processes) for solving problems of quantum phenomena, and how to handle stochastic processes in analyzing physical phenomena.
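To give a flavour of the stochastic-process toolkit the blurb mentions (not the book's own derivation), here is a standard Euler-Maruyama simulation of an Ornstein-Uhlenbeck process: Brownian motion of a particle under a linear restoring force. All parameter names and values are illustrative.

```python
import math
import random

def simulate_ou(x0=1.0, theta=1.0, sigma=0.5, dt=1e-3, steps=5000, seed=0):
    """Euler-Maruyama simulation of dX = -theta*X dt + sigma dW:
    Brownian motion under an external restoring force."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x += -theta * x * dt + sigma * dw
        path.append(x)
    return path

if __name__ == "__main__":
    path = simulate_ou()
    print(f"final position after {len(path) - 1} steps: {path[-1]:.3f}")
```

With the noise switched off (sigma = 0) the scheme reduces to deterministic exponential decay toward the origin, which is a quick sanity check on the discretization.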
This contributed book focuses on major aspects of statistical quality control, shares insights into important new developments in the field, and adapts established statistical quality control methods for use in e.g. big data, network analysis and medical applications. The content is divided into two parts, the first of which mainly addresses statistical process control, also known as statistical process monitoring. In turn, the second part explores selected topics in statistical quality control, including measurement uncertainty analysis and data quality. The peer-reviewed contributions gathered here were originally presented at the 13th International Workshop on Intelligent Statistical Quality Control, ISQC 2019, held in Hong Kong on August 12-14, 2019. Taken together, they bridge the gap between theory and practice, making the book of interest to both practitioners and researchers in the field of statistical quality control.
Notable author Katsuhiko Ogata presents the only new book available to discuss, in sufficient detail, the MATLAB(R) materials needed to solve many analysis and design problems associated with control systems. Complements a large number of examples with in-depth explanations, encouraging complete understanding of the MATLAB approach to solving problems. Distills the large volume of MATLAB information available to focus on those materials needed to study analysis and design problems of deterministic, continuous-time control systems. Covers conventional control systems such as transient response, root locus, and frequency response analyses and designs; analysis and design problems associated with the state-space formulation of control systems; and useful MATLAB approaches to solving optimization problems. A useful self-study guide for practicing control engineers.
This Brief provides a roadmap for the R language and programming environment with signposts to further resources and documentation.
Adjustment Models in 3D Geomatics and Computational Geophysics: With MATLAB Examples, Volume Four introduces a complete package of theoretical and practical subjects in adjustment computations relating to Geomatics and geophysical applications, particularly photogrammetry, surveying, remote sensing, GIS, cartography, and geodesy. Supported by illustrating figures and solved examples with MATLAB codes, the book provides clear methods for processing 3D data for accurate and reliable results. Problems cover free net adjustment, adjustment with constraints, blunder detection, RANSAC, robust estimation, error propagation, 3D co-registration, image pose determination, and more.
In honor of professor and renowned statistician R. Dennis Cook, this festschrift explores his influential contributions to an array of statistical disciplines ranging from experimental design and population genetics, to statistical diagnostics and all areas of regression-related inference and analysis. Since the early 1990s, Prof. Cook has led the development of dimension reduction methodology in three distinct but related regression contexts: envelopes, sufficient dimension reduction (SDR), and regression graphics. In particular, he has made fundamental and pioneering contributions to SDR, inventing or co-inventing many popular dimension reduction methods, such as sliced average variance estimation, the minimum discrepancy approach, model-free variable selection, and sufficient dimension reduction subspaces. A prolific researcher and mentor, Prof. Cook is known for his ability to identify research problems in statistics that are both challenging and important, as well as his deep appreciation for the applied side of statistics. This collection of Prof. Cook's collaborators, colleagues, friends, and former students reflects the broad array of his contributions to the research and instructional arenas of statistics.
This book examines current topics and trends in strategic auditing, accounting and finance in digital transformation, from both a theoretical and a practical perspective. It covers areas such as internal control, corporate governance, enterprise risk management, sustainability and competition. The contributors to this volume emphasize how strategic approaches in this area help companies achieve their targets. The contributions illustrate how, by providing good governance, reliable financial reporting, and accountability, businesses can win a competitive advantage. It further discusses how new technological developments like artificial intelligence (AI), cybersystems, network technologies, financial mobility and smart applications will shape the future of accounting and auditing for firms.
This highly anticipated second edition features new chapters and sections, 225 new references, and comprehensive R software. In keeping with the previous edition, this book is about the art and science of data analysis and predictive modelling, which entails choosing and using multiple tools. Instead of presenting isolated techniques, this text emphasises problem-solving strategies that address the many issues arising when developing multi-variable models using real data and not standard textbook examples. Regression Modelling Strategies presents full-scale case studies of non-trivial data-sets instead of over-simplified illustrations of each method. These case studies use freely available R functions that make the multiple imputation, model building, validation and interpretation tasks described in the book relatively easy to do. Most of the methods in this text apply to all regression models, but special emphasis is given to multiple regression using generalised least squares for longitudinal data, the binary logistic model, models for ordinal responses, parametric survival regression models and the Cox semiparametric survival model. A new emphasis is given to the robust analysis of continuous dependent variables using ordinal regression. As in the first edition, this text is intended for Master's- or PhD-level graduate students who have had a general introductory probability and statistics course and who are well versed in ordinary multiple regression and intermediate algebra. The book will also serve as a reference for data analysts and statistical methodologists, as it contains an up-to-date survey and bibliography of modern statistical modelling techniques.
SPSS syntax is the command language used by SPSS to carry out all of its commands and functions. In this book, Jacqueline Collier introduces the use of syntax to those who have not used it before, or who are taking their first steps in using syntax. Without requiring any knowledge of programming, the text outlines: - how to become familiar with the syntax commands; - how to create and manage the SPSS journal and syntax files; - and how to use them throughout the data entry, management and analysis process. Collier covers all aspects of data management from data entry through to data analysis, including managing the errors and the error messages created by SPSS. Syntax commands are clearly explained and the value of syntax is demonstrated through examples. This book also supports the use of SPSS syntax alongside the usual button- and menu-driven graphical user interface (GUI), using the two methods together in a complementary way. The book is written in such a way as to enable you to pick and choose how much you rely on one method over the other, encouraging you to use them side-by-side, with a gradual increase in use of syntax as your knowledge, skills and confidence develop. This book is ideal for all those carrying out quantitative research in the health and social sciences who can benefit from SPSS syntax's capacity to save time, reduce errors and allow a data audit trail.
A comprehensive guide to automated statistical data cleaning. The production of clean data is a complex and time-consuming process that requires both technical know-how and statistical expertise. Statistical Data Cleaning brings together a wide range of techniques for cleaning textual, numeric or categorical data. This book examines technical data cleaning methods relating to data representation and data structure. A prominent role is given to statistical data validation, data cleaning based on predefined restrictions, and data cleaning strategy. Key features: Focuses on the automation of data cleaning methods, including both theory and applications written in R. Enables the reader to design data cleaning processes for either one-off analytical purposes or for setting up production systems that clean data on a regular basis. Explores statistical techniques for solving issues such as incompleteness, contradictions and outliers, integration of data cleaning components and quality monitoring. Supported by an accompanying website featuring data and R code. This book enables data scientists and statistical analysts working with data to deepen their understanding of data cleaning as well as to upgrade their practical data cleaning skills. It can also be used as material for a course in data cleaning and analyses.
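The book's own material is written in R; as a schematic illustration in Python of what "data cleaning based on predefined restrictions" means, the sketch below checks records against a set of named edit rules. The rules and record are invented examples, not taken from the book.

```python
def validate(record, rules):
    """Return the names of the rules a record violates.

    Each rule is a named predicate: it takes the record and
    returns True when the record satisfies the restriction.
    """
    return [name for name, check in rules.items() if not check(record)]

# Predefined restrictions (hypothetical examples):
rules = {
    "age_nonnegative": lambda r: r["age"] >= 0,
    "turnover_eq_profit_plus_cost": lambda r: r["turnover"] == r["profit"] + r["cost"],
}

record = {"age": -3, "turnover": 100, "profit": 60, "cost": 40}
print(validate(record, rules))  # the age restriction fails
```

Automated cleaning systems of the kind the book describes build on exactly this idea: violations are detected mechanically, and a repair strategy then decides which field to impute or correct.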
This book chronicles a 10-year introduction of blended learning into the delivery at a leading technological university with a longstanding tradition of technology-enabled teaching and learning and state-of-the-art infrastructure. Hence, both teachers and students were familiar with the idea of online courses. Despite this, the longitudinal experiment did not proceed as expected. Though there were few technical problems, it required behavioural changes from teachers and learners, thus unearthing a host of socio-technical issues, challenges, and conundrums. With the undercurrent of design ideals such as "tech for good", any industrial sector must examine whether digital platforms are credible substitutes or at best complementary. In this era of Industry 4.0, higher education, like any other industry, should not be about the creative destruction of what we value in universities, but about their digital transformation. The book concludes with an agenda for large, repeatable Randomised Controlled Trials (RCTs) to validate digital platforms that could fulfil the aspirations of the key stakeholder groups (students, faculty, and regulators), as well as delving into the role of Massive Open Online Courses (MOOCs) as surrogates for "fees-free" higher education and whether the design of such a HiEd 4.0 platform is even a credible proposition. Specifically, the book examines the data-driven evidence within a design-based research methodology to present outcomes of two alternative instructional designs evaluated: traditional lecturing and blended learning. Based on the research findings and statistical analysis, it concludes that the inexorable shift to online delivery of education must be guided by informed educational management and innovation.
Calling all SAP Business One users! Your must-have handbook is here. Now updated for SAP Business One 10.0, this bestselling guide has the expertise you need to keep your business running smoothly. Whether you're a new hire or a super user, get step-by-step instructions for your core processes, from purchasing and manufacturing to sales and financials. Master the tools and transactions that keep you focused on business outcomes and improved KPIs. This book is what you've been waiting for: the key to doing your job better in SAP Business One. Highlights include: 1) Administration 2) Financials and banking 3) Sales and purchasing 4) Inventory management 5) Resource management 6) Production and MRP 7) Human resources 8) Project management 9) Reporting and analytics 10) Mobile 11) SAP HANA and SQL versions 12) Cloud and on-premise systems
Inverse problems such as imaging or parameter identification deal with the recovery of unknown quantities from indirect observations, connected via a model describing the underlying context. While traditionally inverse problems are formulated and investigated in a static setting, the last few years have seen a significant increase of interest in time-dependence across a growing number of important applications. Here, time-dependence affects a) the unknown function to be recovered, and/or b) the observed data, and/or c) the underlying process. Challenging applications in the field of imaging and parameter identification include techniques such as photoacoustic tomography, elastography, dynamic computerized or emission tomography, dynamic magnetic resonance imaging, super-resolution in image sequences and videos, health monitoring of elastic structures, optical flow problems, and magnetic particle imaging, to name only a few. Such problems demand innovation in their mathematical description and analysis, as well as in computational approaches to their solution.
This open access book examines the implications of internal crowdsourcing (IC) in companies. Presenting an employee-oriented, cross-sector reference model for good IC practice, it discusses the core theoretical foundations, and offers guidelines for process-management and blueprints for the implementation of IC. Furthermore, it examines solutions for employee training and competence development based on crowdsourcing. As such, the book will appeal to scholars of management science, work studies, organizational and participation research and to readers interested in inclusive approaches for cooperative change management and the IT implications for IC platforms.
The investigation of the role of mechanical and mechano-chemical interactions in cellular processes and tissue development is a rapidly growing research field in the life sciences and in biomedical engineering. Quantitative understanding of this important area in the study of biological systems requires the development of adequate mathematical models for the simulation of the evolution of these systems in space and time. Since expertise in various fields is necessary, this calls for a multidisciplinary approach. This edited volume connects basic physical, biological, and physiological concepts to methods for the mathematical modeling of various materials by pursuing a multiscale approach, from subcellular to organ and system level. Written by active researchers, each chapter provides a detailed introduction to a given field, illustrates various approaches to creating models, and explores recent advances and future research perspectives. Topics covered include molecular dynamics simulations of lipid membranes, phenomenological continuum mechanics of tissue growth, and translational cardiovascular modeling. Modeling Biomaterials will be a valuable resource for both non-specialists and experienced researchers from various domains of science, such as applied mathematics, biophysics, computational physiology, and medicine.
This book shows how information theory, probability, statistics, mathematics and personal computers can be applied to the exploration of numbers and proportions in music. It brings the methods of scientific and quantitative thinking to questions like: What are the ways of encoding a message in music and how can we be sure of the correct decoding? How do claims of names hidden in the notes of a score stand up to scientific analysis? How many ways are there of obtaining proportions and are they due to chance? After thoroughly exploring the ways of encoding information in music, the ambiguities of numerical alphabets and the words to be found "hidden" in a score, the book presents a novel way of exploring the proportions in a composition with a purpose-built computer program and gives example results from the application of the techniques. These include information theory, combinatorics, probability, hypothesis testing, Monte Carlo simulation and Bayesian networks, presented in an easily understandable form including their development from ancient history through the life and times of J. S. Bach, making connections between science, philosophy, art, architecture, particle physics, calculating machines and artificial intelligence. For the practitioner the book points out the pitfalls of various psychological fallacies and biases and includes succinct points of guidance for anyone involved in this type of research. This book will be useful to anyone who intends to use a scientific approach to the humanities, particularly music, and will appeal to anyone interested in the intersection between the arts and science. With a foreword by Ruth Tatlow (Uppsala University), award-winning author of Bach's Numbers: Compositional Proportion and Significance and Bach and the Riddle of the Number Alphabet. "With this study Alan Shepherd opens a much-needed examination of the wide range of mathematical claims that have been made about J. S. Bach's music, offering both tools and methodological cautions with the potential to help clarify old problems." Daniel R. Melamed, Professor of Music in Musicology, Indiana University
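The "numerical alphabets" the blurb refers to assign each letter its position in the alphabet and sum the values of a word; historically, Latin alphabets often merged I/J and U/V into a 24-letter sequence, which is one source of the ambiguity the book examines. A small sketch (not the book's program, whose details are not given here) makes the idea and the ambiguity concrete:

```python
def number_alphabet_value(word, merge_ij_uv=True):
    """Sum letter values under a natural-order number alphabet (A=1, B=2, ...).

    With merge_ij_uv=True the historical 24-letter Latin variant is used,
    in which I/J share one value and U/V share another.
    """
    word = word.upper()
    if merge_ij_uv:
        word = word.replace("J", "I").replace("V", "U")
        letters = "ABCDEFGHIKLMNOPQRSTUWXYZ"  # 24 letters: no J, no V
    else:
        letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    # Characters outside the alphabet (spaces, punctuation) are skipped.
    return sum(letters.index(c) + 1 for c in word if c in letters)

print(number_alphabet_value("BACH"))      # B=2, A=1, C=3, H=8 -> 14
print(number_alphabet_value("J S BACH"))  # I/J=9, S=18, plus 14 -> 41
```

The well-known values 14 for BACH and 41 for J. S. BACH drop out immediately; the book's contribution is asking how probable such coincidences are by chance, which is where the hypothesis testing and Monte Carlo machinery comes in.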