* Targets readers with a background in programming who are interested in an introduction to, or refresher on, statistical hypothesis testing
* Uses Python throughout
* Allows the reader to dip into the book whenever needed rather than following a sequential path
Using the same accessible, hands-on approach as its best-selling predecessor, the Handbook of Univariate and Multivariate Data Analysis with IBM SPSS, Second Edition explains how to apply statistical tests to experimental findings, identify the assumptions underlying the tests, and interpret the findings. This second edition now covers more topics and has been updated with the SPSS statistical package for Windows.
New to the Second Edition:
* Three new chapters on multiple discriminant analysis, logistic regression, and canonical correlation
* New section on how to deal with missing data
* Coverage of tests of assumptions, such as linearity, outliers, normality, homogeneity of variance-covariance matrices, and multicollinearity
* Discussions of the calculation of Type I error and the procedure for testing statistical significance between two correlation coefficients obtained from two samples
* Expanded coverage of factor analysis, path analysis (test of the mediation hypothesis), and structural equation modeling
Suitable for both newcomers and seasoned researchers in the social sciences, the handbook offers a clear guide to selecting the right statistical test, executing a wide range of univariate and multivariate statistical tests via the Windows and syntax methods, and interpreting the output results. The SPSS syntax files used for executing the statistical tests can be found in the appendix. Data sets employed in the examples are available on the book's CRC Press web page.
Essential MATLAB for Engineers and Scientists, Eighth Edition provides a concise and balanced overview of MATLAB's functionality, covering both fundamentals and applications. The essentials are illustrated throughout, featuring complete coverage of the software's windows and menus. Program design and algorithm development are presented, along with many examples from a wide range of familiar scientific and engineering areas. This edition has been updated to include the latest MATLAB versions through 2021a. This is an ideal book for a first course on MATLAB, and it is equally well suited to an engineering problem-solving course that uses MATLAB.
The main focus of this book is on presenting advances in fuzzy statistics and on proposing a methodology for testing hypotheses in the fuzzy environment based on the estimation of fuzzy confidence intervals, a context in which not only the data but also the hypotheses are considered to be fuzzy. The proposed method for estimating these intervals is based on the likelihood method and employs the bootstrap technique. A new metric generalizing the signed distance measure is also developed. In turn, the book presents two conceptually diverse applications in which these intervals play a role: one is a novel methodology for evaluating linguistic questionnaires at the global and individual levels; the other is an extension of multi-way analysis of variance to the space of fuzzy sets. To illustrate these approaches, the book presents several empirical and simulation-based studies with synthetic and real data sets. In closing, it presents a coherent R package called "FuzzySTs" which covers all the previously mentioned concepts with full documentation and selected use cases. Given its scope, the book will be of interest to all researchers whose work involves advanced fuzzy statistical methods.
The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Economics and Business hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows the publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University) were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session with 30 contributions. Selected contributions have been drawn from the conference for this book. All contributions in this volume are peer-reviewed and share original research in Bayesian computation, application, and theory.
This book is a highly accessible guide to being a project manager (PM), particularly a project manager working in an IT field. The role is set out with reference to required skills, competencies and responsibilities. Tools, methods and techniques for project managers are covered, including Agile approaches; risk, issue and change management processes; and best practices for managing stakeholders and financial management.
With the advancement of statistical methodology inextricably linked to the use of computers, new methodological ideas must be translated into usable code and then numerically evaluated relative to competing procedures. In response to this, Statistical Computing in C++ and R concentrates on the writing of code rather than the development and study of numerical algorithms per se. The book discusses code development in C++ and R and the use of these symbiotic languages in unison. It emphasizes that each offers distinct features that, when used in tandem, can take code writing beyond what can be obtained from either language alone. The text begins with some basics of object-oriented languages, followed by a "boot-camp" on the use of C++ and R. The authors then discuss code development for the solution of specific computational problems that are relevant to statistics, including optimization, numerical linear algebra, and random number generation. Later chapters introduce abstract data structures (ADTs) and parallel computing concepts. The appendices cover R and UNIX Shell programming.
Features:
* Includes numerous student exercises ranging from elementary to challenging
* Integrates both C++ and R for the solution of statistical computing problems
* Uses C++ code in R and R functions in C++ programs
* Provides downloadable programs, available from the authors' website
The translation of a mathematical problem into its computational analog (or analogs) is a skill that must be learned, like any other, by actively solving relevant problems. The text reveals the basic principles of algorithmic thinking essential to the modern statistician as well as the fundamental skill of communicating with a computer through the use of the computer languages C++ and R. The book lays the foundation for original code development in a research environment.
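As a hedged illustration of what mixing the two languages can look like in practice (a minimal sketch using the widely available Rcpp package, which may differ from the interface the book itself teaches), a small C++ function can be compiled and called directly from an R session:

```r
# Minimal sketch: calling C++ from R via the Rcpp package
# (assumes Rcpp is installed; the book's own examples may use a different interface).
library(Rcpp)

cppFunction('
double sumC(NumericVector x) {
  // plain C++ loop over an R numeric vector
  double total = 0.0;
  for (int i = 0; i < x.size(); ++i) {
    total += x[i];
  }
  return total;
}')

sumC(c(1, 2, 3, 4))  # returns 10, computed by the compiled C++ function
```

The same pattern also runs in reverse: a compiled C++ program can embed R (for example through R's C API or the RInside project) when a statistical routine already exists in R.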
Compositional Data Analysis in Practice is a user-oriented practical guide to the analysis of data with the property of a constant sum, for example percentages adding up to 100%. Compositional data can give misleading results if regular statistical methods are applied, and are best analysed by first transforming them to logarithms of ratios. This book explains how this transformation affects the analysis, results and interpretation of this very special type of data. All aspects of compositional data analysis are considered: visualization, modelling, dimension-reduction, clustering and variable selection, with many examples in the fields of food science, archaeology, sociology and biochemistry, and a final chapter containing a complete case study using fatty acid compositions in ecology. The applicability of these methods extends to other fields such as linguistics, geochemistry, marketing, economics and finance.
R Software: The following repository contains data files and R scripts from the book: https://github.com/michaelgreenacre/CODAinPractice. The R package easyCODA, which accompanies this book, is available on CRAN -- note that you should have version 0.25 or higher. The latest version of the package will always be available on R-Forge and can be installed from R with this instruction: install.packages("easyCODA", repos="http://R-Forge.R-project.org").
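For readers who want a quick feel for the log-ratio idea before opening the book, here is a minimal sketch in base R; the composition and its part names are hypothetical, and the easyCODA package wraps such transformations in higher-level functions:

```r
# Minimal sketch of a centred log-ratio (CLR) transformation in base R.
# The three-part composition below (rows sum to 100%) is made up for illustration.
comp <- matrix(c(60, 30, 10,
                 20, 50, 30),
               nrow = 2, byrow = TRUE,
               dimnames = list(NULL, c("fat", "protein", "carbohydrate")))

# Divide each part by the row's geometric mean on the log scale,
# so that standard statistical methods can be applied to the transformed values.
clr <- sweep(log(comp), 1, rowMeans(log(comp)), "-")
clr
```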
A Criminologist's Guide to R: Crime by the Numbers introduces the programming language R and covers the necessary skills to conduct quantitative research in criminology. By the end of this book, a person without any prior programming experience will be able to take raw crime data, clean it, visualize it, present it using R Markdown, and convert it to a format ready for analysis. A Criminologist's Guide to R focuses on skills specific to criminology, such as spatial joins, mapping, and scraping data from PDFs; however, any social scientist looking for an introduction to R for data analysis will find it useful. Key Features:
* Introduction to RStudio, including how to change user preference settings
* Basic data exploration and cleaning - subsetting, loading data, regular expressions, aggregating data
* Graphing with ggplot2
* How to make maps (hotspot maps, choropleth maps, interactive maps)
* Webscraping and PDF scraping
* Project management - how to prepare for a project, how to decide which projects to do, best ways to collaborate with people, how to store your code (using git), and how to test your code
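As a hedged taste of the kind of workflow the book covers (the data frame and its columns below are hypothetical, not taken from the book), a few lines of R can aggregate crime records and graph the counts with ggplot2:

```r
# Minimal sketch: aggregate hypothetical crime records by year and plot with ggplot2.
library(ggplot2)

crimes <- data.frame(
  year    = c(2019, 2019, 2020, 2020, 2021),
  offense = c("burglary", "assault", "burglary", "assault", "burglary")
)

# Count recorded offenses per year (base R aggregation keeps the sketch dependency-light).
counts <- aggregate(offense ~ year, data = crimes, FUN = length)
names(counts)[2] <- "n"

ggplot(counts, aes(x = factor(year), y = n)) +
  geom_col() +
  labs(x = "Year", y = "Number of recorded offenses")
```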
There is no shortage of incentives to study and reduce poverty in our societies. Poverty is studied in economics and political science, and population surveys are an important source of information about it. The design and analysis of such surveys is principally a statistical subject, and the computer is essential for their data compilation and processing. Focusing on the European Union Statistics on Income and Living Conditions (EU-SILC), a program of annual national surveys which collect data related to poverty and social exclusion, Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R presents a set of statistical analyses pertinent to the general goals of EU-SILC. The contents of the volume are biased toward computing and statistics, with reduced attention to economics, political and other social sciences. The emphasis is on methods and procedures as opposed to results, because data from annual surveys made available since publication and in the near future will reduce the novelty of the data used and the results derived in this volume. The aim of this volume is not to propose specific methods of analysis, but to open up the analytical agenda and address those aspects of the key definitions in poverty assessment that entail nontrivial elements of arbitrariness. The presented methods do not exhaust the range of analyses suitable for EU-SILC, but will stimulate the search for new methods and the adaptation of established methods that cater to the identified purposes.
A "how to" guide for applying statistical methods to biomarker data analysis Presenting a solid foundation for the statistical methods that are used to analyze biomarker data, Analysis of Biomarker Data: A Practical Guide features preferred techniques for biomarker validation. The authors provide descriptions of select elementary statistical methods that are traditionally used to analyze biomarker data with a focus on the proper application of each method, including necessary assumptions, software recommendations, and proper interpretation of computer output. In addition, the book discusses frequently encountered challenges in analyzing biomarker data and how to deal with them, methods for the quality assessment of biomarkers, and biomarker study designs. Covering a broad range of statistical methods that have been used to analyze biomarker data in published research studies, Analysis of Biomarker Data: A Practical Guide also features: A greater emphasis on the application of methods as opposed to the underlying statistical and mathematical theory The use of SAS(R), R, and other software throughout to illustrate the presented calculations for each example Numerous exercises based on real-world data as well as solutions to the problems to aid in reader comprehension The principles of good research study design and the methods for assessing the quality of a newly proposed biomarker A companion website that includes a software appendix with multiple types of software and complete data sets from the book's examples Analysis of Biomarker Data: A Practical Guide is an ideal upper-undergraduate and graduate-level textbook for courses in the biological or environmental sciences. An excellent reference for statisticians who routinely analyze and interpret biomarker data, the book is also useful for researchers who wish to perform their own analyses of biomarker data, such as toxicologists, pharmacologists, epidemiologists, environmental and clinical laboratory scientists, and other professionals in the health and environmental sciences.
Enterprise Level Security 2: Advanced Topics in an Uncertain World follows on from the authors' first book on Enterprise Level Security (ELS), which covered the basic concepts of ELS and the discoveries made during the first eight years of its development. This second book discusses advanced topics and solutions, derived from 16 years of research, pilots, and operational trials in putting an enterprise system together. The chapters cover specific advanced topics derived from painful mistakes and numerous revisions of processes. The book covers many of the topics omitted from the first book, including multi-factor authentication, cloud key management, enterprise change management, entity veracity, homomorphic computing, device management, mobile ad hoc, big data, mediation, and several others. The ELS model of enterprise security is endorsed by the Secretary of the Air Force for Air Force computing systems and is a candidate for DoD systems under the Joint Information Environment Program. The book is intended for enterprise IT architecture developers, application developers, and IT security professionals. It takes a unique approach to end-to-end security and fills a niche in the market.
GitOps and Kubernetes introduces a radical idea: managing your infrastructure with the same Git pull requests you use to manage your codebase. In this in-depth tutorial, you'll learn to operate infrastructures based on powerful-but-complex technologies with the same Git version control tools most developers use daily. GitOps and Kubernetes is half reference, half practical tutorial for operating Kubernetes the GitOps way. Through fast-paced chapters, you'll unlock the benefits of GitOps for flexible configuration management, monitoring, robustness, and multi-environment support, and discover tricks and tips for managing secrets in the unique GitOps fashion. Key Features:
* Managing multiple environments with branching, namespaces, and configuration
* Access control with Git, Kubernetes, and pipelines
* Using Kubernetes with Argo CD, Jenkins X, and Flux
* Multi-step deployment strategies such as Blue-Green and Canary in a declarative GitOps model
For developers familiar with Continuous Delivery principles and the basics of Git and Kubernetes. About the technology: The tools to monitor and manage software delivery and deployment can be complex to set up and intimidating to learn. But with the GitOps method, you can manage your entire Kubernetes infrastructure with Git pull requests, giving you a single control interface and making it easy to assess and roll back changes! Billy Yuen, Alexander Matyushentsev, Todd Ekenstam, and Jesse Suen are principal engineers for the Intuit platform. They are widely recognized as industry leaders in GitOps for Kubernetes, having presented numerous related talks at industry conferences.
This book is a timely and critical introduction for those interested in what data science is (and isn't), and how it should be applied. The language is conversational and the content is accessible for readers without a quantitative or computational background; but, at the same time, it is also a practical overview of the field for the more technical readers. The overarching goal is to demystify the field and teach the reader how to develop an analytical mindset instead of following recipes. The book takes the scientist's approach of focusing on asking the right question at every step as this is the single most important factor contributing to the success of a data science project. Upon finishing this book, the reader should be asking more questions than I have answered. This book is, therefore, a practising scientist's approach to explaining data science through questions and examples.
This book puts military doctrine into a wider perspective, drawing on military history, philosophy, and political science. Military doctrines are institutional beliefs about what works in war; given the trauma of 9/11 and the ensuing 'War on Terror', serious divergences over what the message of the 'new' military doctrine ought to be were expected around the world. However, such questions are often drowned in ferocious meta-doctrinal disagreements. What is a doctrine, after all? This book provides a theoretical understanding of such questions. Divided into three parts, the author investigates the historical roots of military doctrine and explores its growth and expansion until the present day, and goes on to analyse the main characteristics of a military doctrine. Using a multidisciplinary approach, the book concludes that doctrine can be utilized in three key ways: as a tool of command, as a tool of change, and as a tool of education. This book will be of much interest to students of military studies, civil-military relations, strategic studies, and war studies, as well as to students in professional military education.
The SPSS Survival Manual throws a lifeline to students and researchers grappling with this powerful data analysis software. In her bestselling guide, Julie Pallant takes you through the entire research process, helping you choose the right data analysis technique for your project. This edition has been updated to cover SPSS versions up to version 26. From the formulation of research questions, to the design of the study and analysis of data, to reporting the results, Julie discusses basic and advanced statistical techniques. She outlines each technique clearly, with step-by-step procedures for performing the analysis, a detailed guide to interpreting the output and an example of how to present the results in a report. For both beginners and experienced users in Psychology, Sociology, Health Sciences, Medicine, Education, Business and related disciplines, the SPSS Survival Manual is an essential text. It is illustrated throughout with screen grabs, examples of output and tips, and is further supported by a website with sample data and guidelines on report writing. This seventh edition is fully revised and updated to accommodate changes to IBM SPSS procedures.
* Describes the entire data science procedure of how infectious disease data are collected, curated, visualized, and fed to predictive models, facilitating effective communication between data sources, scientists, and decision-makers.
* Describes practical concepts of infectious disease data and provides particular data science perspectives.
* Gives an overview of the unique features and issues of infectious disease data and how they impact epidemic modeling and projection.
* Introduces various classes of models and state-of-the-art learning methods to analyze infectious disease data, with valuable insights on how different models and methods could be connected.
Your comprehensive guide to using Xero. Keeping your business running smoothly has never been easier with Xero. You’re in good hands with Xero For Dummies, the only book endorsed by Xero. With the tips and tricks included in this helpful guide, you can easily tackle tasks like accounts payable, invoices, and estimates. It’s packed with easy-to-follow explanations and instructions on how to use this popular accounting software. It’s like having a personal accountant at your fingertips! The latest update to this useful reference shows how you can use Xero for more than a simple spreadsheet. It includes how to set up your account from scratch, convert your business from another accounting software package to Xero, and use Xero to its full potential. It includes these essential topics:
Filled with real-world scenarios that show how you can use Xero every day in your business, Xero For Dummies can help you get your paperwork done quickly, so you can spend your valuable time running your business. Pick up your copy of Xero For Dummies to make that your reality.
Inspired by the author's need for practical guidance in the processes of data analysis, "A Practical Guide to Scientific Data Analysis" has been written as a statistical companion for the working scientist. This handbook of data analysis with worked examples focuses on the application of mathematical and statistical techniques and the interpretation of their results. Covering the most common statistical methods for examining and exploring relationships in data, the text includes extensive examples from a variety of scientific disciplines. The chapters are organised logically, from planning an experiment, through examining and displaying the data, to constructing quantitative models. Each chapter is intended to stand alone so that casual users can refer to the section that is most appropriate to their problem. Written by a highly qualified and internationally respected author, this text:
* Presents statistics for the non-statistician
* Explains a variety of methods to extract information from data
* Describes the application of statistical methods to the design of "performance chemicals"
* Emphasises the application of statistical techniques and the interpretation of their results
Of practical use to chemists, biochemists, pharmacists, biologists and researchers from many other scientific disciplines in both industry and academia.
This book presents the statistical analysis of compositional data using the log-ratio approach. It includes a wide range of classical and robust statistical methods adapted for compositional data analysis, such as supervised and unsupervised methods like PCA, correlation analysis, classification and regression. In addition, it considers special data structures like high-dimensional compositions and compositional tables. The methodology introduced is also frequently compared to methods which ignore the specific nature of compositional data. The book focuses on practical aspects of compositional data analysis rather than on detailed theoretical derivations; thus, issues like graphical visualization and preprocessing (treatment of missing values, zeros, outliers and similar artifacts) form an important part of the book. Since it is primarily intended for researchers and students from applied fields like geochemistry, chemometrics, biology and natural sciences, economics, and social sciences, all the proposed methods are accompanied by worked-out examples in R using the package robCompositions.
Most biologists use nonlinear regression more than any other statistical technique, but there are very few places to learn about curve-fitting. This book, by the author of the very successful Intuitive Biostatistics, addresses this relatively focused need of an extraordinarily broad range of scientists.
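To make concrete what curve-fitting means in practice, here is a minimal, hedged sketch using R's built-in nls() function and made-up dose-response values; the book itself is not tied to any particular software package:

```r
# Minimal sketch of nonlinear curve-fitting (nonlinear regression) in R with nls().
# The dose-response values below are invented purely for illustration.
dose     <- c(0.1, 0.3, 1, 3, 10, 30)
response <- c(3, 8, 22, 60, 91, 98)

# Fit a sigmoidal (logistic) curve with R's self-starting model, so no manual
# starting values are needed.
fit <- nls(response ~ SSlogis(log(dose), Asym, xmid, scal))
summary(fit)  # estimated asymptote, midpoint, and scale of the fitted curve
```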
Computer science graduates often find that software engineering knowledge and skills are more in demand after they join the industry. However, given the lecture-based curricula common in academia, it is not an easy undertaking to deliver industry-standard knowledge and skills in a software engineering classroom, as such lectures hardly engage or convince students. Overcoming Challenges in Software Engineering Education: Delivering Non-Technical Knowledge and Skills combines recent advances and best practices to improve the curriculum of software engineering education. This book is an essential reference source for researchers and educators seeking to bridge the gap between industry expectations and what academia can provide in software engineering education.
This book is designed primarily for upper-level undergraduate and graduate students taking a course in multilevel modelling and/or statistical modelling with a large multilevel modelling component. The focus is on presenting the theory and practice of major multilevel modelling techniques in a variety of contexts, using Mplus as the software tool, and demonstrating the various functions available for these analyses in Mplus, which is widely used by researchers in various fields, including most of the social sciences. In particular, Mplus offers users a wide array of tools for latent variable modelling, including for multilevel data.
Microsoft Power BI is a data analytics and visualization tool powerful enough for the most demanding data scientists, but accessible enough for everyday use by anyone who needs to get more from data. The market has many books designed to train and equip professional data analysts to use Power BI, but few of them make this tool accessible to anyone who wants to get up to speed on their own. This streamlined intro to Power BI covers all the foundational aspects and features you need to go from "zero to hero" with data and visualizations. Whether you work with large, complex datasets or in Microsoft Excel, author Jeremey Arnold shows you how to teach yourself Power BI and use it confidently as a regular data analysis and reporting tool. You'll learn how to:
* Import, manipulate, visualize, and investigate data in Power BI
* Approach solutions for both self-service and enterprise BI
* Use Power BI in your organization's business intelligence strategy
* Produce effective reports and dashboards
* Create environments for sharing reports and managing data access with your team
* Determine the right solution for using Power BI offerings based on size, security, and computational needs
Multivariate Survival Analysis and Competing Risks introduces univariate survival analysis and extends it to the multivariate case. It covers competing risks and counting processes and provides many real-world examples, exercises, and R code. The text discusses survival data, survival distributions, frailty models, parametric methods, multivariate data and distributions, copulas, continuous failure, parametric likelihood inference, and non- and semi-parametric methods. There are many books covering survival analysis, but very few that cover the multivariate case in any depth. Written for a graduate-level audience in statistics/biostatistics, this book includes practical exercises and R code for the examples. The author is renowned for his clear writing style, and this book continues that trend. It is an excellent reference for graduate students and researchers looking for grounding in this burgeoning field of research.
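As a hedged illustration of the kind of R workflow such a text builds on (using the widely available survival package and its bundled lung dataset, not the book's own examples or code):

```r
# Minimal sketch: a univariate Kaplan-Meier fit with the survival package,
# the usual starting point before multivariate, frailty, and competing-risks models.
library(survival)

# The 'lung' data ship with the survival package: follow-up time, status, covariates.
km <- survfit(Surv(time, status) ~ sex, data = lung)
summary(km, times = c(180, 365))   # estimated survival at roughly 6 and 12 months
plot(km, xlab = "Days", ylab = "Estimated survival probability")
```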