The richly illustrated Interactive Web-Based Data Visualization with R, plotly, and shiny focuses on the process of programming interactive web graphics for multidimensional data analysis. It is written for the data analyst who wants to leverage the capabilities of interactive web graphics without having to learn web programming. Through many R code examples, you will learn how to tap the extensive functionality of these tools to enhance the presentation and exploration of data. By mastering these concepts and tools, you will impress your colleagues with your ability to quickly generate more informative, engaging, and reproducible interactive graphics using free and open source software that you can share over email, export to PDF, and more. Key Features: * Convert static ggplot2 graphics to an interactive web-based form * Link, animate, and arrange multiple plots in standalone HTML from R * Embed, modify, and respond to plotly graphics in a shiny app * Learn best practices for visualizing continuous, discrete, and multivariate data * Learn numerous ways to visualize geo-spatial data This book makes heavy use of plotly for graphical rendering, but you will also learn about other R packages that support different phases of a data science workflow, such as tidyr, dplyr, and the wider tidyverse. Along the way, you will gain insight into best practices for visualization of high-dimensional data, statistical graphics, and graphical perception. The printed book is complemented by an interactive website where readers can view movies demonstrating the examples and interact with graphics.
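As a taste of the workflow the book teaches, a minimal sketch (assuming the ggplot2 and plotly packages are installed; the built-in mtcars data and the plot itself are illustrative, not examples from the book):

    library(ggplot2)
    library(plotly)

    # An ordinary static ggplot2 scatterplot
    p <- ggplot(mtcars, aes(x = wt, y = mpg, color = factor(cyl))) +
      geom_point()

    # ggplotly() converts it to an interactive web-based graphic
    # with tooltips, zooming, and panning
    ggplotly(p)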
This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the links and interplay between ostensibly diverse techniques.
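One of the classic general-purpose techniques in this literature is rejection sampling. The following base-R sketch is illustrative only, not taken from the book: it draws independent samples from a Beta(2,2) target using a uniform proposal.

    # Target density f(x) = 6x(1-x) on (0,1); envelope constant M >= max f = 1.5
    f <- function(x) 6 * x * (1 - x)
    M <- 1.5

    rejection_sample <- function(n) {
      out <- numeric(0)
      while (length(out) < n) {
        x <- runif(n)                      # propose from g = Uniform(0,1)
        u <- runif(n)                      # uniforms for the accept/reject test
        out <- c(out, x[u <= f(x) / M])    # accept when u <= f(x) / (M * g(x))
      }
      out[1:n]
    }

    samples <- rejection_sample(10000)     # independent draws from Beta(2,2)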
Financial, Macro and Micro Econometrics Using R, Volume 42, provides state-of-the-art information on important topics in econometrics, including multivariate GARCH, stochastic frontiers, fractional responses, specification testing and model selection, exogeneity testing, causal analysis and forecasting, GMM models, asset bubbles and crises, corporate investments, classification, nonstandard problems, cointegration, and financial market jumps and co-jumps, among other topics.
Interactive Graphics for Data Analysis: Principles and Examples discusses exploratory data analysis (EDA) and how interactive graphical methods can help gain insights as well as generate new questions and hypotheses from datasets. Fundamentals of Interactive Statistical Graphics: The first part of the book summarizes principles and methodology, demonstrating how the different graphical representations of variables of a dataset are effectively used in an interactive setting. The authors introduce the most important plots and their interactive controls. They also examine various types of data, relations between variables, and plot ensembles. Case Studies Illustrate the Principles: The second section focuses on nine case studies. Each case study describes the background, lists the main goals of the analysis and the variables in the dataset, shows what further numerical procedures can add to the graphical analysis, and summarizes important findings. Wherever applicable, the authors also provide the numerical analysis for datasets found in Cox and Snell's landmark book. Understand How to Analyze Data through Graphical Means: This full-color text shows that interactive graphical methods complement the traditional statistical toolbox to achieve more complete, easier to understand, and easier to interpret analyses.
This book focuses on computer-intensive statistical methods, such as validation, model selection, and the bootstrap, that help solve problems that could not previously be addressed by methods such as regression and time series modelling in the areas of economics, meteorology, and transportation.
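As a flavor of one such computer-intensive method, a nonparametric bootstrap of a sample mean takes only a few lines of base R (an illustrative sketch, not an example from the book):

    set.seed(1)
    x <- rnorm(50, mean = 10, sd = 2)        # observed data
    # Resample with replacement 2000 times and recompute the mean each time
    boot_means <- replicate(2000, mean(sample(x, replace = TRUE)))
    quantile(boot_means, c(0.025, 0.975))    # percentile bootstrap 95% interval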
Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression, Poisson regression, Bayesian regression, robust regression, fuzzy regression, random coefficients regression, L1 and q-quantile regression, regression in a spatial domain, ridge regression, semiparametric regression, nonlinear least squares, and time-series regression issues. For most of the regression methods, the author includes SAS procedure code, enabling readers to promptly perform their own regression runs. A Comprehensive, Accessible Source on Regression Methodology and Modeling: Requiring only basic knowledge of statistics and calculus, this book discusses how to use regression analysis for decision making and problem solving. It shows readers the power and diversity of regression techniques without overwhelming them with calculations.
Thoroughly revised and updated, The Art of Modeling in Science and Engineering with Mathematica®, Second Edition explores the mathematical tools and procedures used in modeling based on the laws of conservation of mass, energy, momentum, and electrical charge. The authors have culled and consolidated the best from the first edition and expanded the range of applied examples to reach a wider audience. The text proceeds, in measured steps, from simple models of real-world problems at the algebraic and ordinary differential equations (ODE) levels to more sophisticated models requiring partial differential equations. The traditional solution methods are supplemented with Mathematica, which is used throughout the text to arrive at solutions for many of the problems presented. The text is enlivened with a host of illustrations and practice problems drawn from classical and contemporary sources. They range from Thomson's famous experiment to determine e/m and Euler's model for the buckling of a strut to an analysis of the propagation of emissions and the performance of wind turbines. The mathematical tools required are first explained in separate chapters and then carried along throughout the text to solve and analyze the models. Commentaries at the end of each illustration draw attention to the pitfalls to be avoided and, perhaps most important, alert the reader to unexpected results that defy conventional wisdom. These features and more make the book the perfect tool for resolving three common difficulties: the proper choice of model, the absence of precise solutions, and the need to make suitable simplifying assumptions and approximations. The book covers a wide range of physical processes and phenomena drawn from various disciplines and clearly illuminates the link between the physical system being modeled and the mathematical expression that results.
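The book's worked solutions use Mathematica. Purely to illustrate the kind of ODE-level balance model it starts from, here is a one-equation sketch in R; the deSolve package is assumed here and has no connection to the book:

    library(deSolve)                        # assumes the deSolve package is installed
    # Exponential decay dy/dt = -k * y, a minimal conservation-style balance
    decay <- function(t, y, parms) list(-parms["k"] * y)
    out <- ode(y = c(y = 1), times = seq(0, 5, 0.1), func = decay, parms = c(k = 0.7))
    head(out)                               # numerical solution of y(t) = exp(-0.7 t)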
"If mathematical modeling is the process of turning real phenomena into mathematical abstractions, then numerical computation is largely about the transformation from abstract mathematics to concrete reality. Many science and engineering disciplines have long benefited from the tremendous value of the correspondence between quantitative information and mathematical manipulation." -from the Preface Fundamentals of Numerical Computation is an advanced undergraduate-level introduction to the mathematics and use of algorithms for the fundamental problems of numerical computation: linear algebra, finding roots, approximating data and functions, and solving differential equations. The book is organized with simpler methods in the first half and more advanced methods in the second half, allowing use for either a single course or a sequence of two courses. The authors take readers from basic to advanced methods, illustrating them with over 200 self-contained MATLAB functions and examples designed for those with no prior MATLAB experience. Although the text provides many examples, exercises, and illustrations, the aim of the authors is not to provide a cookbook per se, but rather an exploration of the principles of cooking. Professors Driscoll and Braun have developed an online resource that includes well-tested materials related to every chapter. Among these materials are lecture-related slides and videos, ideas for student projects, laboratory exercises, computational examples and scripts, and all the functions presented in the book.
Your team is stressed; priorities are unclear. You're not sure what your teammates are working on, and management isn't helping. If your team is struggling with any of these symptoms, these four case studies will guide you to project success. See how Kanban was used to significantly improve time to market and to create a shared focus across marketing, IT, and operations. Each case study comes with illustrations of the Kanban board and diagrams and graphs to help you see behind the scenes. Learn a Lean approach by seeing how Kanban made a difference in four real-world situations. You'll explore how four different teams used Kanban to make paradigm-changing improvements in software development. These teams were struggling with overwork, unclear priorities, and lack of direction. As you discover what worked for them, you'll understand how to make significant changes in real situations.The four case studies in this book explain how to: * Improve the full value chain by using Enterprise Kanban * Boost engagement, teamwork, and flow in change management and operations * Save a derailing project with Kanban * Help an office team outside IT keep up with growth using Kanban What seems easy in theory can become tangled in practice. Discover why "improving IT" can make you miss your biggest improvement opportunities, and why you should focus on fixing quality and front-end operations before IT. Discover how to keep long-term focus and improve across department borders while dealing with everyday challenges. Find out what happened when using Kanban to find better ways to do work in a well-established company, including running multi-team development without a project office. You'll inspire your team and engage management to make it easier to develop better products. What You Need: This is a case study book, so there are no software requirements. The book covers the relevant bits of theory before presenting the case studies.
The main focus of this book is on presenting advances in fuzzy statistics, and on proposing a methodology for testing hypotheses in the fuzzy environment based on the estimation of fuzzy confidence intervals, a context in which not only the data but also the hypotheses are considered to be fuzzy. The proposed method for estimating these intervals is based on the likelihood method and employs the bootstrap technique. A new metric generalizing the signed distance measure is also developed. In turn, the book presents two conceptually diverse applications in which such intervals play a role: one is a novel methodology for evaluating linguistic questionnaires developed at the global and individual levels; the other is an extension of the multi-way analysis of variance to the space of fuzzy sets. To illustrate these approaches, the book presents several empirical and simulation-based studies with synthetic and real data sets. In closing, it presents a coherent R package called "FuzzySTs" which covers all the previously mentioned concepts with full documentation and selected use cases. Given its scope, the book will be of interest to all researchers whose work involves advanced fuzzy statistical methods.
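The package named in the description can be installed in the usual way, assuming FuzzySTs is distributed on CRAN (the description does not say so explicitly):

    install.packages("FuzzySTs")   # assumes the package is available on CRAN
    library(FuzzySTs)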
Without question, statistics is one of the most challenging courses for students in the social and behavioral sciences. Enrolling in their first statistics course, students are often apprehensive or extremely anxious toward the subject matter. And while IBM SPSS is one of the more easy-to-use statistical software programs available, for anxious students who realize they not only have to learn statistics but also new software, the task can seem insurmountable. Keenly aware of students' anxiety with statistics (and the fact that this anxiety can affect performance), Ronald D. Yockey has written SPSS Demystified: A Simple Guide and Reference, now in its fourth edition. Through a comprehensive, step-by-step approach, this text is consistently and specifically designed to both alleviate anxiety toward the subject matter and build a successful experience analyzing data in SPSS. Topics covered in the text are appropriate for most introductory and intermediate statistics and research methods courses. Key features of the text: * Step-by-step instruction and screenshots * Designed to be hands-on, with the user performing the analyses on their own computer as they read through each chapter * Call-out boxes highlighting important information as appropriate * SPSS output explained, with written results provided in the popular, widely recognized APA format * End-of-chapter exercises, allowing for additional practice * SPSS datasets available on the publisher's website New to the Fourth Edition: * Fully updated to SPSS 28 * Updated screenshots in full color to reflect changes in the SPSS software system (version 28) * Exercises updated with up-to-date examples * Exact p-values provided (consistent with APA recommendations)
The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Economics and Business hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows the publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University) were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session with 30 contributions. Selected contributions have been drawn from the conference for this book. All contributions in this volume are peer-reviewed and share original research in Bayesian computation, application, and theory.
All scheduling software is difficult to learn for a number of reasons. None have the optimal settings when installed, and templates, views and default options need to be adjusted to obtain the best possible performance. Usually the Help files do not connect the user to real-life situations and do not explain the practical use of functions. Furthermore, there are many flags and switches with obscure names, making it difficult to understand what they do or which are important. These issues make learning the software very difficult without a comprehensive guide written by an experienced user. Investing in a book written by Paul E Harris will address all these issues and allow you to set up the software properly and understand all the obscure functions, letting you become productive more quickly and enhance your career opportunities and salary with a solid understanding of the software. Microsoft® Project 2021 is a minor update of Microsoft® Project 2019, and therefore this book covers versions 2013, 2016, 2019, 2021 and 365. This book is aimed at showing project management professionals how to use the software in a project environment. It is an update of the author's previous book, "Planning and Scheduling Using Microsoft® Project 2013, 2016 and 2019". It has revised workshops and includes the new functions of Microsoft Project 2021. This publication was written so it may be used as: * A training manual, or * A self-teach book, or * A user guide. The book stays focused on the information required to create and update a schedule with or without resources using Microsoft® Project by: * Concentrating on the core functions required to plan and control a project. * Keeping the information relevant to each topic in the appropriate chapter. * Providing a quick reference at the start of each chapter listing the chapter topics. * Providing a comprehensive index of all topics. The book is aimed at: * Project managers and schedulers who wish to learn the software but are unable to attend a training course or require a reference book. * Project management companies in industries such as building, construction, oil & gas, software development, government and defence who wish to run their own software training courses or provide their employees with a good practical guide to using the software. * Training organizations who require a training manual to run their own courses. This book is written by an experienced scheduler who has used the software at the sharp end of projects and is not a "techo". It draws on the author's practical experience in using the software in a wide variety of industries. It presents workable solutions to real day-to-day planning and scheduling problems and contains practical advice on how to set up the software and import data.
Most transformations and large-scale change programs fail, but in a rapidly changing world change is becoming more and more critical for survival. The HERO Transformation Playbook is your step-by-step playbook of EXACTLY how to deliver successful transformations and large-scale change programs with the best chance of success using the HERO Transformation Framework: a clear method to help you design transformation for maximum enterprise value creation and then deliver the outcome in a repeatable fashion. We built our framework through trial and error, learning from our mistakes and successes and solving common issues we came across and pitfalls that we have seen time and again. We then spent many years honing the framework, removing the fluff, distilling the concepts until it contained everything you need to succeed in the challenging world of change. In this book we teach you everything we've learned - including all of the roles, processes, meetings, governance, and templates for you to follow and apply to your transformation today - so that you can crack the code of change and lead successful transformations on your own. The more successful transformations that are delivered, the better the world will be for everyone!
Compositional Data Analysis in Practice is a user-oriented practical guide to the analysis of data with the property of a constant sum, for example percentages adding up to 100%. Compositional data can give misleading results if regular statistical methods are applied, and are best analysed by first transforming them to logarithms of ratios. This book explains how this transformation affects the analysis, results and interpretation of this very special type of data. All aspects of compositional data analysis are considered: visualization, modelling, dimension-reduction, clustering and variable selection, with many examples in the fields of food science, archaeology, sociology and biochemistry, and a final chapter containing a complete case study using fatty acid compositions in ecology. The applicability of these methods extends to other fields such as linguistics, geochemistry, marketing, economics and finance. R Software: The following repository contains data files and R scripts from the book: https://github.com/michaelgreenacre/CODAinPractice. The R package easyCODA, which accompanies this book, is available on CRAN (note that you should have version 0.25 or higher). The latest version of the package will always be available on R-Forge and can be installed from R with this instruction: install.packages("easyCODA", repos="http://R-Forge.R-project.org").
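The core idea mentioned above, transforming compositions to logarithms of ratios, can be sketched in base R; the made-up percentages are illustrative, not an example from the book:

    # Centred log-ratio (clr) transform of one composition
    x   <- c(fat = 20, protein = 30, carb = 50)   # parts summing to 100%
    clr <- log(x) - mean(log(x))                  # log of each part relative to the geometric mean
    clr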
What happens when a researcher and a practitioner spend hours crammed in a Fiat discussing data visualization? Beyond creating beautiful charts, they found greater richness in the craft as an integrated whole. Drawing from their unconventional backgrounds, these two women take readers through a journey around perception, semantics, and intent as the triad that influences visualization. This visually engaging book blends ideas from theory, academia, and practice to craft beautiful, yet meaningful visualizations and dashboards. How do you take your visualization skills to the next level? The book is perfect for analysts, research and data scientists, journalists, and business professionals. Functional Aesthetics for Data Visualization is also an indispensable resource for just about anyone curious about seeing and understanding data. Think of it as a coffee-table book for the data geek in you. https://www.functionalaestheticsbook.com
* Targets readers with a background in programming who are interested in an introduction to, or refresher on, statistical hypothesis testing * Uses Python throughout * Allows the reader to dip into the book as needed rather than follow a sequential path.
You've decided to tackle machine learning - because you're job hunting, embarking on a new project, or just think self-driving cars are cool. But where to start? It's easy to be intimidated, even as a software developer. The good news is that it doesn't have to be that hard. Master machine learning by writing code one line at a time, from simple learning programs all the way to a true deep learning system. Tackle the hard topics by breaking them down so they're easier to understand, and build your confidence by getting your hands dirty. Peel away the obscurities of machine learning, starting from scratch and going all the way to deep learning. Machine learning can be intimidating, with its reliance on math and algorithms that most programmers don't encounter in their regular work. Take a hands-on approach, writing the Python code yourself, without any libraries to obscure what's really going on. Iterate on your design, and add layers of complexity as you go. Build an image recognition application from scratch with supervised learning. Predict the future with linear regression. Dive into gradient descent, a fundamental algorithm that drives most of machine learning. Create perceptrons to classify data. Build neural networks to tackle more complex and sophisticated data sets. Train and refine those networks with backpropagation and batching. Layer the neural networks, eliminate overfitting, and add convolution to transform your neural network into a true deep learning system. Start from the beginning and code your way to machine learning mastery. What You Need: The examples in this book are written in Python, but don't worry if you don't know this language: you'll pick up all the Python you need very quickly. Apart from that, you'll only need your computer, and your code-adept brain.
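The book itself works in Python, but its linear-regression and gradient-descent thread is language-neutral; here is a minimal, illustrative sketch of gradient descent for a straight-line fit in R (not code from the book):

    set.seed(42)
    x <- runif(100)
    y <- 3 * x + 1 + rnorm(100, sd = 0.1)   # data from a known line, plus noise
    w <- 0; b <- 0; lr <- 0.1               # parameters and learning rate
    for (i in 1:5000) {
      err <- (w * x + b) - y
      w <- w - lr * 2 * mean(err * x)       # step against the gradient of the MSE
      b <- b - lr * 2 * mean(err)
    }
    c(w = w, b = b)                         # close to the true values 3 and 1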
This is the first book of its kind that teaches matrix algebra while allowing the student to learn the material by actually working with matrix objects in the modern computing environment of R. Instead of a calculator, the student uses R, a vastly more powerful free software and graphics system. The book provides a comprehensive overview of matrix theory without getting bogged down in proofs or tedium. The reader can check each matrix result with numerical examples, see exactly what it means, and understand its implications. The book does not shy away from advanced topics, especially the ones with practical applications.
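A few base-R one-liners show the kind of hands-on checking the description has in mind (illustrative, not drawn from the book):

    A <- matrix(c(2, 1, 1, 3), nrow = 2)   # a 2 x 2 matrix, filled by column
    b <- c(1, 2)
    A %*% solve(A)                         # matrix times its inverse: the identity
    solve(A, b)                            # solve the linear system A x = b
    eigen(A)$values                        # eigenvalues
    t(A)                                   # transpose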
Enterprise Level Security 2: Advanced Topics in an Uncertain World follows on from the authors' first book on Enterprise Level Security (ELS), which covered the basic concepts of ELS and the discoveries made during the first eight years of its development. This book builds on that foundation with a discussion of advanced topics and solutions, derived from 16 years of research, pilots, and operational trials in putting an enterprise system together. The chapters cover specific advanced topics derived from painful mistakes and numerous revisions of processes. This book covers many of the topics omitted from the first book, including multi-factor authentication, cloud key management, enterprise change management, entity veracity, homomorphic computing, device management, mobile ad hoc, big data, mediation, and several other topics. The ELS model of enterprise security is endorsed by the Secretary of the Air Force for Air Force computing systems and is a candidate for DoD systems under the Joint Information Environment Program. The book is intended for enterprise IT architecture developers, application developers, and IT security professionals. This is a unique approach to end-to-end security and fills a niche in the market.
Using the same accessible, hands-on approach as its best-selling predecessor, the Handbook of Univariate and Multivariate Data Analysis with IBM SPSS, Second Edition explains how to apply statistical tests to experimental findings, identify the assumptions underlying the tests, and interpret the findings. This second edition now covers more topics and has been updated with the SPSS statistical package for Windows. New to the Second Edition: * Three new chapters on multiple discriminant analysis, logistic regression, and canonical correlation * New section on how to deal with missing data * Coverage of tests of assumptions, such as linearity, outliers, normality, homogeneity of variance-covariance matrices, and multicollinearity * Discussions of the calculation of Type I error and the procedure for testing statistical significance between two correlation coefficients obtained from two samples * Expanded coverage of factor analysis, path analysis (test of the mediation hypothesis), and structural equation modeling Suitable for both newcomers and seasoned researchers in the social sciences, the handbook offers a clear guide to selecting the right statistical test, executing a wide range of univariate and multivariate statistical tests via the Windows and syntax methods, and interpreting the output results. The SPSS syntax files used for executing the statistical tests can be found in the appendix. Data sets employed in the examples are available on the book's CRC Press web page.
With the advancement of statistical methodology inextricably linked to the use of computers, new methodological ideas must be translated into usable code and then numerically evaluated relative to competing procedures. In response to this, Statistical Computing in C++ and R concentrates on the writing of code rather than the development and study of numerical algorithms per se. The book discusses code development in C++ and R and the use of these symbiotic languages in unison. It emphasizes that each offers distinct features that, when used in tandem, can take code writing beyond what can be obtained from either language alone. The text begins with some basics of object-oriented languages, followed by a "boot-camp" on the use of C++ and R. The authors then discuss code development for the solution of specific computational problems that are relevant to statistics, including optimization, numerical linear algebra, and random number generation. Later chapters introduce abstract data types (ADTs) and parallel computing concepts. The appendices cover R and UNIX Shell programming. Features: * Includes numerous student exercises ranging from elementary to challenging * Integrates both C++ and R for the solution of statistical computing problems * Uses C++ code in R and R functions in C++ programs * Provides downloadable programs, available from the authors' website The translation of a mathematical problem into its computational analog (or analogs) is a skill that must be learned, like any other, by actively solving relevant problems. The text reveals the basic principles of algorithmic thinking essential to the modern statistician as well as the fundamental skill of communicating with a computer through the use of the computer languages C++ and R. The book lays the foundation for original code development in a research environment.
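As a sketch of the idea of using C++ code in R, here the widely used Rcpp package stands in; the authors develop their own material, so treat this as an assumption rather than the book's method:

    library(Rcpp)

    # Compile a small C++ function and expose it to R
    cppFunction('
      double sumC(NumericVector x) {
        double total = 0;
        for (int i = 0; i < x.size(); ++i) total += x[i];
        return total;
      }
    ')

    sumC(c(1, 2, 3.5))   # 6.5, computed by the compiled C++ code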
This book is a timely and critical introduction for those interested in what data science is (and isn't), and how it should be applied. The language is conversational and the content is accessible for readers without a quantitative or computational background; but, at the same time, it is also a practical overview of the field for the more technical readers. The overarching goal is to demystify the field and teach the reader how to develop an analytical mindset instead of following recipes. The book takes the scientist's approach of focusing on asking the right question at every step as this is the single most important factor contributing to the success of a data science project. Upon finishing this book, the reader should be asking more questions than I have answered. This book is, therefore, a practising scientist's approach to explaining data science through questions and examples.
There is no shortage of incentives to study and reduce poverty in our societies. Poverty is studied in economics and political science, and population surveys are an important source of information about it. The design and analysis of such surveys is principally a statistical subject matter, and the computer is essential for their data compilation and processing. Focusing on the European Union Statistics on Income and Living Conditions (EU-SILC), a program of annual national surveys which collect data related to poverty and social exclusion, Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R presents a set of statistical analyses pertinent to the general goals of EU-SILC. The contents of the volume are biased toward computing and statistics, with reduced attention to economics, political and other social sciences. The emphasis is on methods and procedures as opposed to results, because survey data released after publication will inevitably date the particular datasets used and the results derived in this volume. The aim of this volume is not to propose specific methods of analysis, but to open up the analytical agenda and address the aspects of the key definitions in the subject of poverty assessment that entail nontrivial elements of arbitrariness. The presented methods do not exhaust the range of analyses suitable for EU-SILC, but will stimulate the search for new methods and adaptation of established methods that cater to the identified purposes.
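For a flavor of the kind of inequality summary such analyses produce, the Gini coefficient of an income vector can be computed in base R with one standard formula (an illustrative sketch, not code from the book):

    gini <- function(y) {
      y <- sort(y)
      n <- length(y)
      sum((2 * seq_len(n) - n - 1) * y) / (n * sum(y))
    }
    gini(c(10, 20, 30, 40, 100))   # 0.4 for this small example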
A "how to" guide for applying statistical methods to biomarker data analysis Presenting a solid foundation for the statistical methods that are used to analyze biomarker data, Analysis of Biomarker Data: A Practical Guide features preferred techniques for biomarker validation. The authors provide descriptions of select elementary statistical methods that are traditionally used to analyze biomarker data with a focus on the proper application of each method, including necessary assumptions, software recommendations, and proper interpretation of computer output. In addition, the book discusses frequently encountered challenges in analyzing biomarker data and how to deal with them, methods for the quality assessment of biomarkers, and biomarker study designs. Covering a broad range of statistical methods that have been used to analyze biomarker data in published research studies, Analysis of Biomarker Data: A Practical Guide also features: A greater emphasis on the application of methods as opposed to the underlying statistical and mathematical theory The use of SAS(R), R, and other software throughout to illustrate the presented calculations for each example Numerous exercises based on real-world data as well as solutions to the problems to aid in reader comprehension The principles of good research study design and the methods for assessing the quality of a newly proposed biomarker A companion website that includes a software appendix with multiple types of software and complete data sets from the book's examples Analysis of Biomarker Data: A Practical Guide is an ideal upper-undergraduate and graduate-level textbook for courses in the biological or environmental sciences. An excellent reference for statisticians who routinely analyze and interpret biomarker data, the book is also useful for researchers who wish to perform their own analyses of biomarker data, such as toxicologists, pharmacologists, epidemiologists, environmental and clinical laboratory scientists, and other professionals in the health and environmental sciences. |