R for Cloud Computing looks at some of the tasks business analysts perform on the desktop (the PC era) and helps the reader navigate the wealth of information in R and its 4,000 packages, then transition the same analytics to the cloud. With this information the reader can choose among cloud vendors in a sometimes confusing cloud ecosystem, as well as the R packages that can handle analytical tasks with minimum effort and cost and maximum usefulness and customization. The book emphasizes graphical user interfaces (GUIs) and step-by-step screenshot tutorials, both to flatten R's famously steep learning curve and to cut through the needless confusion around cloud computing that hinders its widespread adoption. Chapters on cloud computing, R, common analytics tasks (including the current focus on, and scrutiny of, Big Data analytics), and setting up and navigating cloud providers will help you kick-start analytics on the cloud. Readers are exposed to a breadth of cloud computing choices and analytics topics without being buried in needless depth, and the included references and links allow the reader to pursue business analytics on the cloud easily. The book is aimed at practical analytics and makes it easy to transition from an existing analytical setup to the cloud on an open source system based primarily on R. It is written for industry practitioners with basic programming skills and for students who want to enter analytics as a profession; its scope is neither statistical theory nor graduate-level statistical research, but business analytics practice. It will also help researchers and academics, though at a practical rather than conceptual level. The R statistical software is the fastest-growing analytics platform in the world and is established in both academia and corporations for robustness, reliability, and accuracy. The cloud computing paradigm is firmly established as the next generation of computing, from microprocessors to desktop PCs to the cloud.
The matrix laboratory interactive computing environment MATLAB has brought creativity to research in diverse disciplines, particularly in designing and programming experiments. More commonly used in mathematics and the sciences, it also lends itself to a variety of applications across the field of psychology. For the novice looking to use it in experimental psychology research, though, becoming familiar with MATLAB can be a daunting task. "MATLAB for Psychologists" expertly guides readers through the component steps, skills, and operations of the software, with plentiful graphics and examples to match the reader's comfort level. Using an extended illustration, this concise volume explains the program's usefulness at any point in an experiment, without the limits imposed by other types of software. And the authors demonstrate the responsiveness of MATLAB to the individual's research needs, whether the task is programming experiments, creating sensory stimuli, running simulations, or calculating statistics for data analysis. Key features of the coverage:
- Thinking in a matrix way
- Handling and plotting data
- Guidelines for improved programming, sound, and imaging
- Statistical analysis and signal detection theory indexes
- The graphical user interface
- The Psychophysics Toolbox
"MATLAB for Psychologists" serves a wide audience of advanced undergraduate and graduate level psychology students, professors, and researchers, as well as lab technicians involved in programming psychology experiments.
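Among the features listed, the signal detection theory indexes reduce to short calculations. As a language-neutral illustration (my own sketch, not material from the book, whose examples are in MATLAB), here is the d' sensitivity index in Python; the hit and false-alarm rates are invented:

```python
# A minimal sketch of the d-prime signal detection index: the
# standardized distance between signal and noise distributions,
# computed from hit and false-alarm rates (values invented).
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

print(d_prime(0.85, 0.20))  # roughly 1.88 for this pair of rates
```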
Now in its second edition, this textbook provides an introduction to Python and its use for statistical data analysis. It covers common statistical tests for continuous, discrete, and categorical data, as well as linear regression analysis and topics from survival analysis and Bayesian statistics. For this new edition, the introductory chapters on Python, data input, and visualization have been reworked and updated. The chapter on experimental design has been expanded, and programs for the determination of confidence intervals commonly used in quality control have been introduced. The book also features a new chapter on finding patterns in data, including time series. A new appendix describes useful programming tools, such as testing tools, code repositories, and GUIs. The provided working code for Python solutions, together with easy-to-follow examples, will reinforce the reader's immediate understanding of the topic. Accompanying data sets and Python programs are also available online. With recent advances in its ecosystem, Python has become a popular language for scientific computing and offers a powerful environment for statistical data analysis. With examples drawn mainly from the life and medical sciences, this book is intended primarily for master's and PhD students. As it provides the required statistics background, the book can also be used by anyone who wants to perform a statistical data analysis.
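To give a flavor of the kind of test the book covers, here is a minimal sketch of my own (not the book's code; the measurements are invented) showing a two-sample t-test and a confidence interval in Python:

```python
# Two-sample t-test plus a 95% confidence interval, using scipy.
# The data values are invented for illustration.
import numpy as np
from scipy import stats

group_a = np.array([5.1, 4.9, 6.2, 5.8, 5.5])
group_b = np.array([4.2, 4.8, 4.4, 5.0, 4.1])

# Test for a difference in means between the two groups
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# 95% confidence interval for the mean of group_a
ci = stats.t.interval(0.95, df=len(group_a) - 1,
                      loc=group_a.mean(), scale=stats.sem(group_a))
print(t_stat, p_value, ci)
```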
Free Mathematica 10 Update Included! Now available from www.wiley.com/go/magrab. Updated material includes:
- Creating regions and volumes of arbitrary shape and determining their properties: arc length, area, centroid, and area moment of inertia
- Performing integrations, solving equations, and determining the maximum and minimum values over regions of arbitrary shape
- Solving numerically a class of linear second-order partial differential equations in regions of arbitrary shape using finite elements
An Engineer's Guide to Mathematica enables the reader to attain the skills to create Mathematica 9 programs that solve a wide range of engineering problems and that display the results with annotated graphics. This book can be used to learn Mathematica, as a companion to engineering texts, and also as a reference for obtaining numerical and symbolic solutions to a wide range of engineering topics. The material is presented in an engineering context and the creation of interactive graphics is emphasized. The first part of the book introduces Mathematica's syntax and commands useful in solving engineering problems. Tables are used extensively to illustrate families of commands and the effects that different options have on their output. From these tables, one can easily determine which options will satisfy one's current needs. The order of the material is introduced so that the engineering applicability of the examples increases as one progresses through the chapters. The second part of the book obtains solutions to representative classes of problems in a wide range of engineering specialties. Here, the majority of the solutions are presented as interactive graphics so that the results can be explored parametrically. Key features:
- Material is based on Mathematica 9
- Presents over 85 examples on a wide range of engineering topics, including vibrations, controls, fluids, heat transfer, structures, statistics, engineering mathematics, and optimization
- Each chapter contains a summary table of the Mathematica commands used, for ease of reference
- Includes a table of applications summarizing all of the engineering examples presented
- Accompanied by a website containing Mathematica notebooks of all the numbered examples
An Engineer's Guide to Mathematica is a must-have reference for practitioners, and graduate and undergraduate students who want to learn how to solve engineering problems with Mathematica.
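The book itself is about Mathematica; as a loose free-software analogue of the "perform integrations, solve equations" workflow it describes (my own sketch, not the book's material), the same two steps in Python's sympy look like this:

```python
# Symbolic integration and equation solving with sympy, as a rough
# analogue of the Mathematica workflow described in the blurb.
from sympy import symbols, integrate, solve, sin, pi, Eq

x = symbols('x')
print(integrate(sin(x)**2, (x, 0, pi)))  # pi/2
print(solve(Eq(x**2 - 2, 0), x))         # [-sqrt(2), sqrt(2)]
```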
Learn How to Program Stochastic Models. Highly recommended, the best-selling first edition of Introduction to Scientific Programming and Simulation Using R was lauded as an excellent, easy-to-read introduction with extensive examples and exercises. This second edition continues to introduce scientific programming and stochastic modelling in a clear, practical, and thorough way. Readers learn programming by experimenting with the provided R code and data. The book's four parts teach:
- Core knowledge of R and programming concepts
- How to think about mathematics from a numerical point of view, including the application of these concepts to root finding, numerical integration, and optimisation
- Essentials of probability, random variables, and expectation required to understand simulation
- Stochastic modelling and simulation, including random number generation and Monte Carlo integration
In a new chapter on systems of ordinary differential equations (ODEs), the authors cover the Euler, midpoint, and fourth-order Runge-Kutta (RK4) schemes for solving systems of first-order ODEs. They compare the numerical efficiency of the different schemes experimentally and show how to improve the RK4 scheme by using an adaptive step size. Another new chapter focuses on both discrete- and continuous-time Markov chains. It describes transition and rate matrices, classification of states, limiting behaviour, Kolmogorov forward and backward equations, finite absorbing chains, and expected hitting times. It also presents methods for simulating discrete- and continuous-time chains as well as techniques for defining the state space, including lumping states and supplementary variables. Building readers' statistical intuition, Introduction to Scientific Programming and Simulation Using R, Second Edition shows how to turn algorithms into code. It is designed for those who want to make tools, not just use them. The code and data are available for download from CRAN.
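The RK4 scheme mentioned above is compact enough to sketch. The book's code is in R; the following fixed-step version in Python is my own illustration of the classical update for a system y' = f(t, y), not the authors' implementation:

```python
# Classical fourth-order Runge-Kutta (RK4) with a fixed step size,
# applied to a system of first-order ODEs y' = f(t, y).
import numpy as np

def rk4(f, t0, y0, t_end, h):
    """Integrate y' = f(t, y) from t0 to t_end with fixed step h."""
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y.copy()]
    while t < t_end - 1e-12:
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        ts.append(t)
        ys.append(y.copy())
    return np.array(ts), np.array(ys)

# Exponential decay y' = -y, y(0) = 1: the numerical value at t = 2
# should track exp(-2) closely even with a modest step size.
ts, ys = rk4(lambda t, y: -y, 0.0, [1.0], 2.0, 0.1)
print(ys[-1], np.exp(-2.0))  # both approximately 0.1353
```

An adaptive variant, as the authors describe, would compare two estimates of different order at each step and grow or shrink h accordingly.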
This is the first book to show the capabilities of Microsoft Excel to teach biological and life sciences statistics effectively. It is a step-by-step, exercise-driven guide for students and practitioners who need to master Excel to solve practical science problems. If understanding statistics isn't your strongest suit, you are not especially mathematically inclined, or you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in science courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, "Excel 2010 for Biological and Life Sciences Statistics: A Guide to Solving Practical Problems" is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each chapter explains statistical formulas and directs the reader to use Excel commands to solve specific, easy-to-understand science problems. Practice problems are provided at the end of each chapter, with their solutions in an appendix. Separately, there is a full practice test (with answers in an appendix) that allows readers to test what they have learned.
An overview of the theory and application of linear and nonlinear mixed-effects models in the analysis of grouped data, such as longitudinal data, repeated measures, and multilevel data. The authors present a unified model-building strategy for both models and apply this to the analysis of over 20 real datasets from a wide variety of areas, including pharmacokinetics, agriculture, and manufacturing. Much emphasis is placed on the use of graphical displays at the various phases of the model-building process, starting with exploratory plots of the data and concluding with diagnostic plots to assess the adequacy of a fitted model. The NLME library for analyzing mixed-effects models in S and S-PLUS, developed by the authors, provides the underlying software for implementing the methods presented. This balanced mix of real data examples, modeling software, and theory makes the book a useful reference for practitioners who use, or intend to use, mixed-effects models in their data analyses. It can also be used as a text for a one-semester graduate-level applied course.
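The book's software is the authors' NLME library for S and S-PLUS (continued in R as the nlme package). As a loose illustration in another ecosystem, here is a random-intercept model in Python's statsmodels; this is a sketch of my own with invented data and column names ("y", "x", "group"), not the book's material:

```python
# Fit a linear mixed-effects model (random intercept per group) to
# simulated grouped data, using statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = np.repeat(np.arange(10), 20)                  # 10 groups, 20 obs each
x = rng.normal(size=groups.size)
group_effect = rng.normal(scale=0.8, size=10)[groups]  # random intercepts
y = 1.0 + 2.0 * x + group_effect + rng.normal(scale=0.5, size=groups.size)
data = pd.DataFrame({"y": y, "x": x, "group": groups})

model = smf.mixedlm("y ~ x", data, groups=data["group"]).fit()
print(model.summary())  # fixed effects near 1.0 and 2.0, plus variance components
```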
This book evolved from lectures, courses and workshops on missing data and small-area estimation that I presented during my tenure as the first Campion Fellow (2000-2002). For the Fellowship I proposed these two topics as areas in which academic statistics could contribute to the development of government statistics, in exchange for access to the operational details and background that would inform the direction and sharpen the focus of academic research. After a few years of involvement, I have come to realise that the separation of 'academic' and 'industrial' statistics is not well suited to either party, and their integration is the key to progress in both branches. Most of the work on this monograph was done while I was a visiting lecturer at Massey University, Palmerston North, New Zealand. The hospitality and stimulating academic environment of their Institute of Information Science and Technology is gratefully acknowledged. I could not name all those who commented on my lecture notes and on the presentations themselves; apart from them, I want to thank the organisers and silent attendees of all the events, and, with a modicum of reluctance, the 'grey figures' who kept inquiring whether I was any nearer the completion of whatever stage I had been foolish enough to attach a date to.
This textbook offers an algorithmic introduction to the field of computer algebra. A leading expert in the field, the author guides readers through numerous hands-on tutorials designed to build practical skills and algorithmic thinking. This implementation-oriented approach equips readers with versatile tools that can be used to enhance studies in mathematical theory, applications, or teaching. Presented using Mathematica code, the book is fully supported by downloadable sessions in Mathematica, Maple, and Maxima. Opening with an introduction to computer algebra systems and the basics of programming mathematical algorithms, the book goes on to explore integer arithmetic. A chapter on modular arithmetic completes the number-theoretic foundations, which are then applied to coding theory and cryptography. From here, the focus shifts to polynomial arithmetic and algebraic numbers, with modern algorithms allowing the efficient factorization of polynomials. The final chapters offer extensions into more advanced topics: simplification and normal forms, power series, summation formulas, and integration. Computer Algebra is an indispensable resource for mathematics and computer science students new to the field. Numerous examples illustrate algorithms and their implementation throughout, with online support materials to encourage hands-on exploration. Prerequisites are minimal, with only a knowledge of calculus and linear algebra assumed. In addition to classroom use, the elementary approach and detailed index make this book an ideal reference for algorithms in computer algebra.
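As a taste of the modular-arithmetic foundations described above, here is a minimal Python sketch of my own (the book's sessions are in Mathematica, Maple, and Maxima): the extended Euclidean algorithm and the modular inverse it yields, a building block of the coding-theory and cryptography applications:

```python
# Extended Euclidean algorithm and modular inverse.
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Inverse of a modulo m, defined when gcd(a, m) == 1."""
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("no inverse: a and m are not coprime")
    return x % m

print(mod_inverse(7, 26))  # 15, since 7 * 15 = 105 = 4 * 26 + 1
```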
The richly illustrated Interactive Web-Based Data Visualization with R, plotly, and shiny focuses on the process of programming interactive web graphics for multidimensional data analysis. It is written for the data analyst who wants to leverage the capabilities of interactive web graphics without having to learn web programming. Through many R code examples, you will learn how to tap the extensive functionality of these tools to enhance the presentation and exploration of data. By mastering these concepts and tools, you will impress your colleagues with your ability to quickly generate more informative, engaging, and reproducible interactive graphics using free and open source software that you can share over email, export to PDF, and more. Key features:
- Convert static ggplot2 graphics to an interactive web-based form
- Link, animate, and arrange multiple plots in standalone HTML from R
- Embed, modify, and respond to plotly graphics in a shiny app
- Learn best practices for visualizing continuous, discrete, and multivariate data
- Learn numerous ways to visualize geo-spatial data
This book makes heavy use of plotly for graphical rendering, but you will also learn about other R packages that support different phases of a data science workflow, such as tidyr, dplyr, and tidyverse. Along the way, you will gain insight into best practices for visualization of high-dimensional data, statistical graphics, and graphical perception. The printed book is complemented by an interactive website where readers can view movies demonstrating the examples and interact with graphics.
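The book's examples are in R, but plotly's Python bindings wrap the same JavaScript library, so the "create a chart, share it as standalone HTML" workflow translates directly. A minimal sketch of my own (not from the book), using plotly's bundled Gapminder sample data:

```python
# An interactive scatter chart exported as a standalone HTML file.
import plotly.express as px

df = px.data.gapminder().query("year == 2007")
fig = px.scatter(df, x="gdpPercap", y="lifeExp",
                 size="pop", color="continent",
                 hover_name="country", log_x=True)
fig.write_html("gapminder_2007.html")  # self-contained, shareable over email
```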
This book provides a friendly introduction to the paradigm and proposes a broad panorama of killer applications of the Infinity Computer in optimization: radically new numerical algorithms, great theoretical insights, efficient software implementations, and interesting practical case studies. It is the first book presenting to readers interested in optimization the advantages of a recently introduced supercomputing paradigm that allows one to work numerically with different infinities and infinitesimals on the Infinity Computer, patented in several countries. One of the editors of the book is the creator of the Infinity Computer, and another editor was the first to use it in optimization; their results have been recognized with numerous scientific prizes. This engaging book opens new horizons for researchers, engineers, professors, and students with interests in supercomputing paradigms, optimization, decision making, game theory, and the foundations of mathematics and computer science.
"Mathematicians have never been comfortable handling infinities... But an entirely new type of mathematics looks set to by-pass the problem... Today, Yaroslav Sergeyev, a mathematician at the University of Calabria in Italy solves this problem..." MIT Technology Review
"These ideas and future hardware prototypes may be productive in all fields of science where infinite and infinitesimal numbers (derivatives, integrals, series, fractals) are used." A. Adamatzky, Editor-in-Chief of the International Journal of Unconventional Computing
"I am sure that the new approach ... will have a very deep impact both on Mathematics and Computer Science." D. Trigiante, Computational Management Science
"Within the grossone framework, it becomes feasible to deal computationally with infinite quantities, in a way that is both new (in the sense that previously intractable problems become amenable to computation) and natural." R. Gangle, G. Caterina, F. Tohme, Soft Computing
"The computational features offered by the Infinity Computer allow us to dynamically change the accuracy of representation and floating-point operations during the flow of a computation. When suitably implemented, this possibility turns out to be particularly advantageous when solving ill-conditioned problems. In fact, compared with a standard multi-precision arithmetic, here the accuracy is improved only when needed, thus not affecting that much the overall computational effort." P. Amodio, L. Brugnano, F. Iavernaro & F. Mazzia, Soft Computing
The book presents the fundamental concepts of asymptotic statistical inference theory, elaborating on basic large-sample optimality properties of estimators and some test procedures. The most desirable property, consistency of an estimator, and its large-sample distribution, with suitable normalization, are discussed, the focus being on consistent and asymptotically normal (CAN) estimators. It is shown that for probability models belonging to an exponential family and a Cramer family, the maximum likelihood estimators of the indexing parameters are CAN. The book describes some large-sample test procedures, in particular the most frequently used likelihood ratio test procedure. Various applications of the likelihood ratio test are addressed when the underlying probability model is a multinomial distribution, including tests for goodness of fit and tests for contingency tables. The book also discusses the score test and Wald's test, their relationship with the likelihood ratio test, and Karl Pearson's chi-square test. An important finding is that, when testing any hypothesis about the parameters of a multinomial distribution, the score test statistic and Karl Pearson's chi-square test statistic are identical.
Numerous illustrative examples of differing difficulty levels are incorporated to clarify the concepts, and various exercises are included in each chapter for better assimilation of the notions. Solutions to almost all the exercises are given in the last chapter, to motivate students to solve them and to aid digestion of the underlying concepts. The concepts of asymptotic inference are crucial in modern statistics but are difficult to grasp in view of their abstract nature. To overcome this difficulty, and in keeping with the recent trend of using R software for statistical computations, the book uses R extensively to illustrate the concepts, verify the properties of estimators, and carry out various test procedures. The last section of each chapter presents R code that reveals and visually demonstrates the hidden aspects of different concepts and procedures; augmenting the theory with R software is a novel and unique feature of the book.
The book is designed primarily to serve as a textbook for a one-semester introductory course in asymptotic statistical inference in a postgraduate program such as Statistics, Biostatistics, or Econometrics. It also provides sufficient background for studying inference in stochastic processes, and caters to the need for a concise but clear and student-friendly book introducing, conceptually and computationally, the basics of asymptotic inference.
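As a numerical companion to the multinomial testing discussion, here is a minimal sketch of my own in Python (the book's computations are in R; the counts are invented), computing Pearson's chi-square statistic alongside the likelihood ratio statistic for a goodness-of-fit test:

```python
# Pearson chi-square vs. likelihood ratio (G) statistic for a
# multinomial goodness-of-fit test; both are asymptotically
# chi-square with k - 1 degrees of freedom.
import numpy as np
from scipy.stats import chi2

observed = np.array([18, 55, 27])       # invented counts
p0 = np.array([0.25, 0.50, 0.25])       # hypothesized cell probabilities
expected = observed.sum() * p0

pearson = np.sum((observed - expected) ** 2 / expected)
g_stat = 2 * np.sum(observed * np.log(observed / expected))
df = len(observed) - 1

print("Pearson X^2:", pearson, "p =", chi2.sf(pearson, df))
print("LRT G:      ", g_stat,  "p =", chi2.sf(g_stat, df))
```

The two statistics typically agree closely in large samples, reflecting the asymptotic relationships among the tests that the book develops.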
This book is a text for a one-semester course for upper-level undergraduates and beginning graduate students in engineering, science, and mathematics. Prerequisites are a first course in the theory of ODEs and a survey course in numerical analysis, in addition to specific programming experience, preferably in MATLAB, and knowledge of elementary matrix theory. Professionals will also find this concise reference useful: it contains reviews of technical issues and realistic, detailed examples. The programs for the examples are supplied on the accompanying web site and can serve as templates for solving other problems. Each chapter begins with a discussion of the "facts of life" for the problem, mainly by means of examples. Numerical methods for the problem are then developed, but only those methods most widely used. The treatment of each method is brief and technical issues are minimized, but all the issues important in practice and for understanding the codes are discussed. The last part of each chapter is a tutorial that shows how to solve problems by means of small but realistic examples.
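The book's template programs are MATLAB codes built around production ODE solvers; as a loose cross-language analogue (my sketch, not the book's), calling such a solver on a small initial value problem looks like this in Python:

```python
# Solve the initial value problem y' = -y, y(0) = 1 on [0, 2] with a
# production Runge-Kutta solver.
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: -y, t_span=(0.0, 2.0), y0=[1.0],
                method="RK45", rtol=1e-8)
print(sol.t[-1], sol.y[0, -1])  # y(2) close to exp(-2), about 0.1353
```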
Six Sigma has arisen in the last two decades as a breakthrough quality management methodology. With Six Sigma, we solve problems and improve processes using, as a basis, one of the most powerful tools of human development: the scientific method. For the analysis of data, Six Sigma requires the use of statistical software, and R is an open source option that fulfills this requirement. R is a software system, including a programming language, widely used in academic and research departments, and it is nowadays becoming a real alternative within corporate environments. The aim of this book is to show how R can be used as the software tool in the development of Six Sigma projects. The book includes a gentle introduction to Six Sigma and a variety of examples showing how to use R within real situations. It has been conceived as a self-contained piece; therefore, it is addressed not only to Six Sigma practitioners but also to professionals trying to initiate themselves in this management methodology. The book may be used as a textbook as well.
The first part of this title contained all the statistical tests that are relevant for starters on SPSS, including standard parametric and non-parametric tests for continuous and binary variables, regression methods, trend tests, and reliability and validity assessments of diagnostic tests. The current part 2 reviews multistep methods, multivariate models, assessments of missing data, performance of diagnostic tests, meta-regression, Poisson regression, confounding and interaction, and survival analyses using log rank tests and segmented time-dependent Cox regression. Methods for assessing nonlinear models, data seasonality, distribution-free methods, including Monte Carlo methods and artificial intelligence, and robust tests are also covered. Each method of testing is explained using a data example from clinical practice, including every step in SPSS, together with a text interpreting the results and hints convenient for data reporting. To facilitate the use of this cookbook, the data files of the examples are made available by the editor through extras.springer.com. Both parts 1 and 2 of this title contain a minimal amount of text and maximal technical detail, but we believe that this property will not keep students from mastering the SPSS software systematics and that, instead, it will help toward that aim. Still, we recommend that it be used together with the textbook "Statistics Applied to Clinical Trials" (5th edition, Springer, Dordrecht, 2012) and the e-books "Statistics on a Pocket Calculator, Parts 1 and 2" (Springer, Dordrecht, 2011 and 2012) from the same authors.
Stata is the most flexible and extensible data analysis package available from a commercial vendor. R is a similarly flexible free and open source package for data analysis, with over 3,000 add-on packages available. This book shows you how to extend the power of Stata through the use of R. It introduces R using Stata terminology with which you are already familiar. It steps through more than 30 programs written in both languages, comparing and contrasting the two packages' different approaches. When finished, you will be able to use R in conjunction with Stata, or separately, to import data, manage and transform it, create publication quality graphics, and perform basic statistical analyses. A glossary defines over 50 R terms using Stata jargon and again using more formal R terminology. The table of contents and index allow you to find equivalent R functions by looking up Stata commands and vice versa. The example programs and practice datasets for both R and Stata are available for download.
Bayesian statistical methods have become widely used for data analysis and modelling in recent years, and the BUGS software has become the most popular software for Bayesian analysis worldwide. Authored by the team that originally developed this software, The BUGS Book provides a practical introduction to this program and its use. The text presents complete coverage of all the functionalities of BUGS, including prediction, missing data, model criticism, and prior sensitivity. It also features a large number of worked examples and a wide range of applications from various disciplines. The book introduces regression models, techniques for criticism and comparison, and a wide range of modelling issues before going into the vital area of hierarchical models, one of the most common applications of Bayesian methods. It deals with essentials of modelling without getting bogged down in complexity. The book emphasises model criticism, model comparison, sensitivity analysis to alternative priors, and thoughtful choice of prior distributions: all those aspects of the "art" of modelling that are easily overlooked in more theoretical expositions. More pragmatic than ideological, the authors systematically work through the large range of "tricks" that reveal the real power of the BUGS software, for example, dealing with missing data, censoring, grouped data, prediction, ranking, parameter constraints, and so on. Many of the examples are biostatistical, but they do not require domain knowledge and are generalisable to a wide range of other application areas. Full code and data for examples, exercises, and some solutions can be found on the book's website.
Metrics are a hot topic. Executive leadership, boards of directors, management, and customers are all asking for data-based decisions. As a result, many managers, professionals, and change agents are asked to develop metrics, but have no clear idea of how to produce meaningful ones. Wouldn't it be great to have a simple explanation of how to collect, analyze, report, and use measurements to improve your organization? Metrics: How to Improve Key Business Results provides that explanation and the tools you'll need to make your organization more effective. Not only does the book explain the why of metrics, but it walks you through a step-by-step process for creating a report card that provides a clear picture of organizational health and how well you satisfy customer needs. Metrics will help you to measure the right things, the right way, the first time. No wasted effort, no chasing data. The report card provides a simple tool for viewing the health of your organization, from the outside in. You will learn how to measure the key components of the report card and thereby improve real measures of business success, like repeat customers, customer loyalty, and word-of-mouth advertising. This book:
- Provides a step-by-step guide for building an organizational effectiveness report card
- Takes you from identifying key services and products and using metrics, to determining business strategy
- Provides examples of how to identify, collect, analyze, and report metrics that will be immediately useful for improving all aspects of the enterprise, including IT
What you'll learn:
- Understand the difference between data, measures, information, and metrics
- Identify root performance questions to ensure you build the right metrics
- Develop meaningful and accurate metrics using concrete, easy-to-follow instructions
- Avoid the high risks that come with collecting, analyzing, reporting, and using complex data
- Formulate practical answers to data-based questions
- Select and use the proper tools for creating, implementing, and using metrics
- Learn one of the most powerful methods yet invented for improving organizational results
Who this book is for: Metrics: How to Improve Key Business Results was written for senior managers who need to improve key results. Equally, the book is for the department heads, middle managers, analysts, IT professionals, and change agents responsible for collecting, analyzing, and reporting metrics. Finally, it's for those who have to chase data and find meaningful answers to the interesting questions executives ponder. Table of contents:
- Introduction: Who, What, Where, When, Why, and How You Use Metrics
- Establishing a Common Language
- Where to Begin: Planning a Good Metric
- Using Metrics as Indicators
- Using the Answer Key
- Start with Effectiveness
- Triangulation: Essential to Creating Effective Metrics
- Expectations: How to View Data in a Meaningful Way
- Creating and Interpreting the Metrics Report Card
- Final Product: the Metrics Report Card
- Employing Advanced Metrics
- Creating the Service Catalog
- Establishing Standards and Benchmarks
- Respecting the Power of Metrics
- Avoiding the Research Trap
- Embracing Your Organization's Uniqueness
- Appendix: Metrics Tools to Use and Useful Resources
Computational inference is based on an approach to statistical methods that uses modern computational power to simulate distributional properties of estimators and test statistics. This book describes computationally intensive statistical methods in a unified presentation, emphasizing techniques, such as the PDF decomposition, that arise in a wide range of methods.
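The core idea, simulating the distributional properties of an estimator rather than deriving them analytically, fits in a few lines. Here is a minimal sketch of my own in Python with invented data (not code from the book): a bootstrap estimate of the standard error of a sample median:

```python
# Bootstrap the sampling distribution of the median to estimate its
# standard error, a task with no convenient closed form.
import numpy as np

rng = np.random.default_rng(42)
sample = rng.exponential(scale=2.0, size=50)   # invented data

boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(5000)
])
print("bootstrap SE of the median:", boot_medians.std(ddof=1))
```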
This open access book contains review papers authored by thirteen plenary invited speakers to the 9th International Congress on Industrial and Applied Mathematics (Valencia, July 15-19, 2019). Written by top-level scientists recognized worldwide, the scientific contributions cover a wide range of cutting-edge topics of industrial and applied mathematics: mathematical modeling, industrial and environmental mathematics, mathematical biology and medicine, reduced-order modeling and cryptography. The book also includes an introductory chapter summarizing the main features of the congress. This is the first volume of a thematic series dedicated to research results presented at ICIAM 2019-Valencia Congress.
Data mining is the art and science of intelligent data analysis. By building knowledge from information, data mining adds considerable value to the ever-increasing stores of electronic data that abound today. In performing data mining, many decisions need to be made regarding the choice of methodology, the choice of data, the choice of tools, and the choice of algorithms. Throughout this book the reader is introduced to the basic concepts and some of the more popular algorithms of data mining. With a focus on the hands-on, end-to-end process for data mining, Williams guides the reader through various capabilities of the easy-to-use, free, and open source Rattle Data Mining Software, built on the sophisticated R Statistical Software. The focus on doing data mining rather than just reading about data mining is refreshing. The book covers data understanding, data preparation, data refinement, model building, model evaluation, and practical deployment. The reader will learn to rapidly deliver a data mining project using software easily installed for free from the Internet. Coupling Rattle with R delivers a very sophisticated data mining environment, with all the power, and more, of the many commercial offerings.
Advanced R helps you understand how R works at a fundamental level. It is designed for R programmers who want to deepen their understanding of the language, and for programmers experienced in other languages who want to understand what makes R different and special. This book will teach you the foundations of R; three fundamental programming paradigms (functional, object-oriented, and metaprogramming); and powerful techniques for debugging and optimising your code. By reading this book, you will learn:
- The difference between an object and its name, and why the distinction is important
- The important vector data structures, how they fit together, and how you can pull them apart using subsetting
- The fine details of functions and environments
- The condition system, which powers messages, warnings, and errors
- The powerful functional programming paradigm, which can replace many for loops
- The three most important OO systems: S3, S4, and R6
- The tidy eval toolkit for metaprogramming, which allows you to manipulate code and control evaluation
- Effective debugging techniques that you can deploy, regardless of how your code is run
- How to find and remove performance bottlenecks
The second edition is a comprehensive update:
- New foundational chapters: "Names and values," "Control flow," and "Conditions"
- Comprehensive coverage of object-oriented programming, with chapters on S3, S4, R6, and how to choose between them
- Much deeper coverage of metaprogramming, including the new tidy evaluation framework
- Use of new packages like rlang (http://rlang.r-lib.org), which provides a clean interface to low-level operations, and purrr (http://purrr.tidyverse.org/) for functional programming
- Use of color in code chunks and figures
Hadley Wickham is Chief Scientist at RStudio, an Adjunct Professor at Stanford University and the University of Auckland, and a member of the R Foundation. He is the lead developer of the tidyverse, a collection of R packages, including ggplot2 and dplyr, designed to support data science. He is also the author of R for Data Science (with Garrett Grolemund), R Packages, and ggplot2: Elegant Graphics for Data Analysis.
The investigation of the role of mechanical and mechano-chemical interactions in cellular processes and tissue development is a rapidly growing research field in the life sciences and in biomedical engineering. Quantitative understanding of this important area in the study of biological systems requires the development of adequate mathematical models for the simulation of the evolution of these systems in space and time. Since expertise in various fields is necessary, this calls for a multidisciplinary approach. This edited volume connects basic physical, biological, and physiological concepts to methods for the mathematical modeling of various materials by pursuing a multiscale approach, from subcellular to organ and system level. Written by active researchers, each chapter provides a detailed introduction to a given field, illustrates various approaches to creating models, and explores recent advances and future research perspectives. Topics covered include molecular dynamics simulations of lipid membranes, phenomenological continuum mechanics of tissue growth, and translational cardiovascular modeling. Modeling Biomaterials will be a valuable resource for both non-specialists and experienced researchers from various domains of science, such as applied mathematics, biophysics, computational physiology, and medicine.
You may like...
- High Performance Computing on Vector… by Sabine Roller, Katharina Benkert, … (Hardcover), R2,672 (Discovery Miles 26 720)
- Biological Signals Classification and… by Kamran Kiasaleh (Hardcover)
- French and English Philosophers… by Jean Jacques Rousseau, Voltaire, … (Hardcover), R984 (Discovery Miles 9 840)