Grimmett, Geoffrey: Percolation and disordered systems. - Kesten, Harry: Aspects of first passage percolation.
The intensive use of automatic data acquisition systems and of cloud computing for process monitoring have led to an increased occurrence of industrial processes that utilize statistical process control and capability analysis. These analyses are performed almost exclusively with multivariate methodologies. The aim of this Brief is to present the most important MSQC techniques developed in the R language. The book is divided into two parts. The first part contains the basic R elements, an introduction to statistical procedures, and the main aspects of Statistical Quality Control (SQC). The second part covers the construction of multivariate control charts and the calculation of multivariate capability indices.
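Multivariate control charts of the kind the Brief covers are typically built on the Hotelling T² statistic, which measures each observation's distance from the process mean in covariance-scaled units. As a hedged, language-neutral sketch (in Python rather than the book's R, with randomly generated in-control data), the statistic can be computed as:

```python
import numpy as np

def hotelling_t2(X):
    """Hotelling T^2 statistic for each row of X (n observations x p variables),
    measured against the sample mean and sample covariance."""
    xbar = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - xbar
    # quadratic form d_i^T S^{-1} d_i for every observation i
    return np.einsum("ij,jk,ik->i", d, S_inv, d)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # 50 illustrative in-control observations, 3 variables
t2 = hotelling_t2(X)
```

Points whose T² value exceeds a control limit (derived from an F distribution) are flagged as out of control; the limit calculation is omitted here for brevity.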
This Handbook gives a comprehensive snapshot of a field at the intersection of mathematics and computer science, with applications in physics, engineering and education. It reviews 67 software systems and offers 100 pages on applications in physics, mathematics, computer science, engineering, chemistry and education.
Automatic Graph Drawing is concerned with the layout of relational structures as they occur in Computer Science (Database Design, Data Mining, Web Mining), Bioinformatics (Metabolic Networks), Business Informatics (Organization Diagrams, Event-Driven Process Chains), or the Social Sciences (Social Networks). In mathematical terms, such relational structures are modeled as graphs or more general objects such as hypergraphs, clustered graphs, or compound graphs. A variety of layout algorithms that are based on graph-theoretical foundations have been developed in the last two decades and implemented in software systems. After an introduction to the subject area and a concise treatment of the technical foundations for the subsequent chapters, this book features 14 chapters on state-of-the-art graph drawing software systems, ranging from general "toolboxes" to customized software for various applications. These chapters, written by leading experts, follow a uniform scheme and can be read independently of each other.
Every advance in computer architecture and software tempts statisticians to tackle numerically harder problems. To do so intelligently requires a good working knowledge of numerical analysis. This book equips students to craft their own software and to understand the advantages and disadvantages of different numerical methods. Issues of numerical stability, accurate approximation, computational complexity, and mathematical modeling share the limelight in a broad yet rigorous overview of those parts of numerical analysis most relevant to statisticians. In this second edition, the material on optimization has been completely rewritten. There is now an entire chapter on the MM algorithm in addition to more comprehensive treatments of constrained optimization, penalty and barrier methods, and model selection via the lasso. There is also new material on the Cholesky decomposition, Gram-Schmidt orthogonalization, the QR decomposition, the singular value decomposition, and reproducing kernel Hilbert spaces. The discussions of the bootstrap, permutation testing, independent Monte Carlo, and hidden Markov chains are updated, and a new chapter on advanced MCMC topics introduces students to Markov random fields, reversible jump MCMC, and convergence analysis in Gibbs sampling. Numerical Analysis for Statisticians can serve as a graduate text for a course surveying computational statistics. With a careful selection of topics and appropriate supplementation, it can be used at the undergraduate level. It contains enough material for a graduate course on optimization theory. Because many chapters are nearly self-contained, professional statisticians will also find the book useful as a reference.
Biological and biomedical studies have entered a new era over the past two decades thanks to the wide use of mathematical models and computational approaches. A boom in computational biology, which was merely a theoretician's fantasy twenty years ago, has become a reality. Enthusiasm for computational biology and theoretical approaches is evidenced in articles hailing the arrival of what are variously called quantitative biology, bioinformatics, theoretical biology, and systems biology. New technologies and data resources in genetics, such as the International HapMap project, enable large-scale studies, such as genome-wide association studies, which could potentially identify most common genetic variants, as well as rare variants of the human DNA, that may alter an individual's susceptibility to disease and the response to medical treatment. Meanwhile, multi-electrode recording from behaving animals makes it feasible to control an animal's mental activity, which could potentially lead to the development of useful brain-machine interfaces. Embracing the sheer volume of genetic, genomic, and other types of data, an essential approach is, first of all, to avoid drowning the true signal in the data. The theoretical approach to biology has emerged as a powerful and stimulating research paradigm in biological studies, which in turn leads to a new research paradigm in mathematics, physics, and computer science and moves forward with the interplay among experimental studies and outcomes, simulation studies, and theoretical investigations.
Hurricanes are nature's most destructive storms, and they are becoming more powerful as the globe warms. Hurricane Climatology explains how to analyze and model hurricane data to better understand and predict present and future hurricane activity. It uses the open-source and now widely used R software for statistical computing to create a tutorial-style manual for independent study, review, and reference. The text is written around code that, when copied, will reproduce the graphs, tables, and maps. The approach differs from other books that use R: it focuses on a single topic and explains how to make use of R to better understand that topic. The book is organized into two parts, the first of which provides material on software, statistics, and data. The second part presents methods and models used in hurricane climate research.
Bayesian Networks in R with Applications in Systems Biology is unique in that it introduces the reader to the essential concepts in Bayesian network modeling and inference in conjunction with examples in the open-source statistical environment R. The level of sophistication increases gradually across the chapters, with exercises and solutions that support hands-on experimentation with the theory and concepts. The application focus is systems biology, with emphasis on modeling pathways and signaling mechanisms from high-throughput molecular data. Bayesian networks have proven to be especially useful abstractions in this regard. Their usefulness is exemplified by their ability to discover new associations in addition to validating known ones across the molecules of interest. It is also expected that the prevalence of publicly available high-throughput biological data sets may encourage readers to investigate novel paradigms using the approaches presented in the book.
This book presents accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Regardless of the software system used, the book describes and gives examples of the use of modern computer software for numerical linear algebra. It begins with a discussion of the basics of numerical computations, and then describes the relevant properties of matrix inverses, factorisations, matrix and vector norms, and other topics in linear algebra. The book is essentially self-contained, with the topics addressed constituting the essential material for an introductory course in statistical computing. Numerous exercises allow the text to be used for a first course in statistical computing or as a supplementary text for various courses that emphasise computations.
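The factor-then-solve approach such a course emphasises — factoring a matrix and solving triangular systems rather than forming an explicit inverse — can be illustrated briefly. This is a Python sketch with a randomly generated test matrix, not code from the book:

```python
import numpy as np

# Solve A x = b via a Cholesky factorization A = L L^T.
# This is cheaper and numerically safer than computing inv(A) @ b.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
A = M @ M.T + 4 * np.eye(4)   # symmetric positive definite test matrix
b = rng.normal(size=4)

L = np.linalg.cholesky(A)     # lower triangular factor
y = np.linalg.solve(L, b)     # forward substitution: L y = b
x = np.linalg.solve(L.T, y)   # back substitution:   L^T x = y
residual = float(np.linalg.norm(A @ x - b))
```

The same two-triangular-solve pattern applies to LU and QR factorizations, which handle general (non-symmetric) systems.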
Molchanov, S.: Lectures on random media. - Zeitouni, Ofer: Random walks in random environment. - den Hollander, Frank: Random polymers
CUDA is now the dominant language used for programming GPUs, one of the most exciting hardware developments of recent decades. With CUDA, you can use a desktop PC for work that would have previously required a large cluster of PCs or access to an HPC facility. As a result, CUDA is increasingly important in scientific and technical computing across the whole STEM community, from medical physics and financial modelling to big data applications and beyond. This unique book on CUDA draws on the author's passion for and long experience of developing and using computers to acquire and analyse scientific data. The result is an innovative text featuring a much richer set of examples than found in any other comparable book on GPU computing. Much attention has been paid to the C++ coding style, which is compact, elegant and efficient. A code base of examples and supporting material is available online, which readers can build on for their own projects.
Many of the commonly used methods for modeling and fitting psychophysical data are special cases of statistical procedures of great power and generality, notably the Generalized Linear Model (GLM). This book illustrates how to fit data from a variety of psychophysical paradigms using modern statistical methods and the statistical language R. The paradigms include signal detection theory, psychometric function fitting, classification images and more. In two chapters, recently developed methods for scaling appearance, maximum likelihood difference scaling and maximum likelihood conjoint measurement, are examined. The authors also consider the application of mixed-effects models to psychophysical data. R is an open-source programming language that is widely used by statisticians and is seeing enormous growth in its application to data in all fields. It is interactive, containing many powerful facilities for optimization, model evaluation, model selection, and graphical display of data. The reader who fits data in R can readily make use of these methods. The researcher who uses R to fit and model his data has access to the most recently developed statistical methods. This book does not assume that the reader is familiar with R; a little experience with any programming language is all that is needed to appreciate it. There are many examples of R in the text, and the source code for all examples is available in the R package MPDiR, available through R. Laurence T. Maloney is Professor of Psychology and Neural Science at New York University. His research focuses on applications of mathematical models to perception, motor control and decision making.
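Fitting a psychometric function as a GLM reduces to binomial logistic regression, conventionally fitted by iteratively reweighted least squares (IRLS). The following is a hedged sketch (Python rather than the book's R, with made-up trial counts, not data from the book):

```python
import numpy as np

# Illustrative data: stimulus intensities, trials per intensity, "yes" counts.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
n = np.full(5, 40.0)
k = np.array([2.0, 8.0, 21.0, 33.0, 38.0])

X = np.column_stack([np.ones_like(x), x])   # design matrix: intercept + slope
beta = np.zeros(2)
for _ in range(25):                         # IRLS / Fisher scoring iterations
    p = 1.0 / (1.0 + np.exp(-X @ beta))    # logistic link
    W = n * p * (1 - p)                    # binomial GLM weights
    z = X @ beta + (k - n * p) / W         # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

a_hat, b_hat = beta
threshold = -a_hat / b_hat   # intensity at which P("yes") = 0.5
```

In practice one would use a packaged GLM fitter (in R, `glm(cbind(k, n - k) ~ x, family = binomial)`); the loop above just makes the underlying algorithm visible.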
Algorithms for Computer Algebra is the first comprehensive textbook to be published on the topic of computational symbolic mathematics. The book first develops the foundational material from modern algebra that is required for subsequent topics. It then presents a thorough development of modern computational algorithms for such problems as multivariate polynomial arithmetic and greatest common divisor calculations, factorization of multivariate polynomials, symbolic solution of linear and polynomial systems of equations, and analytic integration of elementary functions. Numerous examples are integrated into the text as an aid to understanding the mathematical development. The algorithms developed for each topic are presented in a Pascal-like computer language. An extensive set of exercises is presented at the end of each chapter. Algorithms for Computer Algebra is suitable for use as a textbook for a course on algebraic algorithms at the third-year, fourth-year, or graduate level. Although the mathematical development uses concepts from modern algebra, the book is self-contained in the sense that a one-term undergraduate course introducing students to rings and fields is the only prerequisite assumed. The book also serves well as a supplementary textbook for a traditional modern algebra course, by presenting concrete applications to motivate the understanding of the theory of rings and fields.
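The greatest common divisor calculations mentioned above rest on the Euclidean algorithm extended to polynomials. A minimal numerical sketch (in Python with float coefficients, rather than the book's Pascal-like notation, and ignoring the coefficient-growth issues the book treats properly):

```python
def poly_mod(a, b):
    """Remainder of polynomial division a mod b.
    Coefficients are listed highest degree first."""
    a = list(a)
    while len(a) >= len(b):
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]       # cancel the leading term of a
        a.pop(0)                   # leading coefficient is now zero
    while a and abs(a[0]) < 1e-9:  # strip numerical leading zeros
        a.pop(0)
    return a

def poly_gcd(a, b):
    """Euclidean algorithm; returns a monic greatest common divisor."""
    while b:
        a, b = b, poly_mod(a, b)
    return [c / a[0] for c in a]

# (x-1)(x+2)(x-3) and (x-1)(x+2) share the factor (x-1)(x+2) = x^2 + x - 2
g = poly_gcd([1.0, -2.0, -5.0, 6.0], [1.0, 1.0, -2.0])
```

A real computer algebra system works over exact coefficient domains and uses more sophisticated variants (e.g. subresultant or modular GCDs) to control intermediate expression swell.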
Stata is the most flexible and extensible data analysis package available from a commercial vendor. R is a similarly flexible free and open source package for data analysis, with over 3,000 add-on packages available. This book shows you how to extend the power of Stata through the use of R. It introduces R using Stata terminology with which you are already familiar. It steps through more than 30 programs written in both languages, comparing and contrasting the two packages' different approaches. When finished, you will be able to use R in conjunction with Stata, or separately, to import data, manage and transform it, create publication quality graphics, and perform basic statistical analyses. A glossary defines over 50 R terms using Stata jargon and again using more formal R terminology. The table of contents and index allow you to find equivalent R functions by looking up Stata commands and vice versa. The example programs and practice datasets for both R and Stata are available for download.
Understanding Statistics in Psychology with SPSS, eighth edition, offers students a trusted, straightforward, and engaging way of learning to do statistical analyses confidently using SPSS. Comprehensive and practical, the text is organised into short, accessible chapters, making it the ideal text for undergraduate psychology students needing to get to grips with statistics in class or independently. Clear diagrams and full-colour screenshots from SPSS make the text suitable for beginners, while the broad coverage of topics ensures that students can continue to use it as they progress to more advanced techniques. Key features: * Combines coverage of statistics with full guidance on how to use SPSS to analyse data. * Suitable for use with all versions of SPSS. * Examples from a wide range of real psychological studies illustrate how statistical techniques are used in practice. * Includes clear and detailed guidance on choosing tests, interpreting findings and reporting and writing up research. * Student-focused pedagogical approach, including: key concept boxes detailing important terms; focus-on sections exploring complex topics in greater depth; and explaining-statistics sections clarifying important statistical concepts. Dennis Howitt and Duncan Cramer are with Loughborough University.
Statistics is of ever-increasing importance in Science and Technology and this book presents the essentials of the subject in a form suitable either as the basis of a course of lectures or to be read and/or used on its own. It assumes very little in the way of mathematical knowledge - just the ability to substitute numerically in a few simple formulae. However, some mathematical proofs are outlined or given in full to illustrate the derivation of the subject; these can be omitted without loss of understanding. The book does aim at making clear the scope and nature of those essential tests and methods that a scientist or technologist is likely to need; to this end each chapter has been divided into sections with their own subheadings and some effort has been made to make the text unambiguous (if any reader finds a misleading point anywhere I hope he will write to me about it). Also with this aim in view, the equality of probability to proportion of population is stated early, then the normal distribution and the taking of samples is discussed. This occupies the first five chapters. With the principles of these chapters understood, the student can immediately learn the significance tests of Chapter 6 and, if he needs it, the analysis of variance of Chapter 7. For some scientists this will be most of what they need. However, they will be in a position to read and/or use the remaining chapters without undue difficulty.
Today, certain computer software systems exist which surpass the computational ability of researchers when their mathematical techniques are applied to many areas of science and engineering. These computer systems can perform a large portion of the calculations seen in mathematical analysis. Despite this massive power, thousands of people use these systems as a routine resource for everyday calculations. These software programs are commonly called "Computer Algebra" systems. They have names such as MACSYMA, MAPLE, muMATH, REDUCE and SMP. They are receiving credit as a computational aid with increasing regularity in articles in the scientific and engineering literature. When most people think about computers and scientific research these days, they imagine a machine grinding away, processing numbers arithmetically. It is not generally realized that, for a number of years, computers have been performing non-numeric computations. This means, for example, that one inputs an equation and obtains a closed-form analytic answer. It is these Computer Algebra systems, their capabilities, and applications which are the subject of the papers in this volume.
Biological systems are extremely complex and have emergent properties that cannot be explained or even predicted by studying their individual parts in isolation. The reductionist approach, although successful in the early days of molecular biology, underestimates this complexity. As the amount of available data grows, so it will become increasingly important to be able to analyse and integrate these large data sets. This book introduces novel approaches and solutions to the Big Data problem in biomedicine, and presents new techniques in the field of graph theory for handling and processing multi-type large data sets. By discussing cutting-edge problems and techniques, researchers from a wide range of fields will be able to gain insights for exploiting big heterogeneous data in the life sciences through the concept of 'network of networks'.
This book helps readers understand the mathematics of machine learning and apply it in different situations. It is divided into two basic parts, the first of which introduces readers to the theory of linear algebra, probability, and data distributions and their applications to machine learning. It also includes a detailed introduction to the concepts and constraints of machine learning and what is involved in designing a learning algorithm. This part helps readers understand the mathematical and statistical aspects of machine learning. In turn, the second part discusses the algorithms used in supervised and unsupervised learning. It works out each learning algorithm mathematically and encodes it in R to produce customized learning applications. In the process, it touches upon the specifics of each algorithm and the science behind its formulation. The book includes a wealth of worked-out examples along with R code. It explains the code for each algorithm, and readers can modify the code to suit their own needs. The book will be of interest to all researchers who intend to use R for machine learning, and those who are interested in the practical aspects of implementing learning algorithms for data analysis. Further, it will be particularly useful and informative for anyone who has struggled to relate the concepts of mathematics and statistics to machine learning.
At the terminal seated, the answering tone: pond and temple bell. Today, as in the past, statistical method is profoundly affected by resources for numerical calculation and visual display. The main line of development of statistical methodology during the first half of this century was conditioned by, and attuned to, the mechanical desk calculator. Now statisticians may use electronic computers of various kinds in various modes, and the character of statistical science has changed accordingly. Some, but not all, modes of modern computation have a flexibility and immediacy reminiscent of the desk calculator. They preserve the virtues of the desk calculator, while immensely exceeding its scope. Prominent among these is the computer language and conversational computing system known by the initials APL. This book is addressed to statisticians. Its first aim is to interest them in using APL in their work - for statistical analysis of data, for numerical support of theoretical studies, for simulation of random processes. In Part A the language is described and illustrated with short examples of statistical calculations. Part B, presenting some more extended examples of statistical analysis of data, has also the further aim of suggesting the interplay of computing and theory that must surely henceforth be typical of the development of statistical science.
S is a high-level language for manipulating, analysing and displaying data. It forms the basis of two highly acclaimed and widely used data analysis software systems, the commercial S-PLUS® and the Open Source R. This book provides an in-depth guide to writing software in the S language under either or both of those systems. It is intended for readers who have some acquaintance with the S language and want to know how to use it more effectively, for example to build re-usable tools for streamlining routine data analysis or to implement new statistical methods. One of the outstanding strengths of the S language is the ease with which it can be extended by users. S is a functional language, and functions written by users are first-class objects treated in the same way as functions provided by the system. S code is eminently readable and so a good way to document precisely what algorithms were used, and as many of the implementations are themselves written in S, they can be studied as models and to understand their subtleties. The current implementations also provide easy ways for S functions to call compiled code written in C, Fortran and similar languages; this is documented here in depth. Increasingly S is being used for statistical or graphical analysis within larger software systems or for whole vertical-market applications. The interface facilities are most developed on Windows® and these are covered with worked examples. The authors have written the widely used Modern Applied Statistics with S-PLUS, now in its third edition, and several software libraries that enhance S-PLUS and R; these and the examples used in both books are available on the Internet. Dr. W.N. Venables is a senior Statistician with the CSIRO/CMIS Environmetrics Project in Australia, having been at the Department of Statistics, University of Adelaide for many years previously. Professor B.D. Ripley holds the Chair of Applied Statistics at the University of Oxford, and is the author of four other books on spatial statistics, simulation, pattern recognition and neural networks. Both authors are known and respected throughout the international S and R communities, for their books, workshops, short courses, freely available software and through their extensive contributions to the S-news and R mailing lists.
Keith M. Ponting, Speech Research Unit, DERA Malvern, St. Andrew's Road, Great Malvern, Worcs. WR14 3PS, UK; email: ponting
Metrics are a hot topic. Executive leadership, boards of directors, management, and customers are all asking for data-based decisions. As a result, many managers, professionals, and change agents are asked to develop metrics, but have no clear idea of how to produce meaningful ones. Wouldn't it be great to have a simple explanation of how to collect, analyze, report, and use measurements to improve your organization? Metrics: How to Improve Key Business Results provides that explanation and the tools you'll need to make your organization more effective. Not only does the book explain the why of metrics, but it walks you through a step-by-step process for creating a report card that provides a clear picture of organizational health and how well you satisfy customer needs. Metrics will help you to measure the right things, the right way - the first time. No wasted effort, no chasing data. The report card provides a simple tool for viewing the health of your organization, from the outside in. You will learn how to measure the key components of the report card and thereby improve real measures of business success, like repeat customers, customer loyalty, and word-of-mouth advertising. This book: * Provides a step-by-step guide for building an organizational effectiveness report card * Takes you from identifying key services and products and using metrics, to determining business strategy * Provides examples of how to identify, collect, analyze, and report metrics that will be immediately useful for improving all aspects of the enterprise, including IT. What you'll learn: * Understand the difference between data, measures, information, and metrics * Identify root performance questions to ensure you build the right metrics * Develop meaningful and accurate metrics using concrete, easy-to-follow instructions * Avoid the high risks that come with collecting, analyzing, reporting, and using complex data * Formulate practical answers to data-based questions * Select and use the proper tools for creating, implementing, and using metrics * Learn one of the most powerful methods yet invented for improving organizational results. Who this book is for: Metrics: How to Improve Key Business Results was written for senior managers who need to improve key results. Equally, the book is for the department heads, middle managers, analysts, IT professionals, and change agents responsible for collecting, analyzing, and reporting metrics. Finally, it's for those who have to chase data and find meaningful answers to the interesting questions executives ponder. Table of Contents: * Introduction: Who, What, Where, When, Why, and How You Use Metrics * Establishing a Common Language * Where to Begin: Planning a Good Metric * Using Metrics as Indicators * Using the Answer Key * Start with Effectiveness * Triangulation: Essential to Creating Effective Metrics * Expectations: How to View Data in a Meaningful Way * Creating and Interpreting the Metrics Report Card * Final Product: The Metrics Report Card * Employing Advanced Metrics * Creating the Service Catalog * Establishing Standards and Benchmarks * Respecting the Power of Metrics * Avoiding the Research Trap * Embracing Your Organization's Uniqueness * Appendix: Metrics Tools to Use and Useful Resources
Computer-Aided Control Systems Design: Practical Applications Using MATLAB® and Simulink® supplies a solid foundation in applied control to help you bridge the gap between control theory and its real-world applications. Working from basic principles, the book delves into control systems design through the practical examples of the ALSTOM gasifier system in power stations and underwater robotic vehicles in the marine industry. It also shows how powerful software such as MATLAB® and Simulink® can aid in control systems design. Make Control Engineering Come Alive with Computer-Aided Software: Emphasizing key aspects of the design process, the book covers the dynamic modeling, control structure design, controller design, implementation, and testing of control systems. It begins with the essential ideas of applied control engineering and a hands-on introduction to MATLAB and Simulink. It then discusses the analysis, model order reduction, and controller design for a power plant and the modeling, simulation, and control of a remotely operated vehicle (ROV) for pipeline tracking. The author explains how to obtain the ROV model and verify it by using computational fluid dynamic software before designing and implementing the control system. In addition, the book details the nonlinear subsystem modeling and linearization of the ROV at vertical plane equilibrium points. Throughout, the author delineates areas for further study. Appendices provide additional information on various simulation models and their results. Learn How to Perform Simulations on Real Industry Systems: A step-by-step guide to computer-aided applied control design, this book supplies the knowledge to help you deal with control problems in industry. It is a valuable reference for anyone who wants a better understanding of the theory and practice of basic control systems design, analysis, and implementation.
Mathematics plays an important role in many scientific and engineering disciplines. This book deals with the numerical solution of differential equations, a very important branch of mathematics. Our aim is to give a practical and theoretical account of how to solve a large variety of differential equations, comprising ordinary differential equations, initial value problems and boundary value problems, differential algebraic equations, partial differential equations and delay differential equations. The solution of differential equations using R is the main focus of this book. It is therefore intended for the practitioner, the student and the scientist who wants to know how to use R for solving differential equations. However, it has been our goal that non-mathematicians should at least understand the basics of the methods, while obtaining entrance into the relevant literature that provides more mathematical background. Therefore, each chapter that deals with R examples is preceded by a chapter in which the theory behind the numerical methods being used is introduced. In the sections that deal with the use of R for solving differential equations, we have taken examples from a variety of disciplines, including biology, chemistry, physics, and pharmacokinetics. Many examples are well-known test examples, used frequently in the field of numerical analysis.
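Initial value problems of the kind discussed here are classically handled by Runge-Kutta methods. A minimal sketch (in Python rather than the book's R, using the logistic equation as a test problem because it has a closed-form solution to check against):

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta integration of dy/dt = f(t, y)
    from t0 to t1 in n fixed steps; returns y(t1)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Logistic growth dy/dt = r*y*(1 - y/K), y(0) = 1: a standard test problem.
r, K = 1.0, 10.0
y_num = rk4(lambda t, y: r * y * (1 - y / K), y0=1.0, t0=0.0, t1=5.0, n=200)
y_exact = K / (1 + (K - 1) * math.exp(-r * 5.0))   # closed-form solution
```

Production solvers (such as those in R's deSolve package) add adaptive step-size control and implicit methods for stiff problems, which this fixed-step sketch omits.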