"Functional and Phylogenetic Ecology in R" is designed to teach readers to use R for phylogenetic and functional trait analyses. Over the past decade, a dizzying array of tools and methods were generated to incorporate phylogenetic and functional information into traditional ecological analyses. Increasingly these tools are implemented in R, thus greatly expanding their impact. Researchers getting started in R can use this volume as a step-by-step entryway into phylogenetic and functional analyses for ecology in R. More advanced users will be able to use this volume as a quick reference to understand particular analyses. The volume begins with an introduction to the R environment and handling relevant data in R. Chapters then cover phylogenetic and functional metrics of biodiversity; null modeling and randomizations for phylogenetic and functional trait analyses; integrating phylogenetic and functional trait information; and interfacing the R environment with a popular C-based program. This book presents a unique approach through its focus on ecological analyses and not macroevolutionary analyses. The author provides his own code, so that the reader is guided through the computational steps to calculate the desired metrics. This guided approach simplifies the work of determining which package to use for any given analysis. Example datasets are shared to help readers practice, and readers can then quickly turn to their own datasets.
Intended for both researchers and practitioners, this book will be a valuable resource for studying and applying recent robust statistical methods. It contains up-to-date research results in the theory of robust statistics, treats computational aspects and algorithms, and shows interesting new applications.
Looking back at the years that have passed since the realization of the very first electronic, multi-purpose computers, one observes a tremendous growth in hardware and software performance. Today, researchers and engineers have access to computing power and software that can solve numerical problems which are not fully understood in terms of existing mathematical theory. Thus, computational sciences must in many respects be viewed as experimental disciplines. As a consequence, there is a demand for high-quality, flexible software that allows, and even encourages, experimentation with alternative numerical strategies and mathematical models. Extensibility is then a key issue; the software must provide an efficient environment for incorporation of new methods and models that will be required in future problem scenarios. The development of such flexible software is a challenging and expensive task. One way to achieve these goals is to invest much work in the design and implementation of generic software tools which can be used in a wide range of application fields. In order to provide a forum where researchers could present and discuss their contributions to the described development, an International Workshop on Modern Software Tools for Scientific Computing was arranged in Oslo, Norway, September 16-18, 1996. This workshop, informally referred to as SciTools '96, was a collaboration between SINTEF Applied Mathematics and the Departments of Informatics and Mathematics at the University of Oslo.
Separation of signal from noise is the most fundamental problem in data analysis, arising in fields such as signal processing, econometrics, actuarial science, and geostatistics. This book introduces the local regression method in univariate and multivariate settings, with extensions to local likelihood and density estimation. Practical information is also included on how to implement these methods in the programs S-PLUS and LOCFIT.
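The core idea, fitting a smooth signal locally while letting the noise average out, can be sketched with base R's loess(); the book's own examples use S-PLUS and LOCFIT, and the simulated data below are purely illustrative.

```r
## Separating signal from noise by local regression: a sketch with
## base R's loess() on simulated data.
set.seed(1)
x <- seq(0, 10, length.out = 200)
y <- sin(x) + rnorm(200, sd = 0.3)   # noisy signal

fit <- loess(y ~ x, span = 0.3)      # local regression fit
plot(x, y, col = "grey", pch = 16)
lines(x, predict(fit), lwd = 2)      # recovered smooth signal
```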
The advent of fast and sophisticated computer graphics has brought dynamic and interactive images under the control of professional mathematicians and mathematics teachers. This volume in the NATO Special Programme on Advanced Educational Technology takes a comprehensive and critical look at how the computer can support the use of visual images in mathematical problem solving. The contributions are written by researchers and teachers from a variety of disciplines including computer science, mathematics, mathematics education, psychology, and design. Some focus on the use of external visual images and others on the development of individual mental imagery. The book is the first collected volume in a research area that is developing rapidly, and the authors pose some challenging new questions.
A unique textbook for an undergraduate course on mathematical modeling, Differential Equations with MATLAB: Exploration, Applications, and Theory provides students with an understanding of the practical and theoretical aspects of mathematical models involving ordinary and partial differential equations (ODEs and PDEs). The text presents a unifying picture inherent to the study and analysis of more than 20 distinct models spanning disciplines such as physics, engineering, and finance. The first part of the book presents systems of linear ODEs. The text develops mathematical models from ten disparate fields, including pharmacokinetics, chemistry, classical mechanics, neural networks, physiology, and electrical circuits. Focusing on linear PDEs, the second part covers PDEs that arise in the mathematical modeling of phenomena in ten other areas, including heat conduction, wave propagation, fluid flow through fissured rocks, pattern formation, and financial mathematics. The authors engage students by posing questions of all types throughout, including verifying details, proving conjectures of actual results, analyzing broad strokes that occur within the development of the theory, and applying the theory to specific models. The authors' accessible style encourages students to actively work through the material and answer these questions. In addition, the extensive use of MATLAB(R) GUIs allows students to discover patterns and make conjectures.
Dealing with methods for sampling from posterior distributions and how to compute posterior quantities of interest using Markov chain Monte Carlo (MCMC) samples, this book addresses such topics as improving simulation accuracy, marginal posterior density estimation, estimation of normalizing constants, constrained parameter problems, highest posterior density interval calculations, computation of posterior modes, and posterior computations for proportional hazards models and Dirichlet process models. The authors also discuss model comparisons, including both nested and non-nested models, marginal likelihood methods, ratios of normalizing constants, Bayes factors, the Savage-Dickey density ratio, Stochastic Search Variable Selection, Bayesian Model Averaging, the reversible jump algorithm, and model adequacy using predictive and latent residual approaches. The book presents an equal mixture of theory and applications involving real data, and is intended as a graduate textbook or a reference book for a one-semester course at the advanced master's or Ph.D. level. It will also serve as a useful reference for applied or theoretical researchers as well as practitioners.
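At the heart of all these topics is the ability to draw from a posterior known only up to its normalizing constant. A minimal random-walk Metropolis sampler in R, with an invented standard-normal target, illustrates the basic mechanism the book builds on.

```r
## Minimal random-walk Metropolis sampler: draws from a target density
## known only up to a normalizing constant (illustrative target).
log_target <- function(theta) -0.5 * theta^2   # standard normal, unnormalized

n <- 10000
draws <- numeric(n)
theta <- 0
for (i in seq_len(n)) {
  prop <- theta + rnorm(1, sd = 1)             # symmetric proposal
  if (log(runif(1)) < log_target(prop) - log_target(theta))
    theta <- prop                              # accept; otherwise keep current value
  draws[i] <- theta
}
mean(draws); var(draws)                        # posterior summaries from MCMC output
```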
The intensive use of automatic data acquisition systems and of cloud computing for process monitoring has led to an increased number of industrial processes that utilize statistical process control and capability analysis. These analyses are performed almost exclusively with multivariate methodologies. The aim of this Brief is to present the most important MSQC techniques implemented in the R language. The book is divided into two parts. The first part contains the basic R elements, an introduction to statistical procedures, and the main aspects related to Statistical Quality Control (SQC). The second part covers the construction of multivariate control charts and the calculation of multivariate capability indices.
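The statistic underlying the most common multivariate control chart can be sketched in base R. The snippet below is illustrative only (the book develops full charting functions, whose names may differ) and uses simulated in-control data.

```r
## Sketch of the Hotelling T^2 statistic behind a multivariate
## control chart, in base R with simulated data.
set.seed(2)
X <- matrix(rnorm(50 * 3), ncol = 3)          # 50 in-control observations, 3 variables
xbar <- colMeans(X)
S <- cov(X)

T2 <- mahalanobis(X, center = xbar, cov = S)  # squared distance = T^2 per observation
UCL <- qchisq(0.9973, df = 3)                 # approximate chi-square control limit
which(T2 > UCL)                               # points signalling out-of-control
```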
Every advance in computer architecture and software tempts statisticians to tackle numerically harder problems. To do so intelligently requires a good working knowledge of numerical analysis. This book equips students to craft their own software and to understand the advantages and disadvantages of different numerical methods. Issues of numerical stability, accurate approximation, computational complexity, and mathematical modeling share the limelight in a broad yet rigorous overview of those parts of numerical analysis most relevant to statisticians. In this second edition, the material on optimization has been completely rewritten. There is now an entire chapter on the MM algorithm in addition to more comprehensive treatments of constrained optimization, penalty and barrier methods, and model selection via the lasso. There is also new material on the Cholesky decomposition, Gram-Schmidt orthogonalization, the QR decomposition, the singular value decomposition, and reproducing kernel Hilbert spaces. The discussions of the bootstrap, permutation testing, independent Monte Carlo, and hidden Markov chains are updated, and a new chapter on advanced MCMC topics introduces students to Markov random fields, reversible jump MCMC, and convergence analysis in Gibbs sampling. Numerical Analysis for Statisticians can serve as a graduate text for a course surveying computational statistics. With a careful selection of topics and appropriate supplementation, it can be used at the undergraduate level. It contains enough material for a graduate course on optimization theory. Because many chapters are nearly self-contained, professional statisticians will also find the book useful as a reference.
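One recurring lesson of the book, that the factorization you choose matters numerically, can be illustrated in R: least squares via the normal equations squares the condition number of the design matrix, while the QR route works on the matrix directly. The data below are simulated for illustration.

```r
## Two routes to the same least-squares fit, with different
## numerical stability properties.
set.seed(3)
X <- cbind(1, matrix(rnorm(100 * 2), ncol = 2))   # design matrix with intercept
beta <- c(1, 2, -1)
y <- X %*% beta + rnorm(100, sd = 0.1)

## Normal equations: forms X'X explicitly, squaring the condition number
solve(crossprod(X), crossprod(X, y))

## QR decomposition: operates on X directly, numerically preferable
qr.solve(X, y)
```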
Biological and biomedical studies have entered a new era over the past two decades thanks to the wide use of mathematical models and computational approaches. A booming of computational biology, which was sheer theoretician's fantasy twenty years ago, has become a reality. Obsession with computational biology and theoretical approaches is evidenced in articles hailing the arrival of what are variously called quantitative biology, bioinformatics, theoretical biology, and systems biology. New technologies and data resources in genetics, such as the International HapMap project, enable large-scale studies, such as genome-wide association studies, which could potentially identify most common genetic variants as well as rare variants of the human DNA that may alter an individual's susceptibility to disease and the response to medical treatment. Meanwhile, multi-electrode recording from behaving animals makes it feasible to control the animal's mental activity, which could potentially lead to the development of useful brain-machine interfaces. Embracing the sheer volume of genetic, genomic, and other types of data, an essential approach is, first of all, to avoid drowning the true signal in the data. It has been witnessed that the theoretical approach to biology has emerged as a powerful and stimulating research paradigm in biological studies, which in turn leads to a new research paradigm in mathematics, physics, and computer science and moves forward with the interplay among experimental studies and outcomes, simulation studies, and theoretical investigations.
This book presents accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Regardless of the software system used, it describes and gives examples of the use of modern computer software for numerical linear algebra. It begins with a discussion of the basics of numerical computations, and then describes the relevant properties of matrix inverses, factorisations, matrix and vector norms, and other topics in linear algebra. The book is essentially self-contained, with the topics addressed constituting the essential material for an introductory course in statistical computing. Numerous exercises allow the text to be used for a first course in statistical computing or as a supplementary text for various courses that emphasise computations.
Many of the commonly used methods for modeling and fitting psychophysical data are special cases of statistical procedures of great power and generality, notably the Generalized Linear Model (GLM). This book illustrates how to fit data from a variety of psychophysical paradigms using modern statistical methods and the statistical language R. The paradigms include signal detection theory, psychometric function fitting, classification images and more. In two chapters, recently developed methods for scaling appearance, maximum likelihood difference scaling and maximum likelihood conjoint measurement, are examined. The authors also consider the application of mixed-effects models to psychophysical data. R is an open-source programming language that is widely used by statisticians and is seeing enormous growth in its application to data in all fields. It is interactive, containing many powerful facilities for optimization, model evaluation, model selection, and graphical display of data. The reader who fits data in R can readily make use of these methods, and the researcher who uses R to fit and model data has access to the most recently developed statistical methods. This book does not assume that the reader is familiar with R; a little experience with any programming language is all that is needed to appreciate it. There are many examples of R in the text, and the source code for all examples is available in the R package MPDiR, available through R. Laurence T. Maloney is Professor of Psychology and Neural Science at New York University. His research focusses on applications of mathematical models to perception, motor control and decision making.
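The book's central observation, that a psychometric function is a GLM, fits in a few lines of R. The data below are simulated for illustration rather than drawn from the MPDiR package.

```r
## Fitting a psychometric function as a GLM: probit link for
## proportion correct as a function of stimulus level (simulated data).
set.seed(5)
intensity <- seq(-2, 2, length.out = 9)
n_trials  <- rep(40, 9)
n_correct <- rbinom(9, n_trials, pnorm(intensity))   # simulated observer

fit <- glm(cbind(n_correct, n_trials - n_correct) ~ intensity,
           family = binomial(link = "probit"))
coef(fit)   # location and slope of the psychometric function
```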
Algorithms for Computer Algebra is the first comprehensive textbook to be published on the topic of computational symbolic mathematics. The book first develops the foundational material from modern algebra that is required for subsequent topics. It then presents a thorough development of modern computational algorithms for such problems as multivariate polynomial arithmetic and greatest common divisor calculations, factorization of multivariate polynomials, symbolic solution of linear and polynomial systems of equations, and analytic integration of elementary functions. Numerous examples are integrated into the text as an aid to understanding the mathematical development. The algorithms developed for each topic are presented in a Pascal-like computer language. An extensive set of exercises is presented at the end of each chapter. Algorithms for Computer Algebra is suitable for use as a textbook for a course on algebraic algorithms at the third-year, fourth-year, or graduate level. Although the mathematical development uses concepts from modern algebra, the book is self-contained in the sense that a one-term undergraduate course introducing students to rings and fields is the only prerequisite assumed. The book also serves well as a supplementary textbook for a traditional modern algebra course, by presenting concrete applications to motivate the understanding of the theory of rings and fields.
Bayesian Networks in R with Applications in Systems Biology is unique in that it introduces the reader to the essential concepts of Bayesian network modeling and inference in conjunction with examples in the open-source statistical environment R. The level of sophistication increases gradually across the chapters, with exercises and solutions that support hands-on experimentation with the theory and concepts. The application focus is systems biology, with emphasis on modeling pathways and signaling mechanisms from high-throughput molecular data. Bayesian networks have proven to be especially useful abstractions in this regard, exemplified by their ability to discover new associations in addition to validating known ones across the molecules of interest. It is also expected that the prevalence of publicly available high-throughput biological data sets may encourage the audience to investigate novel paradigms using the approaches presented in the book.
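A flavor of the workflow, sketched with the bnlearn package (one widely used R package for Bayesian network modeling; its learning.test example data ship with the package): learn a structure, inspect the discovered associations, then fit the parameters.

```r
## Sketch of Bayesian network structure and parameter learning in R
## with bnlearn and its bundled learning.test dataset.
library(bnlearn)
data(learning.test)

dag <- hc(learning.test)               # hill-climbing search over structures
arcs(dag)                              # discovered associations between variables
fitted <- bn.fit(dag, learning.test)   # estimate conditional probability tables
```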
Recent advances in the understanding of star formation and evolution have been impressive, and aspects of that knowledge are explored in this volume. The black hole stellar endpoints are studied and geodesic motion is explored. The emission of gravitational waves is featured due to their very recent experimental discovery. The second aspect of the text is space exploration, which began 62 years ago with the Sputnik Earth satellite, followed by the landing on the Moon just 50 years ago. Since then Mars has been explored remotely, as have the outer planets by flybys and by probes which have escaped the solar system. The text explores many aspects of rocket travel. Finally, possibilities for interstellar travel are discussed. All these topics are treated in a unified way using the MATLAB App to combine text, figures, formulae and numeric input and output. In this way the reader may vary parameters and see the results in real time. That experience aids in building up an intuitive feel for the many specific problems given in this text.
- Accessible to a general audience with some background in statistics and computing
- Many examples and extended case studies
- Illustrations using R and RStudio
- A true blend of statistics and computer science -- not just a grab bag of topics from each
"MATLAB for Neuroscientists" serves as the only complete study manual and teaching resource for MATLAB, the globally accepted standard for scientific computing, in the neurosciences and psychology. This unique introduction can be used to learn the entire empirical and experimental process (including stimulus generation, experimental control, data collection, data analysis, modeling, and more), and the 2nd Edition continues to ensure that a wide variety of computational problems can be addressed in a single programming environment. This updated edition features additional material on the
creation of visual stimuli, advanced psychophysics, analysis of LFP
data, choice probabilities, synchrony, and advanced spectral
analysis. Users at a variety of levels-advanced undergraduates,
beginning graduate students, and researchers looking to modernize
their skills-will learn to design and implement their own
analytical tools, and gain the fluency required to meet the
computational needs of neuroscience practitioners.
Today, certain computer software systems exist which surpass the computational ability of researchers when their mathematical techniques are applied to many areas of science and engineering. These computer systems can perform a large portion of the calculations seen in mathematical analysis. Despite this massive power, thousands of people use these systems as a routine resource for everyday calculations. These software programs are commonly called "Computer Algebra" systems. They have names such as MACSYMA, MAPLE, muMATH, REDUCE and SMP. They are receiving credit as a computational aid with increasing regularity in articles in the scientific and engineering literature. When most people think about computers and scientific research these days, they imagine a machine grinding away, processing numbers arithmetically. It is not generally realized that, for a number of years, computers have been performing non-numeric computations. This means, for example, that one inputs an equation and obtains a closed form analytic answer. It is these Computer Algebra systems, their capabilities, and applications which are the subject of the papers in this volume.
At the terminal, seated, the answering tone: pond and temple bell. Today as in the past, statistical method is profoundly affected by resources for numerical calculation and visual display. The main line of development of statistical methodology during the first half of this century was conditioned by, and attuned to, the mechanical desk calculator. Now statisticians may use electronic computers of various kinds in various modes, and the character of statistical science has changed accordingly. Some, but not all, modes of modern computation have a flexibility and immediacy reminiscent of the desk calculator. They preserve the virtues of the desk calculator, while immensely exceeding its scope. Prominent among these is the computer language and conversational computing system known by the initials APL. This book is addressed to statisticians. Its first aim is to interest them in using APL in their work: for statistical analysis of data, for numerical support of theoretical studies, for simulation of random processes. In Part A the language is described and illustrated with short examples of statistical calculations. Part B, presenting some more extended examples of statistical analysis of data, has also the further aim of suggesting the interplay of computing and theory that must surely henceforth be typical of the development of statistical science.
S is a high-level language for manipulating, analysing and displaying data. It forms the basis of two highly acclaimed and widely used data analysis software systems, the commercial S-PLUS(r) and the Open Source R. This book provides an in-depth guide to writing software in the S language under either or both of those systems. It is intended for readers who have some acquaintance with the S language and want to know how to use it more effectively, for example to build re-usable tools for streamlining routine data analysis or to implement new statistical methods. One of the outstanding strengths of the S language is the ease with which it can be extended by users. S is a functional language, and functions written by users are first-class objects treated in the same way as functions provided by the system. S code is eminently readable and so a good way to document precisely what algorithms were used, and since many of the implementations are themselves written in S, they can be studied as models and used to understand their subtleties. The current implementations also provide easy ways for S functions to call compiled code written in C, Fortran and similar languages; this is documented here in depth. Increasingly S is being used for statistical or graphical analysis within larger software systems or for whole vertical-market applications. The interface facilities are most developed on Windows(r) and these are covered with worked examples. The authors have written the widely used Modern Applied Statistics with S-PLUS, now in its third edition, and several software libraries that enhance S-PLUS and R; these and the examples used in both books are available on the Internet. Dr. W.N. Venables is a senior Statistician with the CSIRO/CMIS Environmetrics Project in Australia, having been at the Department of Statistics, University of Adelaide for many years previously. Professor B.D. Ripley holds the Chair of Applied Statistics at the University of Oxford, and is the author of four other books on spatial statistics, simulation, pattern recognition and neural networks. Both authors are known and respected throughout the international S and R communities, for their books, workshops, short courses, freely available software and through their extensive contributions to the S-news and R mailing lists.
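That first-class status is easy to demonstrate in a few lines: a user-written function is passed to sapply() exactly as a system function would be, and the source of a system function can be inspected as ordinary S code. (The function below is a made-up example, not from the book.)

```r
## Functions as first-class objects: user-written functions are
## handled exactly like system-provided ones.
trim_mean <- function(x, trim = 0.1) mean(x, trim = trim)

## Pass the user function where any system function could go
sapply(list(a = rnorm(100), b = runif(100)), trim_mean)

## Inspect an implementation written in S itself, as the text suggests
body(var)   # the source of a system function is ordinary S code
```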
Nonlinear physics continues to be an area of dynamic modern research, with applications to physics, engineering, chemistry, mathematics, computer science, biology, medicine and economics. In this text extensive use is made of the Mathematica computer algebra system. No prior knowledge of Mathematica or programming is assumed. This book includes 33 experimental activities that are designed to deepen and broaden the reader's understanding of nonlinear physics. These activities are correlated with Part I, the theoretical framework of the text.
Rcpp is the glue that binds the power and versatility of R with the speed and efficiency of C++. With Rcpp, the transfer of data between R and C++ is nearly seamless, and high-performance statistical computing is finally accessible to most R users. Rcpp should be part of every statistician's toolbox. -- Michael Braun, MIT Sloan School of Management

"Seamless R and C++ Integration with Rcpp" is simply a wonderful book. For anyone who uses C/C++ and R, it is an indispensable resource. The writing is outstanding. A huge bonus is the section on applications. This section covers the matrix packages Armadillo and Eigen and the GNU Scientific Library as well as RInside, which enables you to use R inside C++. These applications are what most of us need to know to really do scientific programming with R and C++. I love this book. -- Robert McCulloch, University of Chicago Booth School of Business

Rcpp is now considered an essential package for anybody doing serious computational research using R. Dirk's book is an excellent companion and takes the reader from a gentle introduction to more advanced applications via numerous examples and efficiency enhancing gems. The book is packed with all you might have ever wanted to know about Rcpp, its cousins (RcppArmadillo, RcppEigen, etc.), modules, package development and sugar. Overall, this book is a must-have on your shelf. -- Soren Hojsgaard, Department of Mathematical Sciences, Aalborg University, Denmark

The Rcpp package represents a major leap forward for scientific computations with R. With very few lines of C++ code, one has R's data structures readily at hand for further computations in C++. Hence, high-level numerical programming can be made in C++ almost as easily as in R, but often with a substantial speed gain. Dirk is a crucial person in these developments, and his book takes the reader from the first fragile steps on to using the full Rcpp machinery. A very recommended book. -- Sanjog Misra, UCLA Anderson School of Management

"Seamless R and C++ Integration with Rcpp" provides the first comprehensive introduction to Rcpp. Rcpp has become the most widely-used language extension for R, and is deployed by over one hundred different CRAN and BioConductor packages. Rcpp permits users to pass scalars, vectors, matrices, lists or entire R objects back and forth between R and C++ with ease. This brings the depth of the R analysis framework together with the power, speed, and efficiency of C++. Dirk Eddelbuettel has been a contributor to CRAN for over a decade and maintains around twenty packages. He is the Debian/Ubuntu maintainer for R and other quantitative software, edits the CRAN Task Views for Finance and High-Performance Computing, is a co-founder of the annual R/Finance conference, and an editor of the Journal of Statistical Software. He holds a Ph.D. in Mathematical Economics from EHESS (Paris), and works in Chicago as a Senior Quantitative Analyst.
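A minimal example of what "seamless" means in practice: cppFunction() compiles a C++ function from an R session and exposes it as an ordinary R function (a working C++ toolchain is assumed; the function itself is a toy).

```r
## Compile a C++ function from R and call it like any R function.
library(Rcpp)

cppFunction('
double sumC(NumericVector x) {
  double total = 0;
  for (int i = 0; i < x.size(); ++i) total += x[i];
  return total;          // scalar returned to R as a numeric
}')

sumC(as.numeric(1:10))   # 55, computed in C++
```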
"Fast Compact Algorithms and Software for Spline Smoothing" investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.
Grimmett, Geoffrey: Percolation and disordered systems.- Kesten, Harry: Aspects of first passage percolation.