This book provides an introduction to multistate event history analysis. It is an extension of survival analysis, in which a single terminal event (endpoint) is considered and the time-to-event is studied. Multistate models focus on life histories or trajectories, conceptualized as sequences of states and sequences of transitions between states. Life histories are modeled as realizations of continuous-time Markov processes. The model parameters, the transition rates, are estimated from data on event counts and populations at risk, using the statistical theory of counting processes. The Comprehensive R Archive Network (CRAN) includes several packages for multistate modeling. This book is about "Biograph." The package is designed to (a) enhance exploratory analysis of life histories and (b) make multistate modeling accessible. The package incorporates utilities that connect to several packages for multistate modeling, including "survival," "eha," "Epi," "mvna," "etm," "mstate," "msm," and "TraMineR" for sequence analysis. The book is a hands-on presentation of "Biograph" and the packages listed. It is written from the perspective of the user. To help the user master the techniques and the software, a single data set is used to illustrate the methods and software. It is a subsample of the German Life History Survey, which was also used by Blossfeld and Rohwer in their popular textbook on event history modeling. Another data set, the Netherlands Family and Fertility Survey, is used to illustrate how "Biograph" can assist in answering questions on the life paths of cohorts and individuals. The book is suitable as a textbook for graduate courses on event history analysis and introductory courses on competing risks and multistate models. It may also be used as a self-study book. The R code used in the book is available online. Frans Willekens is affiliated with the Max Planck Institute for Demographic Research (MPIDR) in Rostock, Germany. He is Emeritus Professor of Demography at the University of Groningen, an Honorary Fellow of the Netherlands Interdisciplinary Demographic Institute (NIDI) in The Hague, and a Research Associate of the International Institute for Applied Systems Analysis (IIASA), Laxenburg, Austria. He is a member of the Royal Netherlands Academy of Arts and Sciences (KNAW). He has contributed to the modeling and simulation of life histories, mainly in the context of population forecasting.
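The estimation idea described above, transition rates computed from event counts and populations at risk, can be sketched without any of the R packages named. Below is a minimal Python illustration of the standard occurrence-exposure estimate of a transition intensity matrix; the three-state illness-death setup and all counts are invented for the example.

```python
import numpy as np

# Hypothetical data for a 3-state illness-death model:
# states 0 = healthy, 1 = ill, 2 = dead (absorbing).
# transitions[i, j] = number of observed i -> j transitions,
# exposure[i]      = total person-years spent at risk in state i.
transitions = np.array([[0, 40, 10],
                        [15, 0, 25],
                        [0, 0, 0]], dtype=float)
exposure = np.array([900.0, 300.0, 0.0])

# Occurrence-exposure estimate of the transition intensity matrix Q:
# off-diagonal q_ij = events_ij / exposure_i; diagonals make rows sum to 0.
Q = np.zeros_like(transitions)
at_risk = exposure > 0
Q[at_risk] = transitions[at_risk] / exposure[at_risk][:, None]
np.fill_diagonal(Q, -Q.sum(axis=1))

print(Q)
```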
The theme of the meeting was Statistical Methods for the Analysis of Large Data-Sets. In recent years there has been increasing interest in this subject; in fact, a huge quantity of information is often available, but standard statistical techniques are usually not well suited to managing this kind of data. The conference serves as an important meeting point for European researchers working on this topic, and a number of European statistical societies participated in the organization of the event.
"Functional and Phylogenetic Ecology in R" is designed to teach readers to use R for phylogenetic and functional trait analyses. Over the past decade, a dizzying array of tools and methods were generated to incorporate phylogenetic and functional information into traditional ecological analyses. Increasingly these tools are implemented in R, thus greatly expanding their impact. Researchers getting started in R can use this volume as a step-by-step entryway into phylogenetic and functional analyses for ecology in R. More advanced users will be able to use this volume as a quick reference to understand particular analyses. The volume begins with an introduction to the R environment and handling relevant data in R. Chapters then cover phylogenetic and functional metrics of biodiversity; null modeling and randomizations for phylogenetic and functional trait analyses; integrating phylogenetic and functional trait information; and interfacing the R environment with a popular C-based program. This book presents a unique approach through its focus on ecological analyses and not macroevolutionary analyses. The author provides his own code, so that the reader is guided through the computational steps to calculate the desired metrics. This guided approach simplifies the work of determining which package to use for any given analysis. Example datasets are shared to help readers practice, and readers can then quickly turn to their own datasets.
This book collects the proceedings of the 10th Workshop on Model-Oriented Design and Analysis (mODa). A model-oriented view on the design of experiments, which is the unifying theme of all mODa meetings, assumes some knowledge of the form of the data-generating process and naturally leads to the so-called optimum experimental design. Its theory and practice have since become important in many scientific and technological fields, ranging from optimal designs for dynamic models in pharmacological research, to designs for industrial experimentation, to designs for simulation experiments in environmental risk management, to name but a few. The methodology has become even more important in recent years because of the increased speed of scientific developments, the complexity of the systems currently under investigation and the mounting pressure on businesses, industries and scientific researchers to reduce product and process development times. This increased competition requires ever increasing efficiency in experimentation, thus necessitating new statistical designs. This book presents a rich collection of carefully selected contributions ranging from statistical methodology to emerging applications. It primarily aims to provide an overview of recent advances and challenges in the field, especially in the context of new formulations, methods and state-of-the-art algorithms. The topics included in this volume will be of interest to all scientists, engineers and statisticians who conduct experiments.
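For readers new to optimum experimental design, the flavour of the subject can be conveyed with the D-optimality criterion: candidate designs are compared through the determinant of the information matrix. The toy example below (a straight-line model with hypothetical design points) is illustrative only and is not drawn from the proceedings.

```python
import numpy as np

def d_criterion(x):
    """Log-determinant of the information matrix X'X for a straight-line model."""
    X = np.column_stack([np.ones_like(x), x])   # columns: intercept, slope
    return np.linalg.slogdet(X.T @ X)[1]

# Two candidate 4-point designs on [-1, 1].
spread_out = np.array([-1.0, -1.0, 1.0, 1.0])   # points at the extremes
clustered  = np.array([-0.2, 0.0, 0.1, 0.2])    # points near the centre

# D-optimal designs maximize log det(X'X); the spread-out design wins here.
print(d_criterion(spread_out), d_criterion(clustered))
```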
This small book addresses different kinds of data files, as commonly encountered in clinical research, and their data analysis with SPSS software. Some 15 years ago serious statistical analyses were conducted by specialist statisticians using mainframe computers. Nowadays, there is ready access to statistical computing using personal computers or laptops, and this practice has changed the boundaries between basic statistical methods that can be conveniently carried out on a pocket calculator and more advanced statistical methods that can only be executed on a computer. Clinical researchers currently perform basic statistics without professional help from a statistician, including t-tests and chi-square tests. With the help of user-friendly software the step from such basic tests to more complex tests has become smaller, and easier to take. It is our experience as master's and doctorate class teachers of the European College of Pharmaceutical Medicine (EC Socrates Project, Lyon, France) that students are eager to master adequate command of statistical software for that purpose. However, doing so, albeit easy, still takes 20-50 steps from logging in to the final result, and all of these steps have to be learned in order for the procedures to be successful.
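The basic tests the blurb mentions, t-tests and chi-square tests, are equally easy to run outside SPSS. A minimal Python sketch with invented data, using SciPy rather than the SPSS steps the book walks through:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented outcome measurements for two treatment groups.
group_a = rng.normal(5.2, 1.0, size=30)
group_b = rng.normal(4.6, 1.0, size=30)

# Independent-samples t-test for a difference in group means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Chi-square test of independence on a 2x2 table of counts
# (e.g. treatment vs. control against responder vs. non-responder).
table = np.array([[18, 12],
                  [9, 21]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```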
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data structures that are built into the Python language are explained, and the user is shown how to implement and evaluate others.
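In the spirit of the book's focus on classical algorithms expressed idiomatically in Python, here is a short, self-contained example of the kind of material such a text covers (this particular code is not taken from the book): Dijkstra's shortest-path algorithm using a binary heap.

```python
import heapq

def dijkstra(graph, source):
    """Shortest path lengths from source in a graph given as
    {node: [(neighbour, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 2), ("c", 5)],
         "b": [("c", 1), ("d", 4)],
         "c": [("d", 1)],
         "d": []}
print(dijkstra(graph, "a"))   # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```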
This is an introduction to time series that emphasizes methods and analysis of data sets. The logic and tools of model-building for stationary and non-stationary time series are developed and numerous exercises, many of which make use of the included computer package, provide the reader with ample opportunity to develop skills. Statisticians and students will learn the latest methods in time series and forecasting, along with modern computational models and algorithms.
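As a small taste of the model-building workflow such a text covers, the sketch below simulates an AR(1) series and computes its sample autocorrelations with plain NumPy; it is illustrative only and does not use the book's accompanying computer package.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) process x_t = phi * x_{t-1} + e_t.
phi, n = 0.7, 500
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

def sample_acf(x, max_lag):
    """Sample autocorrelation function up to max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k or None], x[k:]) / denom
                     for k in range(max_lag + 1)])

# For an AR(1) process the autocorrelations should decay roughly like phi**k.
print(np.round(sample_acf(x, 5), 2))
```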
This volume presents theoretical developments, applications and computational methods for analysis and modeling in the behavioral and social sciences, where data are usually complex to explore and investigate. The challenging proposals provide a connection between statistical methodology and the social domain, with particular attention to computational issues in order to effectively address complicated data analysis problems.
Nonlinear physics continues to be an area of dynamic modern research, with applications to physics, engineering, chemistry, mathematics, computer science, biology, medicine and economics. In this text extensive use is made of the Mathematica computer algebra system. No prior knowledge of Mathematica or programming is assumed. This book includes 33 experimental activities that are designed to deepen and broaden the reader's understanding of nonlinear physics. These activities are correlated with Part I, the theoretical framework of the text.
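Although the book works in Mathematica, the flavour of its experimental activities can be conveyed in any language. Below is a minimal sketch, not taken from the text, of numerically integrating a damped, driven pendulum (a standard nonlinear-physics example) with SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped, driven pendulum: theta'' + b*theta' + sin(theta) = F*cos(w*t).
b, F, w = 0.25, 1.2, 0.67

def pendulum(t, y):
    theta, omega = y
    return [omega, -b * omega - np.sin(theta) + F * np.cos(w * t)]

sol = solve_ivp(pendulum, t_span=(0, 100), y0=[0.1, 0.0],
                dense_output=True, rtol=1e-8)

# Sample the angle at a few times; for these parameters the motion is chaotic.
print(sol.sol(np.linspace(0, 100, 5))[0])
```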
Post-Optimal Analysis in Linear Semi-Infinite Optimization examines the following topics with regard to linear semi-infinite optimization: modeling uncertainty, qualitative stability analysis, quantitative stability analysis and sensitivity analysis. Linear semi-infinite optimization (LSIO) deals with linear optimization problems where the dimension of the decision space or the number of constraints is infinite. The authors compare the post-optimal analysis with alternative approaches to uncertain LSIO problems and provide readers with criteria to choose the best way to model a given uncertain LSIO problem depending on the nature and quality of the data along with the available software. This work also contains open problems which readers will find intriguing and challenging. Post-Optimal Analysis in Linear Semi-Infinite Optimization is aimed toward researchers, graduate and post-graduate students of mathematics interested in optimization, parametric optimization and related topics.
This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions are infeasible. Evolutionary algorithms represent a powerful and easily understood means of approximating the optimum value in a variety of settings. The proposed text seeks to guide readers through the crucial issues of optimization problems in statistical settings and the implementation of tailored methods (including both stand-alone evolutionary algorithms and hybrid crosses of these procedures with standard statistical algorithms like Metropolis-Hastings) in a variety of applications. This book would serve as an excellent reference work for statistical researchers at an advanced graduate level or beyond, particularly those with a strong background in computer science.
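To make the idea concrete, here is a bare-bones evolutionary algorithm (mutation plus truncation selection only, with made-up settings) minimizing a simple test function. It is a conceptual sketch, not one of the tailored or hybrid methods developed in the book.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # A bumpy function whose global minimum (at the origin) is awkward for
    # gradient-based methods: the Rastrigin function in two dimensions.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

pop = rng.uniform(-5, 5, size=(50, 2))          # initial population
for generation in range(200):
    fitness = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:25]]     # keep the best half
    children = parents + rng.normal(0, 0.3, size=parents.shape)  # mutate
    pop = np.vstack([parents, children])

best = pop[np.argmin([objective(ind) for ind in pop])]
print("best solution found:", np.round(best, 3))
```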
Modern algorithmic techniques for summation, most of which were introduced in the 1990s, are developed here and carefully implemented in the computer algebra system Maple (TM). The algorithms of Fasenmyer, Gosper, Zeilberger, Petkovsek and van Hoeij for hypergeometric summation and recurrence equations, efficient multivariate summation as well as q-analogues of the above algorithms are covered. Similar algorithms concerning differential equations are considered. An equivalent theory of hyperexponential integration due to Almkvist and Zeilberger completes the book. The combination of these results gives orthogonal polynomials and (hypergeometric and q-hypergeometric) special functions a solid algorithmic foundation. Hence, many examples from this very active field are given. The materials covered are suitable for an introductory course on algorithmic summation and will appeal to students and researchers alike.
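Readers without Maple can get a feel for what these algorithms deliver using SymPy's symbolic summation, which finds closed forms for many hypergeometric sums. The short example below is illustrative only and does not reproduce the book's Maple implementations.

```python
from sympy import symbols, binomial, factorial, summation, simplify

n, k = symbols("n k", integer=True, nonnegative=True)

# A classical hypergeometric identity: sum_{k=0}^{n} C(n, k) = 2**n.
print(simplify(summation(binomial(n, k), (k, 0, n))))

# A Gosper-summable example: sum_{k=0}^{n} k * k! = (n + 1)! - 1.
print(simplify(summation(k * factorial(k), (k, 0, n))))
```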
This book presents methods for computing correlation equations. All the topics treated here are elucidated in terms of concrete examples, which have been chosen, for the most part, from the field of analysis of the mechanical properties of steel, wood, and other materials. A necessary prerequisite for any study of correlation equations is some knowledge of the moments of random variables. In the Appendix, there is provided a brief treatment of moments, as well as a discussion of the simplest methods of computing them. We have paid particular attention in this book to the techniques of computing correlation equations, and to the use of tables for alleviating the computational load. The mathematical bases of the methods used in setting up correlation equations are expounded in the books cited at the end of this volume. A. M., December 1965. Please note that the abbreviation lg is used in this book to designate the logarithm to base ten. Note further that the comma has been retained as the decimal point in tabular material.
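The computations the book mechanizes by hand and by table are trivial on a modern machine. A minimal NumPy sketch of the sample moments and a fitted correlation (regression) line, using invented paired measurements:

```python
import numpy as np

# Invented paired measurements, e.g. hardness (x) vs. tensile strength (y).
x = np.array([42.0, 45.5, 48.0, 50.2, 53.1, 55.7, 58.3])
y = np.array([151.0, 160.5, 168.0, 175.4, 184.9, 193.2, 201.0])

# First and second sample moments.
mean_x, mean_y = x.mean(), y.mean()
var_x, var_y = x.var(ddof=1), y.var(ddof=1)
cov_xy = np.cov(x, y, ddof=1)[0, 1]

# Correlation coefficient and the least-squares correlation equation y = a + b*x.
r = cov_xy / np.sqrt(var_x * var_y)
b = cov_xy / var_x
a = mean_y - b * mean_x
print(f"r = {r:.3f}, y = {a:.2f} + {b:.3f} x")
```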
Sampling consists of selection, acquisition, and quantification of a part of the population. While selection and acquisition apply to physical sampling units of the population, quantification pertains only to the variable of interest, which is a particular characteristic of the sampling units. A sampling procedure is expected to provide a sample that is representative with respect to some specified criteria. Composite sampling, under idealized conditions, incurs no loss of information for estimating the population means. But an important limitation of the method has been the loss of information on individual sample values, such as extremely large values. In many of the situations where individual sample values are of interest or concern, composite sampling methods can be suitably modified to retrieve the information on individual sample values that may be lost due to compositing. This book presents statistical solutions to issues that arise in the context of applications of composite sampling.
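A tiny simulation makes the central point concrete: composites preserve the information needed to estimate the mean, but individual extreme values are hidden unless the composite is broken apart. The sketch below uses invented contamination measurements and is not drawn from the book.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented contaminant concentrations for 100 individual field samples.
individual = rng.lognormal(mean=1.0, sigma=0.8, size=100)

# Form composites of size 5 by pooling consecutive samples; in practice
# only the composite value (the mean of its constituents) is measured.
composites = individual.reshape(20, 5).mean(axis=1)

# The estimate of the population mean is unchanged by compositing ...
print(individual.mean(), composites.mean())

# ... but the largest individual value is no longer directly observed;
# only the composite containing it bounds it from below.
print(individual.max(), composites.max())
```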
Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science, along with practitioners using simulation or analytical models for performance analysis and capacity planning, will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model-based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model that describes a sequence of measurements from a real system, to model, for example, the inter-arrival times of packets in a computer network or the failure times of components in a manufacturing plant. Typical application areas are performance and dependability analysis of computer systems, communication networks, logistics or manufacturing systems, but also the analysis of biological or chemical reaction networks and similar problems. Often the measured values have a high variability and are correlated. It has been known for a long time that Markov-based models like phase-type distributions or Markovian arrival processes are very general and allow one to capture even complex behaviors. However, the parameterization of these models often results in a complex and non-linear optimization problem. Only recently have several new results about the modeling capabilities of Markov-based models, and algorithms to fit the parameters of those models, been published.
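To fix ideas, a phase-type (PH) distribution is simply the time to absorption of a finite continuous-time Markov chain. The sketch below simulates draws from a two-phase example with hand-picked parameters; it does not implement any of the fitting algorithms surveyed in the book.

```python
import numpy as np

rng = np.random.default_rng(3)

# A 2-phase PH distribution: initial probabilities alpha over the transient
# phases and sub-generator T; the exit rates are -T @ 1.
alpha = np.array([0.6, 0.4])
T = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
exit_rates = -T.sum(axis=1)

def draw_ph(alpha, T, exit_rates):
    """One sample: follow the chain through transient phases until absorption."""
    total_rates = -np.diag(T)
    phase = rng.choice(len(alpha), p=alpha)
    time = 0.0
    while True:
        time += rng.exponential(1.0 / total_rates[phase])
        # Probabilities of jumping to another transient phase or absorbing.
        probs = np.append(T[phase], exit_rates[phase])
        probs[phase] = 0.0
        probs /= probs.sum()
        nxt = rng.choice(len(probs), p=probs)
        if nxt == len(alpha):            # absorbed: the PH sample is complete
            return time
        phase = nxt

samples = np.array([draw_ph(alpha, T, exit_rates) for _ in range(5000)])
# The exact mean is alpha @ (-T)^{-1} @ 1; compare it with the simulation.
print(samples.mean(), alpha @ np.linalg.inv(-T) @ np.ones(2))
```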
Recommended by Bill Gates. A thought-provoking and wide-ranging exploration of machine learning and the race to build computer intelligences as flexible as our own. In the world's top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm, Pedro Domingos lifts the veil to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He assembles a blueprint for the future universal learner--the Master Algorithm--and discusses what it will mean for business, science, and society. If data-ism is today's philosophy, this book is its bible.
The first MATLAB-based numerical methods textbook for bioengineers that uniquely integrates modelling concepts with statistical analysis, while maintaining a focus on enabling the user to report the error or uncertainty in their result. Between traditional numerical method topics of linear modelling concepts, nonlinear root finding, and numerical integration, chapters on hypothesis testing, data regression and probability are interwoven. A unique feature of the book is the inclusion of examples from clinical trials and bioinformatics, which are not found in other numerical methods textbooks for engineers. With a wealth of biomedical engineering examples, case studies on topical biomedical research, and the inclusion of end of chapter problems, this is a perfect core text for a one-semester undergraduate course.
Separation of signal from noise is the most fundamental problem in data analysis, arising in fields such as signal processing, econometrics, actuarial science, and geostatistics. This book introduces the local regression method in univariate and multivariate settings, with extensions to local likelihood and density estimation. Practical information is also included on how to implement these methods in the programs S-PLUS and LOCFIT.
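For readers new to the idea, local regression fits a simple (here, linear) model in a moving, kernel-weighted window around each evaluation point. A minimal NumPy sketch with a tricube kernel and simulated data; this is an illustration of the concept, not the LOCFIT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a smooth signal.
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.3, size=x.size)

def local_linear(x0, x, y, bandwidth=1.0):
    """Kernel-weighted least-squares line around x0; returns the fit at x0."""
    u = np.abs(x - x0) / bandwidth
    w = np.where(u < 1, (1 - u**3) ** 3, 0.0)        # tricube weights
    X = np.column_stack([np.ones_like(x), x - x0])   # centred design
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta[0]                                   # intercept = fit at x0

grid = np.linspace(0.5, 9.5, 10)
fit = np.array([local_linear(g, x, y) for g in grid])
print(np.round(fit - np.sin(grid), 2))               # residual vs. the truth
```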
Looking back at the years that have passed since the realization of the very first electronic, multi-purpose computers, one observes a tremendous growth in hardware and software performance. Today, researchers and engineers have access to computing power and software that can solve numerical problems which are not fully understood in terms of existing mathematical theory. Thus, computational sciences must in many respects be viewed as experimental disciplines. As a consequence, there is a demand for high-quality, flexible software that allows, and even encourages, experimentation with alternative numerical strategies and mathematical models. Extensibility is then a key issue; the software must provide an efficient environment for incorporation of new methods and models that will be required in future problem scenarios. The development of such flexible software is a challenging and expensive task. One way to achieve these goals is to invest much work in the design and implementation of generic software tools which can be used in a wide range of application fields. In order to provide a forum where researchers could present and discuss their contributions to the described development, an International Workshop on Modern Software Tools for Scientific Computing was arranged in Oslo, Norway, September 16-18, 1996. This workshop, informally referred to as SciTools '96, was a collaboration between SINTEF Applied Mathematics and the Departments of Informatics and Mathematics at the University of Oslo.
The advent of fast and sophisticated computer graphics has brought dynamic and interactive images under the control of professional mathematicians and mathematics teachers. This volume in the NATO Special Programme on Advanced Educational Technology takes a comprehensive and critical look at how the computer can support the use of visual images in mathematical problem solving. The contributions are written by researchers and teachers from a variety of disciplines including computer science, mathematics, mathematics education, psychology, and design. Some focus on the use of external visual images and others on the development of individual mental imagery. The book is the first collected volume in a research area that is developing rapidly, and the authors pose some challenging new questions.
Developments in both computer hardware and software over the decades have fundamentally changed the way people solve problems. Technical professionals have greatly benefited from new tools and techniques that have allowed them to be more efficient, accurate, and creative in their work. Maple V and the new generation of mathematical computation systems have the potential of having the same kind of revolutionary impact as high-level general-purpose programming languages (e.g. FORTRAN, BASIC, C), application software (e.g. spreadsheets, Computer Aided Design - CAD), and even calculators have had. Maple V has amplified our mathematical abilities: we can solve more problems more accurately, and more often. In specific disciplines, this amplification has taken excitingly different forms. Perhaps the greatest impact has been felt by the education community. Today, it is nearly impossible to find a college or university that has not introduced mathematical computation, in some form, into the curriculum. Students now have regular access to the amount of computational power that was available to a very exclusive set of researchers five years ago. This has produced tremendous pedagogical challenges and opportunities. Comparisons to the calculator revolution of the 70's are inescapable. Calculators have extended the average person's ability to solve common problems more efficiently, and arguably, in better ways. Today, one needs at least a calculator to deal with standard problems in life - budgets, mortgages, gas mileage, etc. For business people or professionals, the ...
This book presents the statistical analysis of compositional data sets, i.e., data in percentages, proportions, concentrations, etc. The subject is covered from its grounding principles to the practical use in descriptive exploratory analysis, robust linear models and advanced multivariate statistical methods, including zeros and missing values, and paying special attention to data visualization and model display issues. Many illustrated examples and code chunks guide the reader into their modeling and interpretation. And, though the book primarily serves as a reference guide for the R package "compositions," it is also a general introductory text on Compositional Data Analysis. Awareness of their special characteristics spread in the Geosciences in the early sixties, but a strategy for properly dealing with them was not available until the works of Aitchison in the eighties. Since then, research has expanded our understanding of their theoretical principles and the potentials and limitations of their interpretation. This is the first comprehensive textbook addressing these issues, as well as their practical implications with regard to software. The book is intended for scientists interested in statistically analyzing their compositional data. The subject enjoys relatively broad awareness in the geosciences and environmental sciences, but the spectrum of recent applications also covers areas like medicine, official statistics, and economics. Readers should be familiar with basic univariate and multivariate statistics. Knowledge of R is recommended but not required, as the book is self-contained.
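The log-ratio approach at the heart of the subject is easy to state in code. Below is a minimal sketch of the centered log-ratio (clr) transform with made-up compositions, written in Python independently of the "compositions" package described above.

```python
import numpy as np

# Made-up compositions: rows sum to 1 (e.g. mineral proportions in a sample).
X = np.array([[0.70, 0.20, 0.10],
              [0.45, 0.35, 0.20],
              [0.10, 0.30, 0.60]])

def clr(X):
    """Centered log-ratio transform: log of each part over the geometric mean.
    The result lives in ordinary Euclidean space, where standard multivariate
    statistics can be applied, and each transformed row sums to zero."""
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

Z = clr(X)
print(np.round(Z, 3))
print(np.round(Z.sum(axis=1), 12))   # each row sums to zero by construction
```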
Intended for both researchers and practitioners, this book will be a valuable resource for studying and applying recent robust statistical methods. It contains up-to-date research results in the theory of robust statistics, treats computational aspects and algorithms, and shows interesting and new applications.
Many of the commonly used methods for modeling and fitting psychophysical data are special cases of statistical procedures of great power and generality, notably the Generalized Linear Model (GLM). This book illustrates how to fit data from a variety of psychophysical paradigms using modern statistical methods and the statistical language R. The paradigms include signal detection theory, psychometric function fitting, classification images and more. In two chapters, recently developed methods for scaling appearance, maximum likelihood difference scaling and maximum likelihood conjoint measurement, are examined. The authors also consider the application of mixed-effects models to psychophysical data. R is an open-source programming language that is widely used by statisticians and is seeing enormous growth in its application to data in all fields. It is interactive, containing many powerful facilities for optimization, model evaluation, model selection, and graphical display of data. The reader who fits data in R can readily make use of these methods. The researcher who uses R to fit and model his data has access to the most recently developed statistical methods. This book does not assume that the reader is familiar with R, and a little experience with any programming language is all that is needed to appreciate this book. There are a large number of examples of R in the text, and the source code for all examples is available in the R package MPDiR. Laurence T. Maloney is Professor of Psychology and Neural Science at New York University. His research focuses on applications of mathematical models to perception, motor control and decision making.
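As a small illustration of the GLM approach described above (though in Python with statsmodels rather than in R, and not using the MPDiR package), a psychometric function can be fitted as a binomial GLM with a probit link; the detection data below are invented.

```python
import numpy as np
import statsmodels.api as sm

# Invented detection data: stimulus intensity, trials, and "yes" responses.
intensity = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
n_trials = np.full(6, 40)
n_yes = np.array([3, 8, 18, 29, 36, 39])

# Binomial GLM with a probit link: the response is (successes, failures).
endog = np.column_stack([n_yes, n_trials - n_yes])
exog = sm.add_constant(intensity)
model = sm.GLM(endog, exog,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
result = model.fit()

# Intercept and slope define the psychometric function; the 50% detection
# threshold is the intensity where the linear predictor crosses zero.
intercept, slope = result.params
print("estimated threshold:", -intercept / slope)
```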
Every advance in computer architecture and software tempts statisticians to tackle numerically harder problems. To do so intelligently requires a good working knowledge of numerical analysis. This book equips students to craft their own software and to understand the advantages and disadvantages of different numerical methods. Issues of numerical stability, accurate approximation, computational complexity, and mathematical modeling share the limelight in a broad yet rigorous overview of those parts of numerical analysis most relevant to statisticians. In this second edition, the material on optimization has been completely rewritten. There is now an entire chapter on the MM algorithm in addition to more comprehensive treatments of constrained optimization, penalty and barrier methods, and model selection via the lasso. There is also new material on the Cholesky decomposition, Gram-Schmidt orthogonalization, the QR decomposition, the singular value decomposition, and reproducing kernel Hilbert spaces. The discussions of the bootstrap, permutation testing, independent Monte Carlo, and hidden Markov chains are updated, and a new chapter on advanced MCMC topics introduces students to Markov random fields, reversible jump MCMC, and convergence analysis in Gibbs sampling. Numerical Analysis for Statisticians can serve as a graduate text for a course surveying computational statistics. With a careful selection of topics and appropriate supplementation, it can be used at the undergraduate level. It contains enough material for a graduate course on optimization theory. Because many chapters are nearly self-contained, professional statisticians will also find the book useful as a reference.
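One of the themes listed above, using orthogonal decompositions for numerically stable computation, can be shown in a few lines. Below is a sketch, with simulated data, of solving a least-squares problem via the QR decomposition instead of the less stable normal equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear model y = X beta + noise.
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# QR approach: X = QR, then solve the triangular system R beta = Q'y.
Q, R = np.linalg.qr(X)
beta_qr = np.linalg.solve(R, Q.T @ y)

# Normal equations for comparison: algebraically equivalent, but forming X'X
# squares the condition number, which is why the QR route is preferred.
beta_ne = np.linalg.solve(X.T @ X, X.T @ y)

print(np.round(beta_qr, 3), np.round(beta_ne, 3))
```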