An authoritative, up-to-date graduate textbook on machine learning that highlights its historical context and societal impacts. Patterns, Predictions, and Actions introduces graduate students to the essentials of machine learning while offering invaluable perspective on its history and social implications. Beginning with the foundations of decision making, Moritz Hardt and Benjamin Recht explain how representation, optimization, and generalization are the constituents of supervised learning. They go on to provide self-contained discussions of causality, the practice of causal inference, sequential decision making, and reinforcement learning, equipping readers with the concepts and tools they need to assess the consequences that may arise from acting on statistical decisions.
- Provides a modern introduction to machine learning, showing how data patterns support predictions and consequential actions
- Pays special attention to societal impacts and fairness in decision making
- Traces the development of machine learning from its origins to today
- Features a novel chapter on machine learning benchmarks and datasets
- Invites readers from all backgrounds, requiring only some experience with probability, calculus, and linear algebra
- An essential textbook for students and a guide for researchers
This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions are infeasible. Evolutionary algorithms represent a powerful and easily understood means of approximating the optimum value in a variety of settings. The proposed text seeks to guide readers through the crucial issues of optimization problems in statistical settings and the implementation of tailored methods (including both stand-alone evolutionary algorithms and hybrid crosses of these procedures with standard statistical algorithms like Metropolis-Hastings) in a variety of applications. This book would serve as an excellent reference work for statistical researchers at an advanced graduate level or beyond, particularly those with a strong background in computer science.
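As a minimal sketch of the kind of stand-alone evolutionary algorithm the text covers — the objective, tuning constants, and function names below are illustrative assumptions, not the authors' code — an evolution strategy can approximate the optimum of a function with no convenient analytic solution:

```python
import random

def evolve(f, dim=2, pop_size=20, generations=200, sigma=0.3, seed=1):
    """Minimize f with a simple (mu + lambda) evolution strategy:
    keep the best half of the population and refill it with Gaussian
    mutations of the survivors, shrinking the mutation scale over time."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        survivors = pop[:pop_size // 2]
        pop = survivors + [
            [xi + rng.gauss(0, sigma) for xi in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
        sigma *= 0.98          # anneal the mutation scale
    return min(pop, key=f)

# A bumpy illustrative objective; every term vanishes at (1, 2),
# which is therefore the global minimum.
def objective(v):
    x, y = v
    return (x - 1) ** 2 + (y - 2) ** 2 + 0.3 * abs(x * y - 2)

best = evolve(objective)
```

Because the survivors are retained unchanged, the best value found never worsens; the annealed mutation scale trades early exploration for late refinement.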
Linear mixed-effects models (LMMs) are an important class of statistical models that can be used to analyze correlated data. Such data are encountered in a variety of fields, including biostatistics, public health, psychometrics, educational measurement, and sociology. This book aims to support a wide range of uses for the models by applied researchers in those and other fields by providing state-of-the-art descriptions of the implementation of LMMs in R. To help readers become familiar with the features of the models and the details of carrying them out in R, the book includes a review of the most important theoretical concepts of the models. The presentation connects theory, software, and applications. It is built up incrementally, starting with a summary of the concepts underlying simpler classes of linear models like the classical regression model, and carrying them forward to LMMs. A similar step-by-step approach is used to describe the R tools for LMMs. All the classes of linear models presented in the book are illustrated using real-life data. The book also introduces several novel R tools for LMMs, including a new class of variance-covariance structures for random effects and methods for influence diagnostics and power calculations. These tools are collected in an R package that should assist readers in applying these and other methods presented in the text.
Sampling consists of the selection, acquisition, and quantification of a part of the population. While selection and acquisition apply to physical sampling units of the population, quantification pertains only to the variable of interest, which is a particular characteristic of the sampling units. A sampling procedure is expected to provide a sample that is representative with respect to some specified criteria. Composite sampling, under idealized conditions, incurs no loss of information for estimating the population mean. An important limitation of the method, however, has been the loss of information on individual sample values, such as an extremely large value. In many situations where individual sample values are of interest or concern, composite sampling methods can be suitably modified to retrieve the information on individual sample values that would otherwise be lost to compositing. This book presents statistical solutions to issues that arise in the context of applications of composite sampling.
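The mean-preservation property described above is easy to see in a small simulation (the lognormal "measurement" population, sample sizes, and composite size here are illustrative assumptions, not from the book):

```python
import random
import statistics

rng = random.Random(42)

# A hypothetical skewed population of measurement values.
population = [rng.lognormvariate(0, 1) for _ in range(10000)]
sample = rng.sample(population, 400)

# Composite sampling: pool k physical samples and measure each pool once;
# a composite measurement equals the mean of its constituents, so 100
# lab analyses stand in for 400.
k = 4
composites = [statistics.mean(sample[i:i + k]) for i in range(0, len(sample), k)]

individual_mean = statistics.mean(sample)
composite_mean = statistics.mean(composites)
# With equal-sized composites, the two estimates of the population mean
# coincide: compositing loses nothing for the mean, though the identity
# of extreme individual values is hidden inside the pools.
```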
This book was written to provide resource materials for teachers to use in their introductory or intermediate statistics classes. The chapter content is ordered along the lines of many popular statistics books, so it should be easy to supplement the content and exercises with class lecture materials. The book contains R script programs to demonstrate important topics and concepts covered in a statistics course, including probability, random sampling, population distribution types, the role of the Central Limit Theorem, the creation of sampling distributions for statistics, and more. The chapters contain T/F quizzes to test basic knowledge of the topics covered. In addition, the chapters contain numerous exercises, with answers or solutions provided; these exercises reinforce an understanding of the statistical concepts presented. An instructor can select any of the supplemental materials to enhance lectures and/or provide additional coverage of concepts and topics in their statistics book. This book uses the R statistical package, which contains an extensive library of functions; the R software is free and easily downloaded and installed. The R programs are run in RStudio, a graphical user interface for R that makes accessing R programs, viewing output from the exercises, and managing graphical displays easier. The first chapter of the book covers the fundamentals of the R statistical package, including installation of R and RStudio and accessing R packages and libraries of functions. The chapter also covers how to access manuals and technical documentation, as well as the basic R commands used in the R script programs in the later chapters. This chapter is important for the instructor to master so that the software can be installed and the R script programs run. Because the R software is free, students can also install it and run the R script programs in the chapters.
Teachers and students can run R on university computers, at home, or on laptop computers, making it more available than many commercial software packages.
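A Python analogue of the kind of demonstration script the book describes in R — random sampling from a skewed population and the Central Limit Theorem — might look like this (the exponential population and sample sizes are my own choices):

```python
import random
import statistics

rng = random.Random(0)

def sample_means(n, reps=1000):
    """Means of `reps` random samples of size n drawn from a skewed
    exponential population whose true mean is 1."""
    return [statistics.mean(rng.expovariate(1.0) for _ in range(n))
            for _ in range(reps)]

# Build the sampling distribution of the mean for increasing n: the
# Central Limit Theorem says its spread shrinks like 1/sqrt(n) and its
# shape becomes approximately normal despite the skewed population.
results = {n: statistics.stdev(sample_means(n)) for n in (5, 50, 500)}
```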
With the increasing advances in hardware technology for data collection, and advances in software technology (databases) for data organization, computer scientists have increasingly participated in the latest advancements of the outlier analysis field. Computer scientists, specifically, approach this field based on their practical experiences in managing large amounts of data, and with far fewer assumptions: the data can be of any type, structured or unstructured, and may be extremely large. Outlier Analysis is a comprehensive exposition, as understood by data mining experts, statisticians, and computer scientists. The book has been organized carefully, and emphasis was placed on simplifying the content, so that students and practitioners can also benefit. Chapters typically cover one of three areas: methods and techniques commonly used in outlier analysis, such as linear methods, proximity-based methods, subspace methods, and supervised methods; data domains, such as text, categorical, mixed-attribute, time-series, streaming, discrete sequence, spatial, and network data; and key applications of these methods in diverse domains such as credit card fraud detection, intrusion detection, medical diagnosis, earth science, web log analytics, and social network analysis.
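As one hedged sketch of the proximity-based methods mentioned above (the function and data are illustrative, not from the book), an outlier score can be defined as the distance from each point to its k-th nearest neighbour:

```python
import math
import random

def knn_outlier_scores(points, k=3):
    """Proximity-based outlier score: the distance from each point to
    its k-th nearest neighbour. Isolated points get large scores."""
    scores = []
    for p in points:
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        scores.append(dists[k - 1])
    return scores

rng = random.Random(7)
# Fifty clustered points plus one planted far-away outlier.
points = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(50)] + [(10.0, 10.0)]
scores = knn_outlier_scores(points)
outlier_index = max(range(len(points)), key=scores.__getitem__)
```

The brute-force pairwise scan is quadratic; the index structures and subspace methods surveyed in the book exist precisely to make this idea scale.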
This book evolved from lectures, courses and workshops on missing data and small-area estimation that I presented during my tenure as the first Campion Fellow (2000-2002). For the Fellowship I proposed these two topics as areas in which academic statistics could contribute to the development of government statistics, in exchange for access to the operational details and background that would inform the direction and sharpen the focus of academic research. After a few years of involvement, I have come to realise that the separation of 'academic' and 'industrial' statistics is not well suited to either party, and their integration is the key to progress in both branches. Most of the work on this monograph was done while I was a visiting lecturer at Massey University, Palmerston North, New Zealand. The hospitality and stimulating academic environment of their Institute of Information Sciences and Technology is gratefully acknowledged. I could not name all those who commented on my lecture notes and on the presentations themselves; apart from them, I want to thank the organisers and silent attendees of all the events, and, with a modicum of reluctance, the 'grey figures' who kept inquiring whether I was any nearer the completion of whatever stage I had been foolish enough to attach a date to.
This first book in the series describes the Net Generation as visual learners who thrive when surrounded by new technologies and whose needs can be met with technological innovations. These new learners seek novel ways of studying, such as collaborating with peers and multitasking, as well as using multimedia, the Internet, and other information and communication technologies. Here we present mathematics as a contemporary subject that is engaging, exciting, and enlightening in new ways. For example, in the distributed environment of cyberspace, mathematics learners play games, watch presentations on YouTube, create Java applets of mathematics simulations, and exchange thoughts over instant messaging tools. How should mathematics education resonate with these learners and the technological novelties that excite them?
Essentials of Monte Carlo Simulation focuses on the fundamentals of Monte Carlo methods using basic computer simulation techniques. The theories presented in this text deal with systems that are too complex to solve analytically. Starting from a system of interest, readers construct computer code and algorithmic models that emulate how the system works internally. After the models are run several times, in a random-sampling fashion, the data for each output variable of interest are analyzed by ordinary statistical methods. This book features 11 comprehensive chapters and discusses such key topics as random number generators, multivariate random variates, and continuous random variates. Over 100 numerical examples are presented in the appendix to illustrate useful real-world applications. The text offers an accessible presentation with minimal use of difficult mathematical concepts. Little has been published in the area of computer Monte Carlo simulation methods, and this book will appeal to students and researchers in the fields of mathematics and statistics.
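The workflow the blurb describes — run a stochastic model several times, then analyze the replicate outputs with ordinary statistics — can be sketched as follows (the toy model and constants are my own, not from the book):

```python
import random
import statistics

def one_run(n, rng):
    """One Monte Carlo replication: estimate E[max(U1, U2)] for two
    independent Uniform(0, 1) draws (the true value is 2/3)."""
    return sum(max(rng.random(), rng.random()) for _ in range(n)) / n

rng = random.Random(123)
# Run the model several times, then treat the replicate outputs as an
# ordinary random sample: a point estimate plus a rough 95% interval.
reps = [one_run(10_000, rng) for _ in range(30)]
estimate = statistics.mean(reps)
half_width = 1.96 * statistics.stdev(reps) / len(reps) ** 0.5
```

Analyzing independent replications, rather than the raw draws of a single run, is what lets standard confidence-interval formulas apply directly.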
This volume provides essential guidance for transforming mathematics learning in schools through the use of innovative technology, pedagogy, and curriculum. It presents clear, rigorous evidence of the impact technology can have in improving students' learning of important yet complex mathematical concepts, and goes beyond a focus on technology alone to explain clearly how teacher professional development, pedagogy, curriculum, and student participation and identity each play an essential role in transforming mathematics classrooms with technology. Further, evidence of effectiveness is complemented by insightful case studies of how key factors lead to enhanced learning, including the contributions of design research, classroom discourse, and meaningful assessment.
* Engaging students in deeply learning the important concepts in mathematics
This edited survey book consists of 20 chapters showing applications of Clifford algebra in quantum mechanics, field theory, spinor calculations, projective geometry, hypercomplex algebra, function theory, and crystallography. Many examples of computations performed with a variety of readily available software programs are presented in detail.
"Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis", Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new sections, in addition to fully updated examples, tables, figures, and a revised appendix. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or a deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for readers to check their level of understanding. The techniques and methods presented for knowledge elicitation, model construction and verification, modeling techniques and tricks, learning models from data, and analyses of models have all been developed and refined on the basis of numerous courses that the authors have held for practitioners worldwide.
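For readers new to probabilistic networks, here is a minimal sketch of what such a model computes — the classic rain/sprinkler/wet-grass network with illustrative probabilities, queried by brute-force enumeration rather than the efficient inference schemes a real system would use:

```python
# A toy Bayesian network: Rain -> Sprinkler, and (Sprinkler, Rain) -> WetGrass,
# with illustrative conditional probability tables.
P_RAIN = 0.2
P_SPRINKLER = {True: 0.01, False: 0.4}              # P(sprinkler | rain)
P_WET = {(True, True): 0.99, (True, False): 0.9,    # P(wet | sprinkler, rain)
         (False, True): 0.8, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """The chain-rule factorization the network encodes."""
    p = P_RAIN if rain else 1 - P_RAIN
    p *= P_SPRINKLER[rain] if sprinkler else 1 - P_SPRINKLER[rain]
    pw = P_WET[(sprinkler, rain)]
    return p * (pw if wet else 1 - pw)

# Query P(Rain | WetGrass) by enumeration, summing out Sprinkler.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
posterior = num / den      # probability it rained, given wet grass
```

Enumeration is exponential in the number of variables; the point of the network structure is that smarter algorithms can exploit the factorization instead.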
This is the first book to show the capabilities of Microsoft Excel for teaching biological and life sciences statistics effectively. It is a step-by-step, exercise-driven guide for students and practitioners who need to master Excel to solve practical science problems. If understanding statistics isn't your strongest suit, if you are not especially mathematically inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program, is also an effective teaching and learning tool for quantitative analyses in science courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. Excel 2007 for Biological and Life Sciences Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and practitioners how to apply Excel to the statistical techniques necessary in their courses and work. Each chapter explains statistical formulas and directs the reader to use Excel commands to solve specific, easy-to-understand science problems. Practice problems are provided at the end of each chapter, with their solutions in an appendix. Separately, there is a full practice test (with answers in an appendix) that allows readers to test what they have learned.
Intended for both researchers and practitioners, this book will be a valuable resource for studying and applying recent robust statistical methods. It contains up-to-date research results in the theory of robust statistics, treats computational aspects and algorithms, and shows interesting new applications.
Automatic graph drawing is concerned with the layout of relational structures as they occur in computer science (database design, data mining, web mining), bioinformatics (metabolic networks), business informatics (organization diagrams, event-driven process chains), and the social sciences (social networks). In mathematical terms, such relational structures are modeled as graphs or more general objects such as hypergraphs, clustered graphs, or compound graphs. A variety of layout algorithms based on graph-theoretical foundations have been developed in the last two decades and implemented in software systems. After an introduction to the subject area and a concise treatment of the technical foundations for the subsequent chapters, this book features 14 chapters on state-of-the-art graph drawing software systems, ranging from general "toolboxes" to customized software for various applications. Written by leading experts, these chapters follow a uniform scheme and can be read independently of each other.
Looking back at the years that have passed since the realization of the very first electronic, multi-purpose computers, one observes a tremendous growth in hardware and software performance. Today, researchers and engineers have access to computing power and software that can solve numerical problems which are not fully understood in terms of existing mathematical theory. Thus, computational sciences must in many respects be viewed as experimental disciplines. As a consequence, there is a demand for high-quality, flexible software that allows, and even encourages, experimentation with alternative numerical strategies and mathematical models. Extensibility is then a key issue; the software must provide an efficient environment for incorporation of new methods and models that will be required in future problem scenarios. The development of such flexible software is a challenging and expensive task. One way to achieve these goals is to invest much work in the design and implementation of generic software tools which can be used in a wide range of application fields. In order to provide a forum where researchers could present and discuss their contributions to the described development, an International Workshop on Modern Software Tools for Scientific Computing was arranged in Oslo, Norway, September 16-18, 1996. This workshop, informally referred to as SciTools '96, was a collaboration between SINTEF Applied Mathematics and the Departments of Informatics and Mathematics at the University of Oslo.
This Handbook gives a comprehensive snapshot of a field at the intersection of mathematics and computer science, with applications in physics, engineering, and education. It reviews 67 software systems and offers 100 pages on applications in physics, mathematics, computer science, engineering, chemistry, and education.
The advent of fast and sophisticated computer graphics has brought dynamic and interactive images under the control of professional mathematicians and mathematics teachers. This volume in the NATO Special Programme on Advanced Educational Technology takes a comprehensive and critical look at how the computer can support the use of visual images in mathematical problem solving. The contributions are written by researchers and teachers from a variety of disciplines including computer science, mathematics, mathematics education, psychology, and design. Some focus on the use of external visual images and others on the development of individual mental imagery. The book is the first collected volume in a research area that is developing rapidly, and the authors pose some challenging new questions.
This book deals with the performance analysis of closed queueing networks with general processing times and finite buffer spaces. It offers a detailed introduction to the problem and a comprehensive literature review. Two approaches to the performance of closed queueing networks are presented. One is an approximate decomposition approach; the second is the first exact approach for finite-capacity networks with general processing times. In this Markov chain approach, queueing networks are analyzed by modeling the entire system as one Markov chain. Because this approach is exact, it is well suited both as a reference quantity for approximate procedures and as a basis for extensions to other queueing networks. Moreover, for the first time, the exact distribution of the time between processing starts is provided.
This comprehensive text covers the use of SAS for epidemiology and public health research. Developed with students in mind and shaped by their feedback, the text addresses this material in a straightforward manner with a multitude of examples. It is directly applicable to students and researchers in the fields of public health, biostatistics, and epidemiology. Through a hands-on approach to the use of SAS for a broad range of epidemiologic analyses, readers learn techniques for data entry and cleaning, categorical analysis, ANOVA, linear regression, and much more. Exercises utilizing real-world data sets are featured throughout the book, and SAS screenshots demonstrate the steps for successful programming. SAS (Statistical Analysis System) is an integrated system of software products provided by the SAS Institute, which is headquartered in Cary, North Carolina. It provides programmers and statisticians the ability to engage in many sophisticated statistical analyses and data retrieval and mining exercises. SAS is widely used in the fields of epidemiology and public health research, predominantly due to its ability to reliably analyze very large administrative data sets, as well as more commonly encountered clinical trial and observational research data.
Developments in both computer hardware and software over the decades have fundamentally changed the way people solve problems. Technical professionals have greatly benefited from new tools and techniques that have allowed them to be more efficient, accurate, and creative in their work. Maple V and the new generation of mathematical computation systems have the potential of having the same kind of revolutionary impact as high-level general-purpose programming languages (e.g. FORTRAN, BASIC, C), application software (e.g. spreadsheets, Computer Aided Design - CAD), and even calculators have had. Maple V has amplified our mathematical abilities: we can solve more problems more accurately, and more often. In specific disciplines, this amplification has taken excitingly different forms. Perhaps the greatest impact has been felt by the education community. Today, it is nearly impossible to find a college or university that has not introduced mathematical computation, in some form, into the curriculum. Students now have regular access to an amount of computational power that was available to a very exclusive set of researchers five years ago. This has produced tremendous pedagogical challenges and opportunities. Comparisons to the calculator revolution of the 70's are inescapable. Calculators have extended the average person's ability to solve common problems more efficiently and, arguably, in better ways. Today, one needs at least a calculator to deal with standard problems in life - budgets, mortgages, gas mileage, etc. For business people or professionals, the ...
"Fast Compact Algorithms and Software for Spline Smoothing" investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.
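The book's algorithms are spline-specific, but the idea of choosing the amount of smoothing automatically by minimizing the generalized cross-validation score can be sketched with a simpler kernel smoother (an analogue under my own assumptions, not the authors' Cholesky/QR/FFT implementations):

```python
import math
import random

def gcv_bandwidth(x, y, bandwidths):
    """Choose the kernel-smoother bandwidth h minimizing the generalized
    cross-validation score GCV(h) = n * RSS / (n - tr(S))**2, where S is
    the smoother's hat matrix. For a Nadaraya-Watson smoother each row of
    S is just the normalized kernel weights, so tr(S) accumulates from
    the self-weights."""
    n = len(x)
    best_h, best_gcv = None, float("inf")
    for h in bandwidths:
        rss = trace = 0.0
        for i in range(n):
            w = [math.exp(-0.5 * ((x[i] - xj) / h) ** 2) for xj in x]
            total = sum(w)
            fit = sum(wj * yj for wj, yj in zip(w, y)) / total
            rss += (y[i] - fit) ** 2
            trace += w[i] / total          # diagonal entry S[i][i]
        gcv = n * rss / (n - trace) ** 2
        if gcv < best_gcv:
            best_h, best_gcv = h, gcv
    return best_h

rng = random.Random(3)
x = [i / 100 for i in range(100)]
y = [math.sin(2 * math.pi * xi) + rng.gauss(0, 0.3) for xi in x]
# GCV rejects both near-interpolation (h = 0.005) and oversmoothing (h = 0.3).
h_star = gcv_bandwidth(x, y, [0.005, 0.02, 0.08, 0.3])
```

Near-interpolation makes tr(S) approach n, blowing up the denominator, while oversmoothing inflates the residual sum of squares; GCV balances the two without a held-out set.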
This book covers accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Independent of the particular software system used, the book describes and gives examples of the use of modern computer software for numerical linear algebra. It begins with a discussion of the basics of numerical computations, and then describes the relevant properties of matrix inverses, factorisations, matrix and vector norms, and other topics in linear algebra. The book is essentially self-contained, with the topics addressed constituting the essential material for an introductory course in statistical computing. Numerous exercises allow the text to be used for a first course in statistical computing or as a supplementary text for various courses that emphasise computations.
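As a minimal illustration of the kind of routine such a course builds on (a textbook sketch, not the book's code), here is Gaussian elimination with partial pivoting for solving a linear system:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    the workhorse behind LU-style matrix factorization routines."""
    n = len(A)
    # Work on an augmented copy so the inputs are left untouched.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot,
        # which keeps the elimination numerically stable.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y + z = 5, 4x - 6y = -2, -2x + 7y + 2z = 9 has solution (1, 1, 2).
x = solve([[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]], [5.0, -2.0, 9.0])
```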
Molchanov, S.: Lectures on random media. - Zeitouni, O.: Random walks in random environment. - den Hollander, F.: Random polymers.
Dealing with methods for sampling from posterior distributions and how to compute posterior quantities of interest using Markov chain Monte Carlo (MCMC) samples, this book addresses such topics as improving simulation accuracy, marginal posterior density estimation, estimation of normalizing constants, constrained parameter problems, highest posterior density (HPD) interval calculations, computation of posterior modes, and posterior computations for proportional hazards models and Dirichlet process models. The authors also discuss model comparisons, including both nested and non-nested models, marginal likelihood methods, ratios of normalizing constants, Bayes factors, the Savage-Dickey density ratio, stochastic search variable selection, Bayesian model averaging, the reversible jump algorithm, and model adequacy using predictive and latent residual approaches. The book presents an equal mixture of theory and applications involving real data, and is intended as a graduate textbook or reference book for a one-semester course at the advanced master's or Ph.D. level. It will also serve as a useful reference for applied and theoretical researchers as well as practitioners.
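A minimal sketch of the MCMC machinery the book builds on — a random-walk Metropolis sampler for a normal-mean posterior, with an equal-tail interval rather than the HPD intervals the book treats (the model, data, and tuning constants are illustrative assumptions):

```python
import math
import random
import statistics

def metropolis(log_post, init, steps=20000, scale=1.0, seed=5):
    """Random-walk Metropolis: propose x' = x + N(0, scale) and accept
    with probability min(1, post(x') / post(x)), computed on the log scale."""
    rng = random.Random(seed)
    x, lp = init, log_post(init)
    draws = []
    for _ in range(steps):
        xp = x + rng.gauss(0, scale)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        draws.append(x)
    return draws[steps // 2:]                  # discard burn-in

# Posterior for a normal mean (known sd = 1, flat prior) given toy data,
# so the posterior mean should sit near the sample mean of the data.
data = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3]
log_post = lambda mu: -0.5 * sum((d - mu) ** 2 for d in data)
draws = metropolis(log_post, init=0.0)
post_mean = statistics.mean(draws)
s = sorted(draws)
interval = (s[int(0.025 * len(s))], s[int(0.975 * len(s))])  # equal-tail 95%
```

Working on the log scale avoids underflow, and only the *ratio* of posterior densities is needed, which is why unknown normalizing constants drop out.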