In the last decade, the boundary between physics and computer science has become a hotbed of interdisciplinary collaboration. Every passing year shows that physicists and computer scientists have a great deal to say to each other, sharing metaphors, intuitions, and mathematical techniques. In this book, two leading researchers in this area introduce the reader to the fundamental concepts of computational complexity. They go beyond the usual discussion of P, NP and NP-completeness to explain the deep meaning of the P vs. NP question, and explain many recent results which have not yet appeared in any textbook. They then give in-depth explorations of the major interfaces between computer science and physics: phase transitions in NP-complete problems, Monte Carlo algorithms, and quantum computing. The entire book is written in an informal style that gives depth with a minimum of mathematical formalism, exposing the heart of the matter without belabouring technical details. The only mathematical prerequisites are linear algebra, complex numbers, and Fourier analysis (and most chapters can be understood without even these). It can be used as a textbook for graduate students or advanced undergraduates, and will be enjoyed by anyone who is interested in understanding the rapidly changing field of theoretical computer science and its relationship with other sciences.
Reproducible Finance with R: Code Flows and Shiny Apps for Portfolio Analysis is a unique introduction to data science for investment management that explores the three major R/finance coding paradigms, emphasizes data visualization, and explains how to build a cohesive suite of functioning Shiny applications. The full source code, asset price data and live Shiny applications are available at reproduciblefinance.com. The ideal reader works in finance or wants to work in finance and has a desire to learn R code and Shiny through simple, yet practical real-world examples. The book begins with the first step in data science: importing and wrangling data, which in the investment context means importing asset prices, converting to returns, and constructing a portfolio. The next section covers risk and tackles descriptive statistics such as standard deviation, skewness, kurtosis, and their rolling histories. The third section focuses on portfolio theory, analyzing the Sharpe Ratio, CAPM, and Fama French models. The book concludes with applications for finding individual asset contribution to risk and for running Monte Carlo simulations. For each of these tasks, the three major coding paradigms are explored and the work is wrapped into interactive Shiny dashboards.
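The import-to-returns-to-risk workflow described above can be sketched in a few lines of R. The block below is only an illustrative sketch, not the book's own code: it assumes the quantmod and PerformanceAnalytics packages, downloads prices from the internet, and uses arbitrary placeholder tickers, dates, and portfolio weights.

```r
# Minimal sketch of the import -> returns -> portfolio -> risk-metric flow.
# Assumes quantmod and PerformanceAnalytics; tickers, dates, and weights are
# placeholders, not taken from the book. Requires an internet connection.
library(quantmod)
library(PerformanceAnalytics)

symbols <- c("SPY", "EFA", "AGG")            # hypothetical asset universe
prices  <- do.call(merge, lapply(symbols, function(s)
  Ad(getSymbols(s, auto.assign = FALSE, from = "2015-01-01"))))

returns   <- na.omit(Return.calculate(prices, method = "log"))   # convert to returns
weights   <- c(0.5, 0.3, 0.2)                                    # example weights
portfolio <- Return.portfolio(returns, weights = weights, rebalance_on = "months")

# Descriptive risk statistics and an annualized Sharpe ratio
sd(portfolio); skewness(portfolio); kurtosis(portfolio)
SharpeRatio.annualized(portfolio, Rf = 0)
```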
This book presents the basic procedures for utilizing SAS Enterprise Guide to analyze statistical data. SAS Enterprise Guide is a graphical user interface (point and click) to the main SAS application. Each chapter contains a brief conceptual overview and then guides the reader through concrete step-by-step examples to complete the analyses. The eleven sections of the book cover a wide range of statistical procedures including descriptive statistics, correlation and simple regression, t tests, one-way chi square, data transformations, multiple regression, analysis of variance, analysis of covariance, multivariate analysis of variance, factor analysis, and canonical correlation analysis. Designed to be used either as a stand-alone resource or as an accompaniment to a statistics course, the book offers a smooth path to statistical analysis with SAS Enterprise Guide for advanced undergraduate and beginning graduate students, as well as professionals in psychology, education, business, health, social work, sociology, and many other fields.
R is rapidly becoming the standard software for statistical analyses, graphical presentation of data, and programming in the natural, physical, social, and engineering sciences. Getting Started with R is now the go-to introductory guide for biologists wanting to learn how to use R in their research. It teaches readers how to import, explore, graph, and analyse data, while keeping them focused on their ultimate goals: clearly communicating their data in oral presentations, posters, papers, and reports. It provides a consistent workflow for using R that is simple, efficient, reliable, and reproducible. This second edition has been updated and expanded while retaining the concise and engaging nature of its predecessor, offering an accessible and fun introduction to the packages dplyr and ggplot2 for data manipulation and graphing. It expands the set of basic statistics considered in the first edition to include new examples of a simple regression, a one-way and a two-way ANOVA. Finally, it introduces a new chapter on the generalised linear model. Getting Started with R is suitable for undergraduates, graduate students, professional researchers, and practitioners in the biological sciences.
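As a taste of the dplyr/ggplot2 workflow the book teaches, the following sketch (not taken from the book) summarises and plots one of R's built-in data sets, then fits a simple regression and a one-way ANOVA.

```r
# Minimal sketch of a dplyr/ggplot2 workflow with built-in data (not from the book)
library(dplyr)
library(ggplot2)

iris %>%
  group_by(Species) %>%
  summarise(mean_petal = mean(Petal.Length), sd_petal = sd(Petal.Length))

ggplot(iris, aes(x = Sepal.Length, y = Petal.Length, colour = Species)) +
  geom_point() +
  geom_smooth(method = "lm")

fit_lm  <- lm(Petal.Length ~ Sepal.Length, data = iris)    # simple regression
fit_aov <- aov(Petal.Length ~ Species, data = iris)        # one-way ANOVA
summary(fit_lm); summary(fit_aov)
```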
Studies of evolution at the molecular level have experienced phenomenal growth in the last few decades, due to rapid accumulation of genetic sequence data, improved computer hardware and software, and the development of sophisticated analytical methods. The flood of genomic data has generated an acute need for powerful statistical methods and efficient computational algorithms to enable their effective analysis and interpretation. Molecular Evolution: A Statistical Approach presents and explains modern statistical methods and computational algorithms for the comparative analysis of genetic sequence data in the fields of molecular evolution, molecular phylogenetics, statistical phylogeography, and comparative genomics. Written by an expert in the field, the book emphasizes conceptual understanding rather than mathematical proofs. The text is enlivened with numerous examples of real data analysis and numerical calculations to illustrate the theory, in addition to the working problems at the end of each chapter. The coverage of maximum likelihood and Bayesian methods is, in particular, up to date, comprehensive, and authoritative. This advanced textbook is aimed at graduate-level students and professional researchers (both empiricists and theoreticians) in the fields of bioinformatics and computational biology, statistical genomics, evolutionary biology, molecular systematics, and population genetics. It will also be of relevance and use to a wider audience of applied statisticians, mathematicians, and computer scientists working in computational biology.
This book introduces the main theoretical findings related to copulas and shows how statistical modeling of multivariate continuous distributions using copulas can be carried out in the R statistical environment with the package copula (among others). Copulas are multivariate distribution functions with standard uniform univariate margins. They are increasingly applied to modeling dependence among random variables in fields such as risk management, actuarial science, insurance, finance, engineering, hydrology, climatology, and meteorology, to name a few. In the spirit of the Use R! series, each chapter combines key theoretical definitions or results with illustrations in R. Aimed at statisticians, actuaries, risk managers, engineers and environmental scientists wanting to learn about the theory and practice of copula modeling using R without an overwhelming amount of mathematics, the book can also be used for teaching a course on copula modeling.
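A minimal sketch of the kind of workflow the copula package supports is shown below; the copula family, parameter value, and simulated data are illustrative assumptions rather than examples taken from the book.

```r
# Minimal sketch of copula fitting with the copula package (illustrative data only)
library(copula)

set.seed(1)
cop_true <- normalCopula(param = 0.6, dim = 2)   # a Gaussian copula
u_sim    <- rCopula(500, cop_true)               # simulated pseudo-data on the unit square

# In practice one starts from raw data and converts it to pseudo-observations:
u_hat <- pobs(u_sim)                             # ranks rescaled to (0, 1)
fit   <- fitCopula(normalCopula(dim = 2), u_hat, method = "mpl")
fit
coef(fit)                                        # estimated copula parameter
```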
Carry out a variety of advanced statistical analyses, including generalized additive models, mixed effects models, multiple imputation, machine learning, and missing data techniques, using R. Each chapter starts with conceptual background on the techniques, includes multiple examples using R to achieve results, and concludes with a case study. Written by Matt and Joshua F. Wiley, Advanced R Statistical Programming and Data Models shows you how to conduct data analysis using the popular R language. You'll delve into the preconditions and hypotheses for various statistical tests and techniques and work through concrete examples using R for a variety of these next-level analytics. This is a must-have guide and reference on using and programming with the R language. What you'll learn: conduct advanced analyses in R, including generalized linear models, generalized additive models, mixed effects models, machine learning, and parallel processing; carry out regression modeling using R data visualization, linear and advanced regression, additive models, and survival/time-to-event analysis; handle machine learning in R, including parallel processing, dimension reduction, and feature selection and classification; address missing data using multiple imputation in R; and work on factor analysis, generalized linear mixed models, and modeling intraindividual variability. Who this book is for: working professionals, researchers, or students who are familiar with R and basic statistical techniques such as linear regression and who want to learn how to use R to perform more advanced analytics. In particular, researchers and data analysts in the social sciences may benefit from these techniques, and analysts who need parallel processing to speed up analytics are given proven code to reduce time to result(s).
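As a hedged illustration of three of the model families listed above, the sketch below fits a GLM, a GAM, and a mixed effects model to data sets that ship with R and its packages; it is not code from the book, and it assumes the mgcv and lme4 packages.

```r
# Sketch of three model families on built-in data (illustrative, not from the book)
library(mgcv)    # generalized additive models
library(lme4)    # mixed effects models

# Generalized linear model: logistic regression on mtcars
glm_fit <- glm(am ~ wt + hp, data = mtcars, family = binomial())

# Generalized additive model: smooth term for horsepower
gam_fit <- gam(mpg ~ s(hp) + wt, data = mtcars)

# Linear mixed effects model: random intercept per subject (sleepstudy ships with lme4)
lmm_fit <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)

summary(glm_fit); summary(gam_fit); summary(lmm_fit)
```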
This book constitutes the refereed proceedings of the 9th International Conference on Optimization and Applications, OPTIMA 2018, held in Petrovac, Montenegro, in October 2018. The 35 revised full papers and one short paper presented were carefully reviewed and selected from 103 submissions. The papers are organized in topical sections on mathematical programming; combinatorial and discrete optimization; optimal control; optimization in economy, finance and social sciences; and applications.
This easy-to-follow textbook/reference presents a concise introduction to mathematical analysis from an algorithmic point of view, with a particular focus on applications of analysis and aspects of mathematical modelling. The text describes the mathematical theory alongside the basic concepts and methods of numerical analysis, enriched by computer experiments using MATLAB, Python, Maple, and Java applets. This fully updated and expanded new edition also features an even greater number of programming exercises. Topics and features: describes the fundamental concepts in analysis, covering real and complex numbers, trigonometry, sequences and series, functions, derivatives, integrals, and curves; discusses important applications and advanced topics, such as fractals and L-systems, numerical integration, linear regression, and differential equations; presents tools from vector and matrix algebra in the appendices, together with further information on continuity; includes added material on hyperbolic functions, curves and surfaces in space, second-order differential equations, and the pendulum equation (NEW); contains experiments, exercises, definitions, and propositions throughout the text; supplies programming examples in Python, in addition to MATLAB (NEW); provides supplementary resources at an associated website, including Java applets, code source files, and links to interactive online learning material. Addressing the core needs of computer science students and researchers, this clearly written textbook is an essential resource for undergraduate-level courses on numerical analysis, and an ideal self-study tool for professionals seeking to enhance their analysis skills.
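The book's computer experiments are in MATLAB and Python; purely as a flavour of the "analysis from an algorithmic point of view" theme, here is a small R sketch (not material from the book) of a composite trapezoidal rule compared against R's adaptive integrate().

```r
# Composite trapezoidal rule for numerical integration (illustrative R sketch;
# the book's own experiments use MATLAB and Python).
trapezoid <- function(f, a, b, n = 1000) {
  x <- seq(a, b, length.out = n + 1)
  h <- (b - a) / n
  h * (sum(f(x)) - 0.5 * (f(a) + f(b)))    # endpoints weighted by 1/2
}

trapezoid(sin, 0, pi)            # approximately 2
integrate(sin, 0, pi)$value      # R's adaptive quadrature for comparison
```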
R is now the most widely used statistical package/language in university statistics departments and many research organisations. Its great advantages are that for many years it has been the leading-edge statistical package/language and that it can be freely downloaded from the R website. Its cooperative development and open code also attract many contributors, meaning that the modelling and data analysis possibilities in R are much richer than in GLIM4, and so the R edition can be substantially more comprehensive than the GLIM4 edition of Statistical Modelling. This text provides a comprehensive treatment of the theory of statistical modelling in R, with an emphasis on applications to practical problems and an expanded discussion of statistical theory. A wide range of case studies is provided, using the normal, binomial, Poisson, multinomial, gamma, exponential and Weibull distributions, making this book ideal for graduates and research students in applied statistics and a wide range of quantitative disciplines.
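As a small illustration of the kind of modelling covered, the sketch below (not from the book) fits a Poisson GLM to a classic count-data example and a gamma GLM to simulated positive responses.

```r
# Illustrative GLM fits with two of the distributions discussed (not the book's examples)

# Poisson regression on a classic count-data layout
counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome   <- gl(3, 1, 9)
treatment <- gl(3, 3)
pois_fit  <- glm(counts ~ outcome + treatment, family = poisson())

# Gamma GLM with a log link on simulated positive data
set.seed(1)
x <- runif(100, 1, 5)
y <- rgamma(100, shape = 2, rate = 2 / exp(0.5 * x))   # mean = exp(0.5 * x)
gamma_fit <- glm(y ~ x, family = Gamma(link = "log"))

summary(pois_fit); summary(gamma_fit)
```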
The YUIMA package is the first comprehensive R framework, based on S4 classes and methods, that allows for the simulation of stochastic differential equations driven by a Wiener process, Levy processes or fractional Brownian motion, as well as CARMA, COGARCH, and point processes. The package performs various central statistical analyses such as quasi maximum likelihood estimation, adaptive Bayes estimation, structural change point analysis, hypothesis testing, asynchronous covariance estimation, lead-lag estimation, LASSO model selection, and so on. YUIMA also supports stochastic numerical analysis by fast computation of the expected value of functionals of stochastic processes through automatic asymptotic expansion by means of the Malliavin calculus. All models can be multidimensional, multiparametric or nonparametric. The book briefly explains the underlying theory for simulation and inference for several classes of stochastic processes and then presents both simulation experiments and applications to real data. Although these processes were originally proposed in physics and more recently in finance, they are also becoming popular in biology now that time-course experimental data are available. The YUIMA package, available on CRAN, can be freely downloaded, and this companion book enables the user to start his or her analysis from the first page.
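A heavily hedged sketch of the yuima workflow is given below: it defines an Ornstein-Uhlenbeck model, simulates a path, and estimates the parameters by quasi maximum likelihood. The model, parameter values, and sampling scheme are illustrative choices, not examples from the book, and argument details may differ across package versions.

```r
# Sketch of define -> simulate -> estimate with yuima (illustrative values only)
library(yuima)

mod  <- setModel(drift = "-theta * x", diffusion = "sigma",
                 solve.variable = "x")               # Ornstein-Uhlenbeck model
samp <- setSampling(Terminal = 10, n = 1000)
yui  <- setYuima(model = mod, sampling = samp)

set.seed(123)
sim  <- simulate(yui, true.parameter = list(theta = 0.8, sigma = 0.5), xinit = 1)

fit  <- qmle(sim, start = list(theta = 0.5, sigma = 0.3),
             method = "L-BFGS-B",
             lower = list(theta = 0.01, sigma = 0.01),
             upper = list(theta = 5, sigma = 5))
summary(fit)                                         # quasi-MLE of theta and sigma
```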
Berthold Heinrich presents the mathematical and drafting foundations for representing three-dimensional objects on squared (grid) paper, and also demonstrates the use of software for this purpose. In schools, squared paper is often used as a grid for drawing surfaces and solids. However, even in some printed works, the resulting ellipses and circular arcs are drawn inaccurately, or the outline of a sphere is incorrectly rendered as a circle. This Essential presents the correct procedures both theoretically and through concrete examples, most of which can be applied directly. The author illustrates some of the more elaborate constructions vividly with worked examples.
Proof and Disproof in Formal Logic is a lively and entertaining introduction to formal logic, providing an excellent insight into how a simple logic works. Formal logic allows you to check a logical claim without considering what the claim means. This highly abstracted idea is an essential and practical part of computer science. The idea of a formal system, a collection of rules and axioms which define a universe of logical proofs, is what gives us programming languages and modern-day programming. This book concentrates on using logic as a tool: making and using formal proofs and disproofs of particular logical claims. The logic it uses, natural deduction, is very small and very simple; working with it helps you see how large mathematical universes can be built on small foundations. The book is divided into four parts:
This is the first book to present time series analysis using the SAS Enterprise Guide software. It includes some background and theory for various time series analysis techniques, and demonstrates the data analysis process and final results via step-by-step, extensive illustrations of the SAS Enterprise Guide software. The book is a practical guide to time series analysis in SAS Enterprise Guide and is a valuable resource for a wide variety of sectors.
This textbook introduces the vast array of features and powerful mathematical functions of Mathematica using a multitude of clearly presented examples and worked-out problems. Each section starts with a description of a new topic and some basic examples. The author then demonstrates the use of new commands through three categories of problems: the first highlights those essential parts of the text that demonstrate the use of new commands in Mathematica whilst solving each problem presented; the second comprises problems that further demonstrate the use of commands previously introduced to tackle different situations; and the third presents more challenging problems for further study. The intention is to enable the reader to learn from the codes, thus avoiding long and exhausting explanations. While based on a computer algebra course taught to undergraduate students of mathematics, science, engineering and finance, the book also includes chapters on calculus and solving equations, and graphics, thus covering all the basic topics in Mathematica. With its strong focus upon programming and problem solving, and an emphasis on using numerical problems that do not need any particular background in mathematics, this book is also ideal for self-study and as an introduction for researchers who wish to use Mathematica as a computational tool. This new edition has been extensively revised and updated, and includes new chapters with problems and worked examples.
Now in its third edition, this outstanding textbook explains everything you need to get started using MATLAB (R). It contains concise explanations of essential MATLAB commands, as well as easily understood instructions for using MATLAB's programming features, graphical capabilities, simulation models, and rich desktop interface. MATLAB 8 and its new user interface is treated extensively in the book. New features in this edition include: a complete treatment of MATLAB's publish feature; new material on MATLAB graphics, enabling the user to master quickly the various symbolic and numerical plotting routines; and a robust presentation of MuPAD (R) and how to use it as a stand-alone platform. The authors have also updated the text throughout, reworking examples and exploring new applications. The book is essential reading for beginners, occasional users and experienced users wishing to brush up their skills. Further resources are available from the authors' website at www-math.umd.edu/schol/a-guide-to-matlab.html.
This revised and updated edition focuses on constrained ordination (RDA, CCA), variation partitioning, and the use of permutation tests of statistical hypotheses about multivariate data. Both classification and modern regression methods (GLM, GAM, loess) are reviewed, and species functional traits and spatial structures are analysed. Nine case studies of varying difficulty help to illustrate the suggested analytical methods, using the latest version of Canoco 5. All studies utilise descriptive and manipulative approaches, and are supported by data sets and project files available from the book website: http://regent.prf.jcu.cz/maed2/. Written primarily for community ecologists needing to analyse data resulting from field observations and experiments, this book is a valuable resource for students and researchers dealing with both simple and complex ecological problems, such as the variation of biotic communities with environmental conditions or their response to experimental manipulation.
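Canoco 5 is a standalone program, so the following is only an analogous sketch in R: the vegan package's rda(), cca(), and permutation-based anova() applied to vegan's built-in dune data. It is not the book's workflow, just a rough R counterpart to the constrained ordination and permutation testing it describes.

```r
# Constrained ordination and a permutation test in R's vegan package
# (an analogue of the Canoco workflow, not the book's own material)
library(vegan)
data(dune); data(dune.env)        # built-in vegetation and environment data

rda_fit <- rda(dune ~ Management + A1, data = dune.env)   # redundancy analysis
cca_fit <- cca(dune ~ Management + A1, data = dune.env)   # canonical correspondence analysis

anova(rda_fit, permutations = 999)   # permutation test of the constraints
plot(rda_fit)
```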
Some probability problems are so difficult that they stump the smartest mathematicians. But even the hardest of these problems can often be solved with a computer and a Monte Carlo simulation, in which a random-number generator simulates a physical process, such as a million rolls of a pair of dice. This is what "Digital Dice" is all about: how to get numerical answers to difficult probability problems without having to solve complicated mathematical equations. Popular-math writer Paul Nahin challenges readers to solve twenty-one difficult but fun problems, from determining the odds of coin-flipping games to figuring out the behavior of elevators. Problems build from relatively easy (deciding whether a dishwasher who breaks most of the dishes at a restaurant during a given week is clumsy or just the victim of randomness) to the very difficult (tackling branching processes of the kind that had to be solved by Manhattan Project mathematician Stanislaw Ulam). In his characteristic style, Nahin brings the problems to life with interesting and odd historical anecdotes. Readers learn, for example, not just how to determine the optimal stopping point in any selection process but that astronomer Johannes Kepler selected his second wife by interviewing eleven women. The book shows readers how to write elementary computer codes using any common programming language, and provides solutions and line-by-line walk-throughs of a MATLAB code for each problem. "Digital Dice" will appeal to anyone who enjoys popular math or computer science. In a new preface, Nahin wittily addresses some of the responses he received to the first edition.
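The book's solutions are written in MATLAB; purely to illustrate the Monte Carlo idea in this listing's dominant language, here is a minimal R sketch that estimates the probability that a pair of fair dice sums to 7 from a million simulated rolls.

```r
# Minimal Monte Carlo sketch (in R rather than the book's MATLAB): estimate the
# probability that a pair of fair dice sums to 7 from a million simulated rolls.
set.seed(42)
n_rolls <- 1e6
die1 <- sample(1:6, n_rolls, replace = TRUE)
die2 <- sample(1:6, n_rolls, replace = TRUE)
mean(die1 + die2 == 7)      # should be close to 1/6, about 0.1667
```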
This workbook provides a curated collection of problems and solutions, rounded off by a formula compendium containing the most important formulas used in the book. In addition, an extensive set of R programs written for the problems and their solutions is provided, and the appendix therefore also includes a brief introduction to the statistical software R. The content and organization, including the division into chapters, follow the Springer volume "Statistik fur Bachelor- und Masterstudenten: Eine Einfuhrung fur Wirtschafts- und Sozialwissenschaftler".
This book accompanies the reader from the examination and presentation of empirically observed data through to the planning and evaluation of their own statistical experimental designs. It deliberately draws on practically relevant, well-established methods and dispenses with more advanced scientific treatments. Practically relevant methods are presented in context.
The book presents a comprehensive vision of the impact of ICT on the contemporary city, heritage, public spaces and meta-cities at both the urban and metropolitan scales, not only producing innovative perspectives but also relating them to newly discovered scientific methods that can be used to stimulate the emerging reciprocal relations between cities and information technologies. Using the principles established by multi-disciplinary interventions as examples and then expanding on them, the book demonstrates how, by using ICT and new devices, metropolises can be organized for a future that preserves the historic nucleus of the city and the environment while preparing the necessary expansion of transportation, housing and industrial facilities.
On the occasion of the 25th anniversary of the German Cancer Research Center (DKFZ) in Heidelberg, the authors give an overview of the institutions and organizational forms of cancer research in Germany, focusing in particular on the prehistory and history of the DKFZ since the beginning of the 20th century.
This is an introductory work on sampling from finite populations. A book on this subject is well suited to undergraduate degree programmes, but it also contains more advanced material for use at the master's level. The book is rich in examples and is accessible to anyone who has taken an elementary course in statistics and probability, of the kind taught in undergraduate economics programmes. The volume is suitable not only for students on statistics degree courses but also for students in other faculties who want to use sampling methods in an elementary, applied way without giving up a modicum of theory.
Computer algebra systems will play an important role in upper secondary (Sekundarstufe II) mathematics teaching. This book is geared to the upper secondary school curriculum and is aimed at student teachers and interested teachers who want to familiarize themselves with the program DERIVE in order to use it in the classroom, particularly in advanced mathematics courses.
The ability to summarise data, compare models and apply computer-based analysis tools are vital skills necessary for studying and working in the physical sciences. This textbook supports undergraduate students as they develop and enhance these skills. Introducing data analysis techniques, this textbook pays particular attention to the internationally recognised guidelines for calculating and expressing measurement uncertainty. This new edition has been revised to incorporate Excel (R) 2010. It also provides a practical approach to fitting models to data using non-linear least squares, a powerful technique which can be applied to many types of model. Worked examples using actual experimental data help students understand how the calculations apply to real situations. Over 200 in-text exercises and end-of-chapter problems give students the opportunity to use the techniques themselves and gain confidence in applying them. Answers to the exercises and problems are given at the end of the book.
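The book's calculations are carried out in Excel, but the non-linear least-squares idea carries over directly to other environments; the sketch below (not from the book) fits an exponential decay model to simulated data with R's nls() and reports the parameter estimates with their standard uncertainties.

```r
# Non-linear least squares on simulated exponential-decay data
# (the book works in Excel; this only illustrates the same fitting idea in R)
set.seed(7)
t <- seq(0, 10, by = 0.5)
y <- 5 * exp(-0.6 * t) + rnorm(length(t), sd = 0.1)    # noisy measurements

fit <- nls(y ~ A * exp(-k * t), start = list(A = 4, k = 0.5))
summary(fit)     # parameter estimates with standard errors
coef(fit)
```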