This volume features original contributions and invited review articles on mathematical statistics, statistical simulation and experimental design. The selected peer-reviewed contributions originate from the 8th International Workshop on Simulation, held in Vienna in 2015. The book is intended for mathematical statisticians, Ph.D. students and statisticians working in medicine, engineering, pharmacy, psychology, agriculture and other related fields. The International Workshops on Simulation are devoted to statistical techniques in stochastic simulation, data collection, design of scientific experiments and studies representing broad areas of interest. The first six workshops took place in St. Petersburg, Russia, between 1994 and 2009, and the 7th workshop was held in Rimini, Italy, in 2013.
Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression, L1 and q-quantile regression, regression in a spatial domain, ridge regression, semiparametric regression, nonlinear least squares, and time-series regression issues. For most of the regression methods, the author includes SAS procedure code, enabling readers to promptly perform their own regression runs. A comprehensive, accessible source on regression methodology and modeling, the book requires only a basic knowledge of statistics and calculus, and it discusses how to use regression analysis for decision making and problem solving. It shows readers the power and diversity of regression techniques without overwhelming them with calculations.
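As a quick orientation to the ordinary least squares approach that the text presents first, here is a minimal OLS fit in R using built-in data (an illustration of the method itself; the book's own examples are SAS procedure code):

    # OLS in miniature: regress fuel economy on weight and horsepower (mtcars data)
    fit <- lm(mpg ~ wt + hp, data = mtcars)
    summary(fit)  # coefficients, standard errors, R-squared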
Thoroughly revised and updated, The Art of Modeling in Science and Engineering with Mathematica (R), Second Edition explores the mathematical tools and procedures used in modeling based on the laws of conservation of mass, energy, momentum, and electrical charge. The authors have culled and consolidated the best from the first edition and expanded the range of applied examples to reach a wider audience. The text proceeds, in measured steps, from simple models of real-world problems at the algebraic and ordinary differential equations (ODE) levels to more sophisticated models requiring partial differential equations. The traditional solution methods are supplemented with Mathematica , which is used throughout the text to arrive at solutions for many of the problems presented. The text is enlivened with a host of illustrations and practice problems drawn from classical and contemporary sources. They range from Thomson's famous experiment to determine e/m and Euler's model for the buckling of a strut to an analysis of the propagation of emissions and the performance of wind turbines. The mathematical tools required are first explained in separate chapters and then carried along throughout the text to solve and analyze the models. Commentaries at the end of each illustration draw attention to the pitfalls to be avoided and, perhaps most important, alert the reader to unexpected results that defy conventional wisdom. These features and more make the book the perfect tool for resolving three common difficulties: the proper choice of model, the absence of precise solutions, and the need to make suitable simplifying assumptions and approximations. The book covers a wide range of physical processes and phenomena drawn from various disciplines and clearly illuminates the link between the physical system being modeled and the mathematical expression that results.
The richly illustrated Interactive Web-Based Data Visualization with R, plotly, and shiny focuses on the process of programming interactive web graphics for multidimensional data analysis. It is written for the data analyst who wants to leverage the capabilities of interactive web graphics without having to learn web programming. Through many R code examples, you will learn how to tap the extensive functionality of these tools to enhance the presentation and exploration of data. By mastering these concepts and tools, you will impress your colleagues with your ability to quickly generate more informative, engaging, and reproducible interactive graphics using free and open source software that you can share over email, export to PDF, and more. Key features:
- Convert static ggplot2 graphics to an interactive web-based form
- Link, animate, and arrange multiple plots in standalone HTML from R
- Embed, modify, and respond to plotly graphics in a shiny app
- Learn best practices for visualizing continuous, discrete, and multivariate data
- Learn numerous ways to visualize geo-spatial data
This book makes heavy use of plotly for graphical rendering, but you will also learn about other R packages that support different phases of a data science workflow, such as tidyr, dplyr, and tidyverse. Along the way, you will gain insight into best practices for visualization of high-dimensional data, statistical graphics, and graphical perception. The printed book is complemented by an interactive website where readers can view movies demonstrating the examples and interact with graphics.
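A minimal sketch of the first of these features, converting a static ggplot2 graphic to an interactive one, assuming the ggplot2 and plotly packages are installed:

    library(ggplot2)
    library(plotly)

    # An ordinary static scatterplot...
    p <- ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(cyl))) +
      geom_point()

    # ...becomes an interactive web graphic with one call
    ggplotly(p)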
Financial, Macro and Micro Econometrics Using R, Volume 42, provides state-of-the-art information on important topics in econometrics, including multivariate GARCH, stochastic frontiers, fractional responses, specification testing and model selection, exogeneity testing, causal analysis and forecasting, GMM models, asset bubbles and crises, corporate investments, classification, forecasting, nonstandard problems, cointegration, financial market jumps and co-jumps, among other topics.
Zoom into the new world of remote collaboration. While a worldwide pandemic may have started the Zoom revolution, the convenience of remote meetings is here to stay. Zoom For Dummies takes you from creating meetings on the platform to running global webinars. Along the way you'll learn how to expand your remote collaboration options, record meetings for future review, and even make scheduling a meeting through your other apps a one-click process. Take in all the advice or zoom to the info you need - it's all there!
- Discover how to set up meetings
- Share screens and files
- Keep your meetings secure
- Add Zoom hardware to your office
- Get tips for using Zoom as a social tool
Award-winning author Phil Simon takes you beyond setting up and sharing links for meetings to show how Zoom can transform your organization and the way you work.
This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the links and interplay between ostensibly diverse techniques.
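One classic general-purpose approach of the kind described here is rejection sampling, which draws from an arbitrary target density using only the ability to evaluate it. A minimal R sketch with an assumed Beta(2, 5) target and a uniform proposal:

    f <- function(x) dbeta(x, 2, 5)   # target density (assumed for illustration)
    M <- 2.5                          # bound satisfying f(x) <= M * g(x) on [0, 1]
    samples <- numeric(0)
    while (length(samples) < 1000) {
      x <- runif(1)                   # propose from g = Uniform(0, 1)
      if (runif(1) <= f(x) / M)       # accept with probability f(x) / (M * g(x))
        samples <- c(samples, x)
    }
    hist(samples, freq = FALSE)       # histogram should track the target density
    curve(f, add = TRUE)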
This comprehensive text covers the use of SAS for epidemiology and public health research. Developed with students in mind and from their feedback, the text addresses this material in a straightforward manner with a multitude of examples. It is directly applicable to students and researchers in the fields of public health, biostatistics and epidemiology. Through a hands-on approach to the use of SAS for a broad number of epidemiologic analyses, readers learn techniques for data entry and cleaning, categorical analysis, ANOVA, linear regression and much more. Exercises utilizing real-world data sets are featured throughout the book. SAS screen shots demonstrate the steps for successful programming. SAS (Statistical Analysis System) is an integrated system of software products provided by the SAS Institute, headquartered in Cary, North Carolina. It provides programmers and statisticians the ability to engage in many sophisticated statistical analyses and data retrieval and mining exercises. SAS is widely used in the fields of epidemiology and public health research, predominantly due to its ability to reliably analyze very large administrative data sets, as well as more commonly encountered clinical trial and observational research data.
"If mathematical modeling is the process of turning real phenomena into mathematical abstractions, then numerical computation is largely about the transformation from abstract mathematics to concrete reality. Many science and engineering disciplines have long benefited from the tremendous value of the correspondence between quantitative information and mathematical manipulation." -from the Preface Fundamentals of Numerical Computation is an advanced undergraduate-level introduction to the mathematics and use of algorithms for the fundamental problems of numerical computation: linear algebra, finding roots, approximating data and functions, and solving differential equations. The book is organized with simpler methods in the first half and more advanced methods in the second half, allowing use for either a single course or a sequence of two courses. The authors take readers from basic to advanced methods, illustrating them with over 200 self-contained MATLAB functions and examples designed for those with no prior MATLAB experience. Although the text provides many examples, exercises, and illustrations, the aim of the authors is not to provide a cookbook per se, but rather an exploration of the principles of cooking. Professors Driscoll and Braun have developed an online resource that includes well-tested materials related to every chapter. Among these materials are lecture-related slides and videos, ideas for student projects, laboratory exercises, computational examples and scripts, and all the functions presented in the book.
This book explores inductive inference using the minimum message length (MML) principle, a Bayesian method which is a realisation of Ockham's Razor based on information theory. Accompanied by a library of software, the book can assist an applications programmer, student or researcher in the fields of data analysis and machine learning to write computer programs based upon this principle. MML inference has been around for 50 years and yet only one highly technical book has been written about the subject. The majority of research in the field has been backed by specialised one-off programs but this book includes a library of general MML-based software, in Java. The Java source code is available under the GNU GPL open-source license. The software library is documented using Javadoc which produces extensive cross referenced HTML manual pages. Every probability distribution and statistical model that is described in the book is implemented and documented in the software library. The library may contain a component that directly solves a reader's inference problem, or contain components that can be put together to solve the problem, or provide a standard interface under which a new component can be written to solve the problem. This book will be of interest to application developers in the fields of machine learning and statistics as well as academics, postdocs, programmers and data scientists. It could also be used by third year or fourth year undergraduate or postgraduate students.
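For orientation, the MML principle can be stated in one line: the preferred hypothesis H for data D is the one minimising the length of a two-part message, first stating H and then encoding D assuming H (this is the standard two-part formulation, given here for context rather than quoted from the book):

    \mathrm{MsgLen}(H, D) = -\log_2 P(H) - \log_2 P(D \mid H)

More complex hypotheses cost more bits to state but may encode the data more cheaply, which is Ockham's Razor expressed in coding terms.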
This Festschrift in honour of Paul Deheuvels' 65th birthday compiles recent research results in the area between mathematical statistics and probability theory with a special emphasis on limit theorems. The book brings together contributions from invited international experts to provide an up-to-date survey of the field. Written in textbook style, this collection of original material addresses researchers, PhD and advanced Master students with a solid grasp of mathematical statistics and probability theory.
This book brings together two major trends: data science and blockchains. It is one of the first books to systematically cover the analytics aspects of blockchains, with the goal of linking traditional data mining research communities with novel data sources. Data science and big data technologies can be considered cornerstones of the data-driven digital transformation of organizations and society. The concept of blockchain is predicted to enable and spark transformation on par with that associated with the invention of the Internet. Cryptocurrencies are the first successful use case of highly distributed blockchains, much as the World Wide Web was for the Internet. The book takes the reader through basic data exploration topics, proceeding systematically, method by method, through supervised and unsupervised learning approaches and information visualization techniques, all the way to understanding blockchain data from the network science perspective. Chapters introduce the cryptocurrency blockchain data model and methods to explore it using structured query language, association rules, clustering, classification, visualization, and network science. Each chapter introduces basic concepts, presents examples with real cryptocurrency blockchain data and offers exercises and questions for further discussion. This approach is intended to serve as a good starting point for undergraduate and graduate students to learn data science topics using cryptocurrency blockchain examples. It is also aimed at researchers and analysts who already possess good analytical and data skills, but who do not yet have the specific knowledge to tackle analytic questions about blockchain transactions. Readers will improve their knowledge of essential data science techniques for turning mere transactional information into social, economic, and business insights.
This book covers the MATLAB syntax and the environment, suitable for someone with no programming background. The first four chapters present information on basic MATLAB programming, including computing terminology, MATLAB-specific syntax and control structures, operators, arrays and matrices. The next cluster covers grouping data, working with files, making images, creating graphical user interfaces, experimenting with sound, and the debugging environment. The final three chapters contain case studies on using MATLAB and other tools and devices (e.g., Arduino, Linux, Git, Mex, etc.) important for basic programming knowledge. Companion files with code and 4-color figures are on the disc or available from the publisher. Features:
- Covers the MATLAB syntax and the environment, suitable for someone with no programming background
- Numerous examples, projects, and practical applications enhance understanding of the subjects under discussion, with over 100 MATLAB scripts and functions
- Includes companion files with code and 4-color figures from the text (on the disc or available from the publisher)
This text is intended for a broad audience as both an introduction to predictive models and a guide to applying them. Non-mathematical readers will appreciate the intuitive explanations of the techniques, while an emphasis on problem-solving with real data across a wide variety of applications will aid practitioners who wish to extend their expertise. Readers should have knowledge of basic statistical ideas, such as correlation and linear regression analysis. While the text avoids complex equations where possible, a mathematical background is needed for advanced topics. Dr. Kuhn is a Director of Non-Clinical Statistics at Pfizer Global R&D in Groton, Connecticut. He has been applying predictive models in the pharmaceutical and diagnostic industries for over 15 years and is the author of a number of R packages. Dr. Johnson has more than a decade of statistical consulting and predictive modeling experience in pharmaceutical research and development. He is a co-founder of Arbor Analytics, a firm specializing in predictive modeling, and is a former Director of Statistics at Pfizer Global R&D. His scholarly work centers on the application and development of statistical methodology and learning algorithms. Applied Predictive Modeling covers the overall predictive modeling process, beginning with the crucial steps of data preprocessing, data splitting and the foundations of model tuning. The text then provides intuitive explanations of numerous common and modern regression and classification techniques, always with an emphasis on illustrating and solving real data problems. Addressing practical concerns extends beyond model fitting to topics such as handling class imbalance, selecting predictors, and pinpointing causes of poor model performance, all of which are problems that occur frequently in practice. The text illustrates all parts of the modeling process through many hands-on, real-life examples, and every chapter contains extensive R code.
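To make the process concrete, here is a minimal sketch of the data splitting, resampling and tuning workflow described above, using the caret package (one of Dr. Kuhn's R packages; the book's own code may differ):

    library(caret)
    set.seed(123)

    # Hold out 20% of the rows for testing
    idx      <- createDataPartition(mtcars$mpg, p = 0.8, list = FALSE)
    train_df <- mtcars[idx, ]
    test_df  <- mtcars[-idx, ]

    # Fit a linear model, tuned and assessed with 5-fold cross-validation
    fit <- train(mpg ~ ., data = train_df, method = "lm",
                 trControl = trainControl(method = "cv", number = 5))
    predict(fit, newdata = test_df)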
The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to the 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University) were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session with 30 contributions. Selected contributions have been drawn from the conference for this book. All contributions in this volume are peer-reviewed and share original research in Bayesian computation, application, and theory.
With the advancement of statistical methodology inextricably linked to the use of computers, new methodological ideas must be translated into usable code and then numerically evaluated relative to competing procedures. In response to this, Statistical Computing in C++ and R concentrates on the writing of code rather than the development and study of numerical algorithms per se. The book discusses code development in C++ and R and the use of these symbiotic languages in unison. It emphasizes that each offers distinct features that, when used in tandem, can take code writing beyond what can be obtained from either language alone. The text begins with some basics of object-oriented languages, followed by a "boot-camp" on the use of C++ and R. The authors then discuss code development for the solution of specific computational problems that are relevant to statistics, including optimization, numerical linear algebra, and random number generation. Later chapters introduce abstract data structures (ADTs) and parallel computing concepts. The appendices cover R and UNIX Shell programming. Features:
- Includes numerous student exercises ranging from elementary to challenging
- Integrates both C++ and R for the solution of statistical computing problems
- Uses C++ code in R and R functions in C++ programs
- Provides downloadable programs, available from the authors' website
The translation of a mathematical problem into its computational analog (or analogs) is a skill that must be learned, like any other, by actively solving relevant problems. The text reveals the basic principles of algorithmic thinking essential to the modern statistician as well as the fundamental skill of communicating with a computer through the use of the computer languages C++ and R. The book lays the foundation for original code development in a research environment.
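As a small illustration of using the two languages in unison, here is a sketch that compiles a C++ function and calls it from R via the Rcpp package (an assumption made for illustration; the book's own interfacing approach may differ):

    library(Rcpp)

    # Compile a C++ function and expose it to the R session
    cppFunction('
    double sumSquares(NumericVector x) {
      double total = 0.0;
      for (int i = 0; i < x.size(); ++i) total += x[i] * x[i];
      return total;
    }')

    sumSquares(c(1, 2, 3))  # returns 14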
This book is a timely and critical introduction for those interested in what data science is (and isn't), and how it should be applied. The language is conversational and the content is accessible for readers without a quantitative or computational background; but, at the same time, it is also a practical overview of the field for the more technical readers. The overarching goal is to demystify the field and teach the reader how to develop an analytical mindset instead of following recipes. The book takes the scientist's approach of focusing on asking the right question at every step as this is the single most important factor contributing to the success of a data science project. Upon finishing this book, the reader should be asking more questions than I have answered. This book is, therefore, a practising scientist's approach to explaining data science through questions and examples.
There is no shortage of incentives to study and reduce poverty in our societies. Poverty is studied in economics and political science, and population surveys are an important source of information about it. The design and analysis of such surveys is principally a statistical subject matter, and the computer is essential for their data compilation and processing. Focusing on the European Union Statistics on Income and Living Conditions (EU-SILC), a program of annual national surveys which collect data related to poverty and social exclusion, Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R presents a set of statistical analyses pertinent to the general goals of EU-SILC. The contents of the volume are biased toward computing and statistics, with reduced attention to economics, political and other social sciences. The emphasis is on methods and procedures as opposed to results, because data from the annual surveys released since publication, and those to come, will erode the novelty of the data used and the results derived in this volume. The aim of this volume is not to propose specific methods of analysis, but to open up the analytical agenda and address the aspects of the key definitions in the subject of poverty assessment that entail nontrivial elements of arbitrariness. The presented methods do not exhaust the range of analyses suitable for EU-SILC, but will stimulate the search for new methods and the adaptation of established methods that cater to the identified purposes.
This research monograph utilizes exact and Monte Carlo permutation statistical methods to generate probability values and measures of effect size for a variety of measures of association. Association is broadly defined to include measures of correlation for two interval-level variables, measures of association for two nominal-level variables or two ordinal-level variables, and measures of agreement for two nominal-level or two ordinal-level variables. Additionally, measures of association for mixtures of the three levels of measurement are considered: nominal-ordinal, nominal-interval, and ordinal-interval measures. Numerous comparisons of permutation and classical statistical methods are presented. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This book takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field. This topic is relatively new in that it took modern computing power to make permutation methods available to those working in mainstream research. Written for a statistically informed audience, it is particularly useful for teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as psychology, medical research, epidemiology, public health, and biology. It can also serve as a textbook in graduate courses in subjects like statistics, psychology, and biology.
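To illustrate the basic idea, here is a minimal Monte Carlo permutation test for a correlation coefficient in R, with made-up data (a sketch of the general technique, not code from the monograph):

    set.seed(1)
    x <- rnorm(30)
    y <- 0.5 * x + rnorm(30)          # two assumed interval-level variables

    observed   <- cor(x, y)
    perm_stats <- replicate(9999, cor(x, sample(y)))  # shuffling y breaks any association

    # Two-sided probability value: share of permutations at least as extreme
    (sum(abs(perm_stats) >= abs(observed)) + 1) / (9999 + 1)

No normality assumption is needed; the reference distribution comes entirely from the data at hand.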
Compositional Data Analysis in Practice is a user-oriented practical guide to the analysis of data with the property of a constant sum, for example percentages adding up to 100%. Compositional data can give misleading results if regular statistical methods are applied, and are best analysed by first transforming them to logarithms of ratios. This book explains how this transformation affects the analysis, results and interpretation of this very special type of data. All aspects of compositional data analysis are considered: visualization, modelling, dimension reduction, clustering and variable selection, with many examples in the fields of food science, archaeology, sociology and biochemistry, and a final chapter containing a complete case study using fatty acid compositions in ecology. The applicability of these methods extends to other fields such as linguistics, geochemistry, marketing, economics and finance. R software: data files and R scripts from the book are available at https://github.com/michaelgreenacre/CODAinPractice. The R package easyCODA, which accompanies this book, is available on CRAN (version 0.25 or higher is recommended). The latest version of the package will always be available on R-Forge and can be installed from R with: install.packages("easyCODA", repos="http://R-Forge.R-project.org").
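As a minimal illustration of the log-ratio idea, here is the centred log-ratio (CLR) transform in base R for a made-up three-part composition (the easyCODA package provides such transformations; this sketch deliberately uses base R only):

    x   <- c(60, 30, 10)              # percentages summing to 100
    clr <- log(x) - mean(log(x))      # log of each part relative to the geometric mean
    clr                               # transformed values now live in unconstrained space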
"R for Business Analytics" looks at some of the most common tasks performed by business analysts and helps the user navigate the wealth of information in R and its 4000 packages. With this information the reader can select the packages that can help process the analytical tasks with minimum effort and maximum usefulness. The use of Graphical User Interfaces (GUI) is emphasized in this book to further cut downand bend the famous learning curve in learning R. This book is aimed to help you kick-start with analytics including chapters on data visualization, code examples on web analytics and social media analytics, clustering, regression models, text mining, data mining models and forecasting. The book tries to expose the reader to a breadth of business analytics topics without burying the user in needless depth. The included references and links allow the reader to pursue business analytics topics. This book is aimed at business analysts with basic programming skills for using R for Business Analytics. Note the scope of the book is neither statistical theory nor graduate level research for statistics, but rather it is for business analytics practitioners. Business analytics (BA) refers to the field ofexploration and investigation of data generated by businesses. Business Intelligence (BI) is the seamless dissemination of information through the organization, which primarily involves business metrics both past and current for the use of decision support in businesses. Data Mining (DM) is the process of discovering new patterns from large data using algorithms and statistical methods. To differentiate between the three, BI is mostly current reports, BA is models to predict and strategizeand DM matches patterns in big data. The R statistical software is the fastest growing analytics platform in the world, and is established in both academia and corporations for robustness, reliability and accuracy. The book utilizes Albert Einstein s famous remarks on making things as simple as possible, but no simpler. This book will blow the last remaining doubts in your mind about using R in your business environment. Even non-technical users will enjoy the easy-to-use examples. The interviews with creators and corporate users of R make the book very readable. The author firmly believes Isaac Asimovwas a better writer in spreading science than any textbook or journal author."
This book puts military doctrine into a wider perspective, drawing on military history, philosophy, and political science. Military doctrines are institutional beliefs about what works in war; given the trauma of 9/11 and the ensuing 'War on Terror', serious divergences over what the message of the 'new' military doctrine ought to be were expected around the world. However, such questions are often drowned in ferocious meta-doctrinal disagreements. What is a doctrine, after all? This book provides a theoretical understanding of such questions. Divided into three parts, the author investigates the historical roots of military doctrine and explores its growth and expansion until the present day, and goes on to analyse the main characteristics of a military doctrine. Using a multidisciplinary approach, the book concludes that doctrine can be utilized in three key ways: as a tool of command, as a tool of change, and as a tool of education. This book will be of much interest to students of military studies, civil-military relations, strategic studies, and war studies, as well as to students in professional military education.
Inspired by the author's need for practical guidance in the processes of data analysis, "A Practical Guide to Scientific Data Analysis" has been written as a statistical companion for the working scientist. This handbook of data analysis with worked examples focuses on the application of mathematical and statistical techniques and the interpretation of their results. Covering the most common statistical methods for examining and exploring relationships in data, the text includes extensive examples from a variety of scientific disciplines. The chapters are organised logically, from planning an experiment, through examining and displaying the data, to constructing quantitative models. Each chapter is intended to stand alone so that casual users can refer to the section that is most appropriate to their problem. Written by a highly qualified and internationally respected author, this text:
- Presents statistics for the non-statistician
- Explains a variety of methods to extract information from data
- Describes the application of statistical methods to the design of "performance chemicals"
- Emphasises the application of statistical techniques and the interpretation of their results
It is of practical use to chemists, biochemists, pharmacists, biologists and researchers from many other scientific disciplines in both industry and academia.
Our consumption of raw materials and energy has reached unprecedented levels and continues to increase at a steady rate due to the economic emergence of many countries and the development of new technologies. Metal and cement usage has doubled since the beginning of the 21st century, and production between now and 2050 will be equivalent to everything produced since the beginning of humanity. It is in this context that the transition to low-carbon and renewable energies is taking place, which involves profound changes to the existing global energy system. This book addresses these different aspects and attempts to estimate first-order requirements for cement, steel, copper, aluminum and energy for different power generation technologies, and for three types of energy scenarios. Some dynamic modeling approaches are proposed to assess the needs and likely evolution of primary production and recycling. The link between production and primary reserves, recycling and stocks of end-of-life products, and production costs, incomes and prices is discussed using a prey-predator dynamic.
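For readers unfamiliar with the prey-predator dynamic mentioned above, the classic Lotka-Volterra equations give its simplest form. A minimal R sketch using the deSolve package, with parameter values chosen purely for illustration (the book's own resource-economy models are more elaborate):

    library(deSolve)

    # Classic prey-predator system: prey grows, predators consume prey and decay
    lv <- function(t, state, parms) {
      with(as.list(c(state, parms)), {
        dPrey <- growth * Prey - attack * Prey * Pred
        dPred <- conv * Prey * Pred - death * Pred
        list(c(dPrey, dPred))
      })
    }

    out <- ode(y = c(Prey = 10, Pred = 2), times = seq(0, 50, by = 0.1),
               func = lv, parms = c(growth = 0.6, attack = 0.05,
                                    conv = 0.02, death = 0.4))
    matplot(out[, "time"], out[, c("Prey", "Pred")], type = "l",
            xlab = "time", ylab = "level")  # the familiar coupled oscillations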
Computer science graduates often find software engineering knowledge and skills are more in demand after they join the industry. However, given the lecture-based curriculum present in academia, it is not an easy undertaking to deliver industry-standard knowledge and skills in a software engineering classroom, as such lectures hardly engage or convince students. Overcoming Challenges in Software Engineering Education: Delivering Non-Technical Knowledge and Skills combines recent advances and best practices to improve the curriculum of software engineering education. This book is an essential reference source for researchers and educators seeking to bridge the gap between industry expectations and what academia can provide in software engineering education.