Reproducible Finance with R: Code Flows and Shiny Apps for Portfolio Analysis is a unique introduction to data science for investment management that explores the three major R/finance coding paradigms, emphasizes data visualization, and explains how to build a cohesive suite of functioning Shiny applications. The full source code, asset price data and live Shiny applications are available at reproduciblefinance.com. The ideal reader works in finance or wants to work in finance and has a desire to learn R code and Shiny through simple yet practical real-world examples. The book begins with the first step in data science: importing and wrangling data, which in the investment context means importing asset prices, converting to returns, and constructing a portfolio. The next section covers risk and tackles descriptive statistics such as standard deviation, skewness, kurtosis, and their rolling histories. The third section focuses on portfolio theory, analyzing the Sharpe Ratio, CAPM, and Fama-French models. The book concludes with applications for finding individual asset contribution to risk and for running Monte Carlo simulations. For each of these tasks, the three major coding paradigms are explored and the work is wrapped into interactive Shiny dashboards.
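As a taste of that first step, here is a minimal base-R sketch (made-up prices and weights, not the book's data or code flows): convert prices to log returns and form weighted portfolio returns.

```r
# Convert asset prices to log returns and build portfolio returns (base R).
prices <- cbind(
  SPY = c(100.0, 101.2, 103.1, 102.4),  # hypothetical adjusted closes
  AGG = c(50.0, 50.2, 50.1, 50.4)
)
returns <- diff(log(prices))           # one log-return series per asset
w <- c(0.6, 0.4)                       # portfolio weights
portfolio <- as.vector(returns %*% w)  # weighted portfolio return series
sd(portfolio)                          # first risk statistic: std. deviation
```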
This edited volume on the latest advances in data science covers a wide range of topics in the context of data analysis and classification. In particular, it includes contributions on classification methods for high-dimensional data, clustering methods, multivariate statistical methods, and various applications. The book gathers a selection of peer-reviewed contributions presented at the Fifteenth Conference of the International Federation of Classification Societies (IFCS2015), which was hosted by the Alma Mater Studiorum, University of Bologna, from July 5 to 8, 2015.
When the pressure is on to resolve an elusive software or hardware glitch, what's needed is a cool head courtesy of a set of rules guaranteed to work on any system, in any circumstance. Written in a frank but engaging style, this book provides simple, foolproof principles guaranteed to help find any bug quickly. Recognized tech expert and author David Agans changes the way you think about debugging, making those pesky problems suddenly much easier to find and fix. Agans identifies nine simple, practical rules that are applicable to any software application or hardware system, which can help detect any bug, no matter how tricky or obscure. Illustrating the rules with real-life bug-detection war stories, Debugging shows you how to:
- Understand the system: how perceiving the "roadmap" can hasten your journey
- Quit thinking and look: when hands-on investigation can't be avoided
- Isolate critical factors: why changing one element at a time can be an essential tool
- Keep an audit trail: how keeping a record of the debugging process can win the day
Whether the system or program you're working on has been designed wrong, built wrong, or used wrong, Debugging helps you think correctly about bugs, so the problems virtually reveal themselves.
This book offers an original and broad exploration of the fundamental methods in Clustering and Combinatorial Data Analysis, presenting new formulations and ideas within this very active field. With extensive introductions, formal and mathematical developments and real case studies, this book provides readers with a deeper understanding of the mutual relationships between these methods, which are clearly expressed with respect to three facets: logical, combinatorial and statistical. Using relational mathematical representation, all types of data structures can be handled in precise and unified ways, which the author highlights in three stages:
- Clustering a set of descriptive attributes
- Clustering a set of objects or a set of object categories
- Establishing correspondence between these two dual clusterings
Tools for interpreting the reasons for a given cluster or clustering are also included. Foundations and Methods in Combinatorial and Statistical Data Analysis and Clustering will be a valuable resource for students and researchers who are interested in the areas of Data Analysis, Clustering, Data Mining and Knowledge Discovery.
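For the flavor of the second stage (clustering a set of objects), a minimal sketch in base R on toy data that is not from the book:

```r
# Agglomerative hierarchical clustering of toy objects (base R).
set.seed(1)
x <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),  # one group of objects
           matrix(rnorm(20, mean = 4), ncol = 2))  # a second group
hc <- hclust(dist(x), method = "average")          # cluster on pairwise distances
cutree(hc, k = 2)                                  # two-cluster partition
```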
This tutorial teaches you how to use the statistical programming language R to develop a business case simulation and analysis. It presents a methodology for conducting business case analysis that minimizes decision delay by focusing stakeholders on what matters most and suggests pathways for minimizing the risk in strategic and capital allocation decisions. Business case analysis, often conducted in spreadsheets, exposes decision makers to additional risks that arise just from the use of the spreadsheet environment. R has become one of the most widely used tools for reproducible quantitative analysis, and analysts fluent in this language are in high demand. The R language, traditionally used for statistical analysis, provides a more explicit, flexible, and extensible environment than spreadsheets for conducting business case analysis. The main tutorial follows the case in which a chemical manufacturing company considers constructing a chemical reactor and production facility to bring a new compound to market. There are numerous uncertainties and risks involved, including the possibility that a competitor brings a similar product online. The company must determine the value of making the decision to move forward and where they might prioritize their attention to make a more informed and robust decision. While the example used is a chemical company, the analysis structure it presents can be applied to just about any business decision, from IT projects to new product development to commercial real estate. The supporting tutorials include the perspective of the founder of a professional service firm who wants to grow his business and a member of a strategic planning group in a biomedical device company who wants to know how much to budget in order to refine the quality of information about critical uncertainties that might affect the value of a chosen product development pathway.
What You'll Learn:
- Set up a business case abstraction in an influence diagram to communicate the essence of the problem to other stakeholders
- Model the inherent uncertainties in the problem with Monte Carlo simulation using the R language
- Communicate the results graphically
- Draw appropriate insights from the results
- Develop creative decision strategies for thorough opportunity cost analysis
- Calculate the value of information on critical uncertainties between competing decision strategies to set the budget for deeper data analysis
- Construct appropriate information to satisfy the parameters for the Monte Carlo simulation when little or no empirical data are available
Who This Book Is For: Financial analysts, data practitioners, and risk/business professionals; also appropriate for graduate-level finance, business, or data science students.
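The core technique is easy to sketch; the following base-R Monte Carlo uses invented figures, not the book's chemical-reactor case:

```r
# Monte Carlo simulation of a simple business case (invented figures).
set.seed(42)
n       <- 10000
capex   <- runif(n, 80, 120)             # uncertain build cost
revenue <- rnorm(n, mean = 30, sd = 8)   # uncertain annual net revenue
rate    <- 0.10                          # discount rate
disc    <- sum(1 / (1 + rate)^(1:5))     # 5-year annuity discount factor
npv     <- -capex + revenue * disc       # simulated net present values
mean(npv > 0)                            # probability the project pays off
quantile(npv, c(0.1, 0.5, 0.9))          # spread of outcomes
hist(npv, main = "Simulated NPV", xlab = "NPV")
```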
This volume collects the latest methodological and applied contributions on functional, high-dimensional and other complex data, related statistical models and tools, as well as on operator-based statistics. It contains selected and refereed contributions presented at the Fourth International Workshop on Functional and Operatorial Statistics (IWFOS 2017), held in A Coruña, Spain, from 15 to 17 June 2017. The series of IWFOS workshops was initiated by the Working Group on Functional and Operatorial Statistics at the University of Toulouse in 2008. Since then, many of the major advances in functional statistics and related fields have been periodically presented and discussed at the IWFOS workshops.
The book provides a description of the process of health economic evaluation and modelling for cost-effectiveness analysis, particularly from the perspective of a Bayesian statistical approach. Some relevant theory and introductory concepts are presented using practical examples and two running case studies. The book also describes in detail how to perform health economic evaluations using the R package BCEA (Bayesian Cost-Effectiveness Analysis). BCEA can be used to post-process the results of a Bayesian cost-effectiveness model and perform advanced analyses producing standardised and highly customisable outputs. It presents all the features of the package, including its many functions and their practical application, as well as its user-friendly web interface. The book is a valuable resource for statisticians and practitioners working in the field of health economics wanting to simplify and standardise their workflow, for example in the preparation of dossiers in support of marketing authorisation, or academic and scientific publications.
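A hedged sketch of the workflow the blurb describes follows; the effectiveness and cost draws are invented, and the argument names of bcea() have changed across package versions, so check the BCEA documentation before relying on this exact shape.

```r
# Post-processing a (here simulated) cost-effectiveness model with BCEA.
library(BCEA)
e <- cbind(rnorm(1000, 0.60, 0.05), rnorm(1000, 0.65, 0.05))  # effectiveness draws
c <- cbind(rnorm(1000, 500, 50),    rnorm(1000, 700, 60))     # cost draws
m <- bcea(e, c, ref = 2, interventions = c("standard", "new"))
summary(m)       # incremental cost-effectiveness summary
ceplane.plot(m)  # cost-effectiveness plane
ceac.plot(m)     # cost-effectiveness acceptability curve
```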
This book offers a concise and gentle introduction to finite element programming in Python based on the popular FEniCS software library. Using a series of examples, including the Poisson equation, the equations of linear elasticity, the incompressible Navier-Stokes equations, and systems of nonlinear advection-diffusion-reaction equations, it guides readers through the essential steps to quickly solving a PDE in FEniCS, such as how to define a finite element variational problem, how to set boundary conditions, how to solve linear and nonlinear systems, and how to visualize solutions and structure finite element Python programs. This book is open access under a CC BY license.
This guide for practicing statisticians, data scientists, and R users and programmers will teach the essentials of preprocessing data, leveraging the R programming language to easily and quickly turn noisy data into usable pieces of information. Data wrangling, which is also commonly referred to as data munging, transformation, manipulation, janitor work, etc., can be a painstakingly laborious process. Roughly 80% of data analysis is spent on cleaning and preparing data; however, since it is a prerequisite to the rest of the data analysis workflow (visualization, analysis, reporting), it is essential that one become fluent and efficient in data wrangling techniques. This book will guide the user through the data wrangling process via a step-by-step tutorial approach and provide a solid foundation for working with data in R. The author's goal is to teach the user how to easily wrangle data in order to spend more time on understanding the content of the data. By the end of the book, the user will have learned:
- How to work with different types of data such as numerics, characters, regular expressions, factors, and dates
- The difference between different data structures and how to create, add additional components to, and subset each data structure
- How to acquire and parse data from locations previously inaccessible
- How to develop functions and use loop control structures to reduce code redundancy
- How to use pipe operators to simplify code and make it more readable
- How to reshape the layout of data and manipulate, summarize, and join data sets
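The flavor of those techniques in a few lines of base R (invented data, not the book's examples): parse dates, clean a character field with a regular expression, and summarize by group.

```r
# Small wrangling sketch: type conversion, regex cleaning, group summary.
raw <- data.frame(
  when   = c("2017-01-03", "2017-01-04", "2017-02-01"),
  amount = c(" 1,200", "950 ", "1,075"),
  group  = c("a", "a", "b"),
  stringsAsFactors = FALSE
)
raw$when   <- as.Date(raw$when)                         # character -> Date
raw$amount <- as.numeric(gsub("[ ,]", "", raw$amount))  # strip blanks and commas
aggregate(amount ~ group, data = raw, FUN = sum)        # summarize by group
```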
This book constitutes the refereed proceedings of the 19th International Conference on Distributed Computer and Communication Networks, DCCN 2016, held in Moscow, Russia, in November 2016. The 50 revised full papers and 6 revised short papers presented were carefully reviewed and selected from 141 submissions. The papers cover the following topics: computer and communication networks architecture optimization; control in computer and communication networks; performance and QoS/QoE evaluation in wireless networks; analytical modeling and simulation of next-generation communications systems; queuing theory and reliability theory applications in computer networks; wireless 4G/5G networks, cm- and mm-wave radio technologies; RFID technology and its application in intellectual transportation networks; internet of things, wearables, and applications of distributed information systems; probabilistic and statistical models in information systems; mathematical modeling of high-tech systems; mathematical modeling and control problems; distributed and cloud computing systems, big data analytics.
This edited volume lays the groundwork for Social Data Science, addressing epistemological issues, methods, technologies, software and applications of data science in the social sciences. It presents data science techniques for the collection, analysis and use of both online and offline new (big) data in social research and related applications. Among others, the individual contributions cover topics like social media, learning analytics, clustering, statistical literacy, recurrence analysis and network analysis. Data science is a multidisciplinary approach based mainly on the methods of statistics and computer science, and its aim is to develop appropriate methodologies for forecasting and decision-making in response to an increasingly complex reality often characterized by large amounts of data (big data) of various types (numeric, ordinal and nominal variables, symbolic data, texts, images, data streams, multi-way data, social networks etc.) and from diverse sources. This book presents selected papers from the international conference on Data Science & Social Research, held in Naples, Italy in February 2016, and will appeal to researchers in the social sciences working in academia as well as in statistical institutes and offices.
As Microsoft's Dynamics 365 gains ground and businesses adopt this tool, the demand for internal resources who understand how to support and maintain it increases. Administering, Configuring, and Maintaining Microsoft Dynamics 365 in the Cloud addresses the needs of those who support Dynamics, discussing numerous real-world scenarios that businesses must deal with when implementing Dynamics 365. Scenarios are presented with simple, fully functional walkthroughs so that non-developers can follow the instructions and learn how to address any issues that need to be resolved. The concepts discussed in this book include how to:
- Quickly set up and configure users, teams, business units, and security
- Navigate through the system and present data in easy-to-access dashboards and SSRS reports
- Import and export data, and migrate data between systems
- Create customized Business Process Flows, Workflows, and Business Rules
- Customize your Dynamics 365 instance with new entities, fields, and JavaScript
- Deploy and manage plugins and solutions
Originally created for agile software development, scrum provides project managers with the flexibility needed to meet ever-changing consumer demands. Presenting a modified version of the agile software development framework, Scrum Project Management introduces scrum basics and explains how to apply this adaptive technique to effectively manage a wide range of programs and complex projects. The book provides proven planning methods for controlling project scope and ensuring your project stays on schedule. It includes scrum tracking methods to help your team maintain a focus on improving throughput and streamlining communications. It also demonstrates how to:
- Combine traditional project management methods with scrum
- Adapt the familiar work breakdown structure to create scrum backlogs and sprints
- Use a scrum of scrums to manage programs
- Apply earned value management, critical path, and PERT in the context of scrum
Having successfully deployed and implemented scrum across multiple companies and departments, the authors provide valuable insight into how they achieved their past successes and how they overcame the trials involved with the deployment of a scrum environment. Throughout the text they discuss improvisation, creative problem solving, and emergent phenomena, detailing the methods needed to ensure your team achieves project success.
Following the author's fundamentals volume and advanced-techniques volume, this volume focuses on R applications in quantitative investment. Quantitative investment has been a hot area for some years, with a growing number of startups working on it alongside other internet communities and business models. R is widely used in this area and can be a very powerful tool. The author introduces R applications with cases from his own startup, covering topics such as portfolio optimization and risk management.
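As one illustration of the portfolio-optimization topic (a standard textbook formula, not the author's code), base R can compute unconstrained minimum-variance weights directly from the inverse covariance matrix:

```r
# Minimum-variance portfolio weights from toy return data (base R).
set.seed(7)
R <- matrix(rnorm(250 * 3, 0.0004, 0.01), ncol = 3)  # toy daily returns, 3 assets
Sigma <- cov(R)                                      # sample covariance matrix
w <- solve(Sigma, rep(1, 3))                         # Sigma^{-1} * vector of ones
w / sum(w)                                           # normalize: weights sum to 1
```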
This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, Minitab, and SAS. Every chapter in this textbook engages the reader with data on individual stocks, stock indices, options, and futures. One studies and uses statistics to learn how to study, analyze, and understand a data set of particular interest. Some of the more popular statistical programs that have been developed to use statistical and computational methods to analyze data sets are SAS, SPSS, and Minitab. Of those, we look at Minitab and SAS in this textbook. One of the main reasons to use Minitab is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel to do statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities extend to about 90 percent of the statistical analysis done in the business world. We demonstrate much of our statistical analysis using Excel and double-check the analysis and outcomes using Minitab and SAS, which are also helpful for some analytical methods not possible or practical to do in Excel.
Intuitive Probability and Random Processes using MATLAB® is an introduction to probability and random processes that merges theory with practice. Based on the author's belief that only "hands-on" experience with the material can promote intuitive understanding, the approach is to motivate the need for theory using MATLAB examples, followed by theory and analysis, and finally descriptions of "real-world" examples to acquaint the reader with a wide variety of applications. The latter is intended to answer the usual question "Why do we have to study this?" Other salient features are:
- heavy reliance on computer simulation for illustration and student exercises
- the incorporation of MATLAB programs and code segments
- discussion of discrete random variables followed by continuous random variables to minimize confusion
- summary sections at the beginning of each chapter
- in-line equation explanations
- warnings on common errors and pitfalls
- over 750 problems designed to help the reader assimilate and extend the concepts
Intuitive Probability and Random Processes using MATLAB® is intended for undergraduate and first-year graduate students in engineering. The practicing engineer as well as others having the appropriate mathematical background will also benefit from this book. About the Author: Steven M. Kay is a Professor of Electrical Engineering at the University of Rhode Island and a leading expert in signal processing. He has received the Education Award "for outstanding contributions in education and in writing scholarly books and texts..." from the IEEE Signal Processing Society and has been listed as among the 250 most cited researchers in the world in engineering.
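The book's examples are in MATLAB; purely as a taste of its simulation-first approach, the same idea translates to a few lines of R: estimate a probability by simulation and compare it with the exact answer.

```r
# Estimate P(X + Y > 1) for independent U(0,1) X and Y by simulation;
# by symmetry the exact value is 1/2.
set.seed(123)
n <- 1e6
x <- runif(n); y <- runif(n)
mean(x + y > 1)  # Monte Carlo estimate, close to 0.5
```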
This book discusses the problem of model choice when the statistical models are separate, also called nonnested. Chapter 1 provides an introduction, motivating examples and a general overview of the problem. Chapter 2 presents the classical or frequentist approach to the problem as well as several alternative procedures and their properties. Chapter 3 explores the Bayesian approach, the limitations of the classical Bayes factors and the proposed alternative Bayes factors to overcome these limitations. It also discusses a significance Bayesian procedure. Lastly, Chapter 4 examines the pure likelihood approach. Various real-data examples and computer simulations are provided throughout the text.
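To make the notion concrete (an illustration only, using an information-criterion comparison rather than the book's specific procedures): two separate families for the same response, neither a special case of the other, compared in base R.

```r
# Two non-nested (separate) models for the same response (toy data).
set.seed(5)
x <- runif(100, 1, 10)
y <- 2 * log(x) + rnorm(100, sd = 0.3)  # data generated from a log model
m_log <- lm(y ~ log(x))                 # candidate family 1
m_lin <- lm(y ~ x)                      # candidate family 2 (separate)
AIC(m_log, m_lin)                       # lower AIC favors the log model
```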
Familiarize yourself with MATLAB using this concise, practical tutorial that is focused on writing code to learn concepts. Starting from the basics, this book covers array-based computing, plotting and working with files, numerical computation formalism, and the primary concepts of approximations. Introduction to MATLAB is useful for industry engineers, researchers, and students who are looking for open-source solutions for numerical computation. In this book you will learn by doing, avoiding technical jargon, which makes the concepts easy to learn. First you'll see how to run basic calculations, absorbing technical complexities incrementally as you progress toward advanced topics. Throughout, the language is kept simple to ensure that readers at all levels can grasp the concepts.
What You'll Learn:
- Apply sample code to your engineering or science problems
- Work with MATLAB arrays, functions, and loops
- Use MATLAB's plotting functions for data visualization
- Solve numerical computing and computational engineering problems with a MATLAB case study
Who This Book Is For: Engineers, scientists, researchers, and students who are new to MATLAB. Some prior programming experience would be helpful but not required.
A unique feature of this book is its low threshold: it is textually simple and at the same time full of self-assessment opportunities. Other unique features are the succinctness of the chapters at 3 to 6 pages each, the inclusion of complete command texts for the statistical methodologies reviewed, and the omission of dull scientific texts that impose an unnecessary burden on busy and jaded professionals. For readers requesting more background, theoretical, and mathematical information, a notes section with references is included in each chapter. The first edition in 2010 was the first publication of a complete overview of SPSS methodologies for medical and health statistics. Well over 100,000 copies of various chapters were sold within the first year of publication. There were four reasons for a rewrite. First, many important comments from readers urged a rewrite. Second, SPSS has produced many updates and upgrades with relevant novel and improved methodologies. Third, the authors felt that the chapter texts needed some improvements for better readability: chapters are now classified according to the outcome data, helpful for choosing your analysis rapidly, and a schematic overview of data and explanatory graphs have been added. Fourth, current data are increasingly complex, and many important methods for analysis were missing in the first edition. For that latter purpose some more advanced methods seemed unavoidable, like hierarchical loglinear methods, gamma and Tweedie regressions, and random intercept analyses. In order for the contents of the book to remain covered by the title, the authors renamed the book SPSS for Starters and 2nd Levelers. Special care was, nonetheless, taken to keep things as simple as possible, and simple menu commands are given. The arithmetic is still of a no-more-than high-school level. Step-by-step analyses of different statistical methodologies are given with the help of 60 SPSS data files available through the internet. Because of the lack of time of this busy group of people, the authors have made every effort to produce a text as succinct as possible.
The objective of Kai Zhang's research is to assess existing process monitoring and fault detection (PM-FD) methods. His aim is to provide suggestions and guidance for choosing appropriate PM-FD methods, as the performance assessment of PM-FD methods has become an area of interest in both academia and industry. The author first compares basic FD statistics, and then assesses different PM-FD methods for monitoring the key performance indicators of static processes, steady-state dynamic processes, and general dynamic processes including transient states. He validates the theoretical developments using both benchmark and real industrial processes.
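One of the basic FD statistics such comparisons start from is Hotelling's T²; a minimal base-R sketch on toy data (not from the book) flags samples whose statistic exceeds an approximate chi-squared control limit.

```r
# Hotelling's T^2 fault detection on toy data (base R).
set.seed(9)
train <- matrix(rnorm(200 * 3), ncol = 3)             # normal operating data
mu <- colMeans(train); S <- cov(train)                # fitted mean and covariance
test <- rbind(matrix(rnorm(30), ncol = 3),            # in-control samples
              matrix(rnorm(30, mean = 3), ncol = 3))  # faulty samples
t2 <- mahalanobis(test, center = mu, cov = S)         # T^2 statistic per sample
which(t2 > qchisq(0.99, df = 3))                      # samples beyond the 99% limit
```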
This book provides new insights on the study of global environmental changes using the ecoinformatics tools and the adaptive-evolutionary technology of geoinformation monitoring. The main advantage of this book is that it gathers and presents extensive interdisciplinary expertise in the parameterization of global biogeochemical cycles and other environmental processes in the context of globalization and sustainable development. In this regard, the crucial global problems concerning the dynamics of the nature-society system are considered and the key problems of ensuring the system's sustainable development are studied. A new approach to the numerical modeling of the nature-society system is proposed and results are provided on modeling the dynamics of the system's characteristics with regard to scenarios of anthropogenic impacts on biogeochemical cycles, land ecosystems and oceans. The main purpose of this book is to develop a universal guide to information-modeling technologies for assessing the function of environmental subsystems under various climatic and anthropogenic conditions.
This book introduces advanced undergraduates, graduate students, and practitioners to statistical methods for ranking data. An important branch of nonparametric statistics is oriented towards the analysis of ranking data. Rank correlation is defined through the notion of distance functions, and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, the EM algorithm, and factor analysis. The book deals with statistical methods used for analyzing such data and provides a novel and unifying approach for hypothesis testing. The techniques described in the book are illustrated with examples, and the statistical software is provided on the authors' website.
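For instance, the most familiar distance-based rank correlation, Kendall's tau, is available directly in base R (toy rankings, not the book's data):

```r
# Kendall's tau between two judges' rankings of six items.
judge1 <- c(1, 2, 3, 4, 5, 6)
judge2 <- c(2, 1, 4, 3, 6, 5)            # three adjacent swaps
cor(judge1, judge2, method = "kendall")  # tau = (12 - 3) / 15 = 0.6
```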
Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work.
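The direct/iterative contrast at the heart of the book can be sketched in a few lines of R (a toy diagonally dominant system, not an example from the text):

```r
# Direct LU-based solve versus Jacobi iteration for A x = b.
A <- matrix(c(4, 1, 0,
              1, 4, 1,
              0, 1, 4), nrow = 3, byrow = TRUE)  # diagonally dominant
b <- c(1, 2, 3)
x_direct <- solve(A, b)                     # direct method
D <- diag(diag(A)); R <- A - D              # Jacobi splitting A = D + R
x <- rep(0, 3)
for (k in 1:25) x <- solve(D, b - R %*% x)  # Jacobi sweeps
max(abs(x - x_direct))                      # iteration error, tiny after 25 sweeps
```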
The R Companion to Elementary Applied Statistics includes traditional applications covered in elementary statistics courses as well as some additional methods that address questions that might arise during or after the application of commonly used methods. Beginning with basic tasks and computations with R, readers are then guided through ways to bring data into R, manipulate the data as needed, perform common statistical computations and elementary exploratory data analysis tasks, prepare customized graphics, and take advantage of R for a wide range of methods that find use in many elementary applications of statistics.
Features:
- Requires no familiarity with R or programming to begin using this book
- Can be used as a resource for a project-based elementary applied statistics course, or for researchers and professionals who wish to delve more deeply into R
- Contains an extensive array of examples that illustrate ideas on various ways to use pre-packaged routines, as well as on developing individualized code
- Presents quite a few methods that may be considered non-traditional, or advanced
- Includes accompanying, carefully documented script files that contain code for all examples presented, and more
R is a powerful and free product that is gaining popularity across the scientific community in both the professional and academic arenas. Statistical methods discussed in this book are used to introduce the fundamentals of using R functions and provide ideas for developing further skills in writing R code. These ideas are illustrated through an extensive collection of examples.
About the Author: Christopher Hay-Jahans received his Doctor of Arts in mathematics from Idaho State University in 1999. After spending three years at the University of South Dakota, he moved to Juneau, Alaska, in 2002, where he has taught a wide range of undergraduate courses at the University of Alaska Southeast.
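The flavor of the early chapters (toy data, not the book's): enter a small data set, compute summaries, and draw a base-graphics plot.

```r
# Basic tasks: summaries and a histogram in base R.
scores <- c(67, 72, 85, 90, 58, 77, 81, 69)
summary(scores)  # five-number summary plus the mean
sd(scores)       # sample standard deviation
hist(scores, main = "Exam scores", xlab = "score")
```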
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance, as such choices allow experimenters to extract maximum information about the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc., and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimation of both the ANOVA and the regression parameters is provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for the construction of optimum designs using Hadamard matrices, the Kronecker product, the Rao-Khatri product, and mixed orthogonal arrays, to name a few.
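As a small taste of the combinatorial machinery named above (a standard construction, not taken from the monograph), Sylvester's method builds Hadamard matrices with the Kronecker product:

```r
# Sylvester construction of a Hadamard matrix via the Kronecker product.
H2 <- matrix(c(1,  1,
               1, -1), nrow = 2, byrow = TRUE)
H4 <- H2 %x% H2  # Kronecker product: a 4 x 4 Hadamard matrix
H4 %*% t(H4)     # equals 4 * diag(4), confirming orthogonal rows
```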
You may like...
Extremisms In Africa
Alain Tschudin, Stephen Buchanan-Clarke, …
Paperback
Handbook of Differential Equations…
C.M. Dafermos, Milan Pokorny
Hardcover
The Trade Impact of European Union…
Luca De Benedictis, Luca Salvatici
Hardcover
R2,892
Discovery Miles 28 920