Algorithmic Trading and Quantitative Strategies provides an in-depth overview of this growing field with a unique mix of quantitative rigor and a practitioner's hands-on experience. The focus on empirical modeling and practical know-how makes this book a valuable resource for students and professionals. The book starts with the often overlooked context of why and how we trade, via a detailed introduction to market structure and quantitative microstructure models. The authors then present the necessary quantitative toolbox, including the more advanced machine learning models needed to operate successfully in the field. They next discuss quantitative trading, alpha generation, active portfolio management, and more recent topics such as news and sentiment analytics. The last main topic, execution algorithms, is covered in detail, with emphasis on the state of the field and critical topics including the elusive concept of market impact. The book concludes with a discussion of the technology infrastructure necessary to implement algorithmic strategies in large-scale production settings. A GitHub repository includes data sets and explanatory/exercise Jupyter notebooks; the exercises involve supplying the code needed to solve each analysis problem.
Business analytics has grown to be a key topic in business curricula, and there is a need for stronger quantitative skills and understanding of fundamental concepts. This book is intended to present key concepts related to quantitative analysis in business. It is targeted to business students, undergraduate and graduate, taking an introductory core course. Topics covered include knowledge management, visualization, sampling and hypothesis testing, regression (simple, multiple, and logistic), as well as optimization modeling. It concludes with a brief overview of data mining. Concepts are demonstrated with worked examples.
A comprehensive and up-to-date introduction to the mathematics that all economics students need to know. Probability theory is the quantitative language used to handle uncertainty and is the foundation of modern statistics. Probability and Statistics for Economists provides graduate and PhD students with an essential introduction to mathematical probability and statistical theory, which are the basis of the methods used in econometrics. This incisive textbook teaches fundamental concepts, emphasizes modern, real-world applications, and gives students an intuitive understanding of the mathematics that every economist needs to know.
- Covers probability and statistics with mathematical rigor while emphasizing intuitive explanations that are accessible to economics students of all backgrounds
- Discusses random variables, parametric and multivariate distributions, sampling, the law of large numbers, central limit theory, maximum likelihood estimation, numerical optimization, hypothesis testing, and more
- Features hundreds of exercises that enable students to learn by doing
- Includes an in-depth appendix summarizing important mathematical results as well as a wealth of real-world examples
- Can serve as a core textbook for a first-semester PhD course in econometrics and as a companion book to Bruce E. Hansen's Econometrics
- Also an invaluable reference for researchers and practitioners
Computational finance is increasingly important in the financial industry, as a necessary instrument for applying theoretical models to real-world challenges. Indeed, many models used in practice involve complex mathematical problems for which an exact or closed-form solution is not available. Consequently, we need to rely on computational techniques and specific numerical algorithms. This book combines theoretical concepts with practical implementation. Furthermore, the numerical solution of models is exploited both to enhance the understanding of some mathematical and statistical notions and to acquire sound programming skills in MATLAB (R), which are also useful for several other programming languages. The material assumes the reader has only a limited knowledge of mathematics, probability, and statistics. Hence, the book contains a short description of the fundamental tools needed to address the two main fields of quantitative finance: portfolio selection and derivatives pricing. Both fields are developed here, with a particular emphasis on portfolio selection, where the author includes an overview of recent approaches. The book gradually takes the reader from a basic to a medium level of expertise, using examples and exercises to simplify the understanding of complex models in finance and giving readers the ability to place financial models in a computational setting. The book is ideal for courses focusing on quantitative finance, asset management, mathematical methods for economics and finance, investment banking, and corporate finance.
Quantile regression constitutes an ensemble of statistical techniques intended to estimate and draw inferences about conditional quantile functions. Median regression, as introduced in the 18th century by Boscovich and Laplace, is a special case. In contrast to conventional mean regression, which minimizes sums of squared residuals, median regression minimizes sums of absolute residuals; quantile regression simply replaces symmetric absolute loss by asymmetric linear loss. Since its introduction in the 1970s by Koenker and Bassett, quantile regression has been gradually extended to a wide variety of data-analytic settings, including time series, survival analysis, and longitudinal data. By focusing attention on local slices of the conditional distribution of response variables, it is capable of providing a more complete, more nuanced view of heterogeneous covariate effects. Applications of quantile regression can now be found throughout the sciences, including astrophysics, chemistry, ecology, economics, finance, genomics, medicine, and meteorology. Software for quantile regression is now widely available in all the major statistical computing environments. The objective of this volume is to provide a comprehensive review of recent developments in quantile regression methodology, illustrating its applicability in a wide range of scientific settings. The intended audience of the volume is researchers and graduate students across a diverse set of disciplines.
Packed with insights, Lorenzo Bergomi's Stochastic Volatility Modeling explains how stochastic volatility is used to address issues arising in the modeling of derivatives, including: Which trading issues do we tackle with stochastic volatility? How do we design models and assess their relevance? How do we tell which models are usable and when does calibration make sense? This manual covers the practicalities of modeling local volatility, stochastic volatility, local-stochastic volatility, and multi-asset stochastic volatility. In the course of this exploration, the author, Risk's 2009 Quant of the Year and a leading contributor to volatility modeling, draws on his experience as head quant in Societe Generale's equity derivatives division. Clear and straightforward, the book takes readers through various modeling challenges, all originating in actual trading/hedging issues, with a focus on the practical consequences of modeling choices.
For one-semester courses in Introduction to Business Statistics. The gold standard in learning Microsoft Excel for business statistics. Statistics for Managers Using Microsoft (R) Excel (R), 9th Edition, Global Edition helps students develop the knowledge of Excel needed in future careers. The authors present statistics in the context of specific business fields, and now include a full chapter on business analytics. Guided by principles set forth by the ASA's Guidelines for Assessment and Instruction in Statistics Education (GAISE) reports and the authors' diverse teaching experiences, the text continues to innovate and improve the way this course is taught to students. Current data throughout gives students valuable practice analysing the types of data they will see in their professions, and the authors' friendly writing style includes tips and learning aids throughout.
Accessible to a general audience with some background in statistics and computing; many examples and extended case studies; illustrations using R and RStudio; a true blend of statistics and computer science, not just a grab bag of topics from each.
How could Finance benefit from AI? How can AI techniques provide an edge? Moving well beyond simply speeding up computation, this book tackles AI for Finance from a range of perspectives including business, technology, research, and students. Covering aspects like algorithms, big data, and machine learning, this book answers these and many other questions.
This comprehensive book is an introduction to multilevel Bayesian models in R using brms and the Stan programming language. Featuring a series of fully worked analyses of repeated-measures data, the book places its focus on active learning through the analysis of the progressively more complicated models presented throughout. The authors offer an introduction to statistics entirely focused on repeated-measures data, beginning with very simple two-group comparisons and ending with multinomial regression models with many 'random effects'. Across 13 well-structured chapters, readers are provided with all the code necessary to run the analyses and make the plots in the book, as well as useful examples of how to interpret and write up their own analyses. The book provides an accessible introduction for readers in any field, with any level of statistical background. Senior undergraduate students, graduate students, and experienced researchers looking to 'translate' their skills with more traditional models to a Bayesian framework will benefit greatly from the lessons in this text.
Business Statistics with Solutions in R covers a wide range of applications of statistics in solving business-related problems. It introduces readers to quantitative tools that are necessary for daily business needs and helps them make evidence-based decisions. The book provides insight into how to summarize data, analyze it, and draw meaningful inferences that can be used to improve decisions, and it enables readers to develop computational skills and problem-solving competence using the open-source language R. Mustapha Abiodun Akinkunmi uses real-life business data for illustrative examples while discussing the basic statistical measures, probability, regression analysis, significance testing, correlation, the Poisson distribution, process control for manufacturing, time series analysis, forecasting techniques, exponential smoothing, univariate and multivariate analysis (including ANOVA and MANOVA), and more, making this a valuable reference for policy makers, professionals, academics, and individuals interested in business statistics, applied statistics, statistical computing, finance, management, and econometrics.
A unique and comprehensive source of information, this book is the only international publication providing economists, planners, policymakers and business people with worldwide statistics on current performance and trends in the manufacturing sector. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial development and performance. It provides data which can be used to analyse patterns of growth and related long term trends, structural change, and industrial performance in individual industries. Statistics on employment patterns, wages, consumption and gross output and other key indicators are also presented.
For one-semester business statistics courses. A focus on using statistical methods to analyse and interpret results to make data-informed business decisions Statistics is essential for all business majors, and Business Statistics: A First Course helps students see the role statistics will play in their own careers by providing examples drawn from all functional areas of business. Guided by the principles set forth by major statistical and business science associations (ASA and DSI), plus the authors' diverse experiences, the 8th Edition, Global Edition, continues to innovate and improve the way this course is taught to all students. With new examples, case scenarios, and problems, the text continues its tradition of focusing on the interpretation of results, evaluation of assumptions, and discussion of next steps that lead to data-informed decision making. The authors feel that this approach, rather than a focus on manual calculations, better serves students in their future careers. This brief offering, created to fit the needs of a one-semester course, is part of the established Berenson/Levine series.
"The level is appropriate for an upper-level undergraduate or graduate-level statistics major. Sampling: Design and Analysis (SDA) will also benefit a non-statistics major with a desire to understand the concepts of sampling from a finite population. A student with patience to delve into the rigor of survey statistics will gain even more from the content that SDA offers. The updates to SDA have potential to enrich traditional survey sampling classes at both the undergraduate and graduate levels. The new discussions of low response rates, non-probability surveys, and internet as a data collection mode hold particular value, as these statistical issues have become increasingly important in survey practice in recent years… I would eagerly adopt the new edition of SDA as the required textbook." (Emily Berg, Iowa State University)
A unique and comprehensive source of information, the International Yearbook of Industrial Statistics is the only international publication providing economists, planners, policy makers and business people with worldwide statistics on current performance and trends in the manufacturing sector. This is the third issue of the annual publication which succeeds the UNIDO's Handbook of Industrial Statistics and, at the same time, replaces the United Nation's Industrial Statistics Yearbook, volume I (General Industrial Statistics). Covering more than 120 countries/areas, the 1997 edition of the Yearbook contains data which is internationally comparable and detailed in industrial classification. Information has been collected directly from national statistical sources and supplemented with estimates by UNIDO. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial performance. It provides data which can be used to analyse patterns of growth, structural change and industrial performance in individual industries. Data on employment trends, wages and other key indicators are also presented. Finally, the detailed information presented here enables the user to study different aspects of individual manufacturing industries.
Now in its fifth edition, this book offers a detailed yet concise introduction to the growing field of statistical applications in finance. The reader will learn the basic methods for evaluating option contracts, analyzing financial time series, selecting portfolios and managing risks based on realistic assumptions about market behavior. The focus is both on the fundamentals of mathematical finance and financial time series analysis, and on applications to specific problems concerning financial markets, thus making the book the ideal basis for lectures, seminars and crash courses on the topic. All numerical calculations are transparent and reproducible using quantlets. For this new edition the book has been updated and extensively revised and now includes several new aspects such as neural networks, deep learning, and crypto-currencies. Both R and Matlab code, together with the data, can be downloaded from the book's product page and the Quantlet platform. The Quantlet platform (quantlet.de, quantlet.com, quantlet.org) is an integrated QuantNet environment consisting of different types of statistics-related documents and program codes. Its goal is to promote reproducibility and offer a platform for sharing validated knowledge native to the social web. QuantNet and the corresponding Data-Driven Documents-based visualization allow readers to reproduce the tables, pictures and calculations inside this Springer book. "This book provides an excellent introduction to the tools from probability and statistics necessary to analyze financial data. Clearly written and accessible, it will be very useful to students and practitioners alike." Yacine Ait-Sahalia, Otto Hack 1903 Professor of Finance and Economics, Princeton University
Highlights the pitfalls of data analysis and emphasizes the importance of using the appropriate metrics before making key decisions. Big data is often touted as the key to understanding almost every aspect of contemporary life. This critique of "information hubris" shows that even more important than data is finding the right metrics to evaluate it. The author, an expert in environmental design and city planning, examines the many ways in which we measure ourselves and our world. He dissects the metrics we apply to health, worker productivity, our children's education, the quality of our environment, the effectiveness of leaders, the dynamics of the economy, and the overall well-being of the planet. Among the areas where the wrong metrics have led to poor outcomes, he cites the fee-for-service model of health care, corporate cultures that emphasize time spent on the job while overlooking key productivity measures, overreliance on standardized testing in education to the detriment of authentic learning, and a blinkered focus on carbon emissions, which underestimates the impact of industrial damage to our natural world. He also examines various communities and systems that have achieved better outcomes by adjusting the ways in which they measure data. The best results are attained by those that have learned not only what to measure and how to measure it, but what it all means. By highlighting the pitfalls inherent in data analysis, this illuminating book reminds us that not everything that can be counted really counts.
Across the globe, every nation wrestles with how it will pay for, provide, regulate, and administer its healthcare system. Health economics is the field of economics that deals with every one of those issues, along with the difficult problem of allocating resources where the allocation can literally mean life or death, alleviating suffering or not. A key issue that is always mentioned, but little acted on, is the role that preventive measures play in the battle against disease and in using limited healthcare resources more efficaciously. This book brings together leading researchers in the healthcare economics field, presenting new research on some of these key issues, such as the impact of obesity on health, children's healthcare policies, education and health, and many more.
Statistics for Business is meant as a textbook for students in business, computer science, bioengineering, environmental technology, and mathematics. In recent years, business statistics has been used widely for decision making in business endeavours. The book emphasizes statistical applications, statistical model building, and manual solution methods. Special features: the text is prepared with a "self-taught" approach in mind; for most of the methods, the required algorithm is clearly explained using flow-charting methodology; more than 200 solved problems are provided; and more than 175 end-of-chapter exercises with answers are included. This allows teachers ample flexibility in adapting the textbook to their individual class plans. The textbook is meant for beginners and advanced learners alike, as a text in Statistics for Business or Applied Statistics for undergraduate and graduate students.
This volume presents classical results of the theory of enlargement of filtration. The focus is on the behavior of martingales with respect to the enlarged filtration and related objects. The study is conducted in various contexts, including immersion, progressive enlargement with a random time, and initial enlargement with a random variable. The aim of this book is to collect the main mathematical results (with proofs) previously spread among numerous papers, a great part of which is available only in French. Many examples and applications to finance, in particular to credit risk modelling and the study of asymmetric information, are provided to illustrate the theory. A detailed summary of further connections and applications is given in bibliographic notes, which enable readers to deepen their study of the topic. This book fills a gap in the literature and serves as a guide for graduate students and researchers interested in the role of information in financial mathematics and in econometric science. A basic knowledge of the general theory of stochastic processes is assumed as a prerequisite.
This book offers an up-to-date, comprehensive coverage of stochastic dominance and its related concepts in a unified framework. A method for ordering probability distributions, stochastic dominance has grown in importance recently as a way to make comparisons in welfare economics, inequality studies, health economics, insurance, wages, and trade patterns. Whang pays particular attention to inferential methods and applications, citing and summarizing various empirical studies in order to relate the econometric methods to real applications, and using computer codes to enable the practical implementation of these methods. Intuitive explanations throughout the book ensure that readers understand the basic technical tools of stochastic dominance.
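As a toy illustration of what "ordering probability distributions" means in practice, first-order stochastic dominance between two samples can be checked by comparing their empirical CDFs. This is a minimal sketch, not code from Whang's book; the function `fosd` and the sample values are invented for the example.

```python
import numpy as np

def fosd(a, b, grid=None):
    """Check empirically whether sample a first-order stochastically dominates
    sample b, i.e. F_a(x) <= F_b(x) at every point of the evaluation grid
    (a puts no more mass below any threshold than b does)."""
    if grid is None:
        grid = np.union1d(a, b)  # evaluate at all observed values
    # searchsorted(..., side="right") counts observations <= x: the empirical CDF
    F_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    F_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return bool(np.all(F_a <= F_b))

higher = np.array([2.0, 3.0, 4.0, 5.0])  # e.g. an income distribution shifted up
lower = np.array([1.0, 2.0, 3.0, 4.0])
print(fosd(higher, lower))  # True: the shifted-up sample dominates
print(fosd(lower, higher))  # False: dominance is not symmetric
```

Inference in practice is harder than this pointwise comparison suggests, since sampling noise can produce spurious CDF crossings; that is precisely the inferential problem the book's econometric methods address.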
Four years ago "Research in Experimental Economics" published experimental evidence on fundraising and charitable contributions. This volume returns to the intrigue with philanthropy. Employing a mixture of laboratory and field experiments as well as theoretical research we present this new volume, "Charity with Choice." New waves of experiments are taking advantage of well calibrated environments established by past efforts to add new features to experiments such as endogeneity and self-selection. Adventurous new research programs are popping up and some of them are captured here in this volume. Among the major themes in which the tools of choice, endogeneity, and self-selection are employed are: What increases or decreases charitable activity? and How do organizational and managerial issues affect the performance of non-profit organizations?
This Volume of "Advances in Econometrics" contains a selection of papers presented initially at the 7th Annual Advances in Econometrics Conference held on the LSU campus in Baton Rouge, Louisiana during November 14-16, 2008. The theme of the conference was 'Nonparametric Econometric Methods', and the papers selected for inclusion in this Volume span a range of nonparametric techniques including kernel smoothing, empirical copulas, series estimators, and smoothing splines, along with a variety of semiparametric methods. The papers in this Volume cover topics of interest to those who wish to familiarize themselves with current nonparametric methodology. Many papers also identify areas deserving of future attention. There are survey papers devoted to recent developments in nonparametric finance, constrained nonparametric regression, semiparametric/nonparametric environmental econometrics, and nonparametric models with non-stationary data. There are theoretical papers dealing with novel approaches for partial identification of the distribution of treatment effects, fixed effects semiparametric panel data models, functional coefficient models with time series data, exponential series estimators of empirical copulas, estimation of multivariate CDFs, and bias-reduction methods for density estimation. There are also a number of applications that analyze returns to education, the evolution of income and life expectancy, the role of governance in growth, farm production, city size and unemployment rates, derivative pricing, and environmental pollution and economic growth. In short, this Volume contains a range of theoretical developments, surveys, and applications that would be of interest to those who wish to keep abreast of some of the most important current developments in the field of nonparametric estimation.
Developed from the author's course on Monte Carlo simulation at Brown University, Monte Carlo Simulation with Applications to Finance provides a self-contained introduction to Monte Carlo methods in financial engineering. It is suitable for advanced undergraduate and graduate students taking a one-semester course or for practitioners in the financial industry. The author first presents the necessary mathematical tools for simulation, arbitrage-free option pricing, and the basic implementation of Monte Carlo schemes. He then describes variance reduction techniques, including control variates, stratification, conditioning, importance sampling, and cross-entropy. The text concludes with stochastic calculus and the simulation of diffusion processes. Only requiring some familiarity with probability and statistics, the book keeps much of the mathematics at an informal level and avoids technical measure-theoretic jargon to provide a practical understanding of the basics. It includes a large number of examples as well as MATLAB (R) coding exercises that are designed in a progressive manner so that no prior experience with MATLAB is needed.
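Of the variance reduction techniques listed in the blurb, control variates are perhaps the simplest to demonstrate. The sketch below is illustrative only, written in Python rather than the book's MATLAB, and the target integral is an assumed toy example: estimate E[e^U] for U ~ Uniform(0,1), whose exact value is e - 1, using U itself (known mean 1/2) as the control.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.uniform(size=n)

# Plain Monte Carlo estimate of E[e^U].
y = np.exp(u)
plain = y.mean()

# Control variate: U has known mean 1/2 and is highly correlated with e^U.
c = u
b = np.cov(y, c)[0, 1] / np.var(c)   # estimated variance-minimizing coefficient
adjusted = y - b * (c - 0.5)          # same mean as y, much smaller variance
cv = adjusted.mean()

exact = np.e - 1
print(plain, cv, exact)  # both close to e - 1; cv has far lower variance
print(np.var(adjusted) / np.var(y))  # variance ratio well below 1
```

The adjusted estimator is unbiased for any coefficient because the correction term has mean zero; choosing b as cov(Y, C)/var(C) minimizes the variance, and the stronger the correlation between the payoff and the control, the larger the reduction.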
Statistical Programming in SAS, Second Edition provides a foundation for programming to implement statistical solutions using SAS, a system that has been used to solve data-analytic problems for more than 40 years. The author includes motivating examples to inspire readers to generate programming solutions. Upper-level undergraduates, beginning graduate students, and professionals involved in generating programming solutions for data-analytic problems will benefit from this book. The ideal background for a reader is some experience with regression modeling and introductory experience with computer programming. The coverage of statistical programming in the second edition includes:
- Getting data into the SAS system, engineering new features, and formatting variables
- Writing readable and well-documented code
- Structuring, implementing, and debugging programs
- Creating solutions to novel problems
- Combining data sources, extracting parts of data sets, and reshaping data sets as needed for other analyses
- Generating general solutions using macros
- Customizing output
- Producing insight-inspiring data visualizations
- Parsing, processing, and analyzing text
- Programming solutions using matrices and connecting SAS with R
- Covering topics that are part of both base and certification exams