Economic statistics
The chapters in this book describe various aspects of the application of statistical methods in finance. It will interest and attract statisticians to this area, illustrate some of the many ways that statistical tools are used in financial applications, and give some indication of problems that are still outstanding. Statisticians will be stimulated to learn more about the kinds of models and techniques outlined in the book; both the domain of finance and the science of statistics will benefit from increased awareness among statisticians of the problems, models, and techniques used in finance. For this reason, extensive references are given. The level of technical detail varies between the chapters: some present broad non-technical overviews of an area, while others describe the mathematical niceties. This illustrates the range of possibilities the area offers statisticians, while giving a flavour of the different kinds of mathematical and statistical skills required. Whether you favour data analysis or mathematical manipulation, if you are a statistician there are problems in finance suited to your skills.
The book focuses on problem solving for practitioners and model building for academics in multivariate settings. It helps readers understand issues such as characterizing variability, extracting patterns, building relationships, and making objective decisions. A large number of multivariate statistical models are covered. Readers will learn how a practical problem can be converted into a statistical problem, and how the statistical solution can be interpreted as a practical solution. Key features: links the data generation process with statistical distributions in the multivariate domain; provides a step-by-step procedure for estimating the parameters of the developed models; provides a blueprint for data-driven decision making; and includes practical examples and case studies relevant to the intended audience. The book will help everyone involved in data-driven problem solving, modeling, and decision making.
This compendium contains and explains essential statistical formulas within an economic context. A broad range of aids and supportive examples will help readers to understand the formulas and their practical applications. This statistical formulary is presented in a practice-oriented, clear, and understandable manner, as it is needed for meaningful and relevant application in global business, as well as in the academic setting and economic practice. The topics presented include, but are not limited to: statistical signs and symbols, descriptive statistics, empirical distributions, ratios and index figures, correlation analysis, regression analysis, inferential statistics, probability calculation, probability distributions, theoretical distributions, statistical estimation methods, confidence intervals, statistical testing methods, the Peren-Clement index, and the usual statistical tables. Given its scope, the book offers an indispensable reference guide and is a must-read for undergraduate and graduate students, as well as managers, scholars, and lecturers in business, politics, and economics.
Volume 40 in the Advances in Econometrics series features twenty-three chapters that are split thematically into two parts. Part A presents novel contributions to the analysis of time series and panel data with applications in macroeconomics, finance, cognitive science and psychology, neuroscience, and labor economics. Part B examines innovations in stochastic frontier analysis, nonparametric and semiparametric modeling and estimation, A/B experiments, big-data analysis, and quantile regression. Individual chapters, written by both distinguished researchers and promising young scholars, cover many important topics in statistical and econometric theory and practice. Papers primarily, though not exclusively, adopt Bayesian methods for estimation and inference, although researchers of all persuasions should find considerable interest in the chapters contained in this work. The volume was prepared to honor the career and research contributions of Professor Dale J. Poirier. For researchers in econometrics, this volume includes the most up-to-date research across a wide range of topics.
The book provides extensive literature reviews and applications of various tests to cover all aspects of research methodology, and includes a range of examination questions. Strong pedagogy is supported by regular features such as concept checks, text overviews, key terms, review questions, exercises, and references. Though the book is primarily addressed to students, it will be equally useful to researchers and entrepreneurs. More than other research textbooks, it addresses students' need to comprehend all aspects of the research process, including the research process itself, clarification of the research problem, ethical issues, survey research, and research report preparation and presentation.
Develop the analytical skills that are in high demand in businesses today with Camm/Cochran/Fry/Ohlmann's best-selling BUSINESS ANALYTICS, 5E. You master the full range of analytics as you strengthen descriptive, predictive and prescriptive analytic skills. Real examples and memorable visuals clearly illustrate data and results. Step-by-step instructions guide you through using Excel, Tableau, R or the Python-based Orange data mining software to perform advanced analytics. Practical, relevant problems at all levels of difficulty let you apply what you've learned. Updates throughout this edition address topics beyond traditional quantitative concepts, such as data wrangling, data visualization and data mining, which are increasingly important in today's business environment. MindTap and WebAssign online learning platforms are also available with an interactive eBook, algorithmic practice problems and Exploring Analytics visualizations to strengthen your understanding of key concepts.
Time Series: A First Course with Bootstrap Starter provides an introductory course on time series analysis that satisfies the triptych of (i) mathematical completeness, (ii) computational illustration and implementation, and (iii) conciseness and accessibility to upper-level undergraduate and M.S. students. Basic theoretical results are presented in a mathematically convincing way, and the methods of data analysis are developed through examples and exercises worked in R. A student with a basic course in mathematical statistics will learn both how to analyze time series and how to interpret the results. The book provides the foundation of time series methods, including linear filters and a geometric approach to prediction. The important paradigm of ARMA models is studied in depth, as are frequency domain methods. Entropy and other information-theoretic notions are introduced, with applications to time series modeling. The second half of the book focuses on statistical inference, the fitting of time series models, and computational facets of forecasting. Many time series of interest are nonlinear, in which case classical inference methods can fail, but bootstrap methods may come to the rescue. Distinctive features of the book are the emphasis on geometric notions and the frequency domain, the discussion of entropy maximization, and a thorough treatment of recent computer-intensive methods for time series such as subsampling and the bootstrap. There are more than 600 exercises, half of which involve R coding and/or data analysis. Supplements include a website with 12 key data sets and all R code for the book's examples, as well as the solutions to exercises.
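As a rough, generic illustration of the ARMA paradigm mentioned above (not code from the book, which works its examples in R), the following Python sketch simulates an ARMA(2,1) series and fits a model to it with statsmodels; the orders and coefficient values are arbitrary assumptions chosen for demonstration.

```python
# Minimal sketch (illustrative, not from the book): simulate an ARMA(2,1)
# series and recover its parameters. Orders and coefficients are arbitrary.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(42)

# ArmaProcess takes the AR/MA lag polynomials, leading 1 included and the
# AR coefficients negated: here phi1 = 0.75, phi2 = -0.25, theta1 = 0.4.
ar_poly = np.array([1, -0.75, 0.25])
ma_poly = np.array([1, 0.4])
series = ArmaProcess(ar_poly, ma_poly).generate_sample(nsample=500)

# Fit an ARMA(2,1) model (an ARIMA with d = 0) and inspect the estimates.
fit = ARIMA(series, order=(2, 0, 1)).fit()
print(fit.params)
```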
Virtually any random process developing chronologically can be viewed as a time series. In economics, closing prices of stocks, the cost of money, the jobless rate, and retail sales are just a few of many examples. Developed from course notes and extensively classroom-tested, Applied Time Series Analysis with R, Second Edition includes examples across a variety of fields, develops theory, and provides an R-based software package to aid in addressing time series problems in a broad spectrum of fields. The material is organized in an optimal format for graduate students in statistics, as well as in the natural and social sciences, to learn to use and understand the tools of applied time series analysis. Features: gives readers the ability to solve significant real-world problems; addresses many types of nonstationary time series and cutting-edge methodologies; promotes understanding of the data and associated models rather than treating them as the output of a "black box"; provides the R package tswge, available on CRAN, which contains functions and over 100 real and simulated data sets to accompany the book, with extensive help on the use of tswge functions given in appendices and on an associated website; and offers over 150 exercises and extensive support for instructors. The second edition includes additional real-data examples, uses R-based code that helps students easily analyze data, generate realizations from models, and explore the associated characteristics, and adds discussion of new advances in the analysis of long-memory data and data with time-varying frequencies (TVF).
This textbook provides future data analysts with the tools, methods, and skills needed to answer data-focused, real-life questions; to carry out data analysis; and to visualize and interpret results to support better decisions in business, economics, and public policy. Data wrangling and exploration, regression analysis, machine learning, and causal analysis are comprehensively covered, as well as when, why, and how the methods work, and how they relate to each other. As the most effective way to communicate data analysis, running case studies play a central role in this textbook. Each case starts with an industry-relevant question and answers it by using real-world data and applying the tools and methods covered in the textbook. Learning is then consolidated by 360 practice questions and 120 data exercises. Extensive online resources, including raw and cleaned data and codes for all analysis in Stata, R, and Python, can be found at www.gabors-data-analysis.com.
A unique and comprehensive source of information, this book is the only international publication providing economists, planners, policymakers and business people with worldwide statistics on current performance and trends in the manufacturing sector. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial development and performance. It provides data which can be used to analyse patterns of growth and related long term trends, structural change and industrial performance in individual industries. Statistics on employment patterns, wages, consumption and gross output and other key indicators are also presented.
The first book for a popular audience on the transformative, democratising technology of 'DeFi'. After over a decade of Bitcoin, which has now moved beyond lore and hype into an increasingly robust star in the firmament of global assets, a new and more important question has arisen. What happens beyond Bitcoin? The answer is decentralised finance - 'DeFi'. Tech and finance experts Steven Boykey Sidley and Simon Dingle argue that DeFi - which enables all manner of financial transactions to take place directly, person to person, without the involvement of financial institutions - will redesign the cogs and wheels in the engines of trust, and make the remarkable rise of Bitcoin look quaint by comparison. It will disrupt and displace fine and respectable companies, if not entire industries. Sidley and Dingle explain how DeFi works, introduce the organisations and individuals that comprise the new industry, and identify the likely winners and losers in the coming revolution.
Quantile regression constitutes an ensemble of statistical techniques intended to estimate and draw inferences about conditional quantile functions. Median regression, as introduced in the 18th century by Boscovich and Laplace, is a special case. In contrast to conventional mean regression, which minimizes sums of squared residuals, median regression minimizes sums of absolute residuals; quantile regression simply replaces the symmetric absolute loss by an asymmetric linear loss. Since its introduction in the 1970s by Koenker and Bassett, quantile regression has been gradually extended to a wide variety of data-analytic settings including time series, survival analysis, and longitudinal data. By focusing attention on local slices of the conditional distribution of response variables, it is capable of providing a more complete, more nuanced view of heterogeneous covariate effects. Applications of quantile regression can now be found throughout the sciences, including astrophysics, chemistry, ecology, economics, finance, genomics, medicine, and meteorology. Software for quantile regression is now widely available in all the major statistical computing environments. The objective of this volume is to provide a comprehensive review of recent developments in quantile regression methodology, illustrating its applicability in a wide range of scientific settings. The intended audience is researchers and graduate students across a diverse set of disciplines.
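The loss-function contrast described in this blurb is easy to see in code. The sketch below (a generic illustration, not material from the volume) defines the asymmetric linear "check" loss and shows that minimizing it over a constant recovers the corresponding sample quantile; the simulated data and the choice tau = 0.9 are arbitrary assumptions.

```python
# A minimal sketch of the asymmetric linear ("check") loss behind quantile
# regression. Data and the quantile level tau are arbitrary assumptions.
import numpy as np

def check_loss(u, tau):
    """Asymmetric linear loss: tau*u for u >= 0, (tau - 1)*u for u < 0."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=1000)   # a skewed sample

# Minimizing the mean check loss over a constant gives the tau-th sample
# quantile; squared loss in the same experiment would give the sample mean.
grid = np.linspace(y.min(), y.max(), 2001)
tau = 0.9
objective = [np.mean(check_loss(y - c, tau)) for c in grid]
c_star = grid[int(np.argmin(objective))]

print("check-loss minimizer:", round(c_star, 3))
print("np.quantile(y, 0.9): ", round(float(np.quantile(y, 0.9)), 3))
```

Replacing the constant with a linear function of covariates and minimizing the same loss gives the quantile regression estimator, which is the sense in which the method generalizes median and mean regression.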
If you know a little bit about financial mathematics but don't yet know a lot about programming, then C++ for Financial Mathematics is for you. C++ is an essential skill for many jobs in quantitative finance, but learning it can be a daunting prospect. This book gathers together everything you need to know to price derivatives in C++ without unnecessary complexities or technicalities. It leads the reader step-by-step from programming novice to writing a sophisticated and flexible financial mathematics library. At every step, each new idea is motivated and illustrated with concrete financial examples. As employers understand, there is more to programming than knowing a computer language. As well as covering the core language features of C++, this book teaches the skills needed to write truly high quality software. These include topics such as unit tests, debugging, design patterns and data structures. The book teaches everything you need to know to solve realistic financial problems in C++. It can be used for self-study or as a textbook for an advanced undergraduate or master's level course.
Modern economies are full of uncertainty and risk. Economics studies resource allocation in an uncertain market environment. As a generally applicable quantitative tool for analyzing uncertain events, probability and statistics have long played an important role in economic research. Econometrics is the statistical analysis of economic and financial data. In the past four decades or so, economics has witnessed a so-called 'empirical revolution' in its research paradigm, with econometrics serving as the main methodology of empirical studies. It has become an indispensable part of training in modern economics, business, and management. This book develops a coherent set of econometric theory, methods, and tools for economic models. It is written as a textbook for graduate students in economics, business, management, statistics, applied mathematics, and related fields. It can also be used as a reference book on econometric theory by scholars interested in both theoretical and applied econometrics.
'Fascinating . . . timely' Daily Mail 'Refreshingly clear and engaging' Tim Harford 'Delightful . . . full of unique insights' Prof Sir David Spiegelhalter There's no getting away from statistics. We encounter them every day. We are all users of statistics whether we like it or not. Do missed appointments really cost the NHS £1bn per year? What's the difference between the mean gender pay gap and the median gender pay gap? How can we work out if a claim that we use 42 billion single-use plastic straws per year in the UK is accurate? What did the Vote Leave campaign's £350m bus really mean? How can we tell if the headline 'Public pensions cost you £4,000 a year' is correct? Does snow really cost the UK economy £1bn per day? But how do we distinguish statistical fact from fiction? What can we do to decide whether a number, claim or news story is accurate? Without an understanding of data, we cannot truly understand what is going on in the world around us. Written by Anthony Reuben, the BBC's first head of statistics, Statistical is an accessible and empowering guide to challenging the numbers all around us.
Computational finance is increasingly important in the financial industry, as a necessary instrument for applying theoretical models to real-world challenges. Indeed, many models used in practice involve complex mathematical problems for which an exact or closed-form solution is not available. Consequently, we need to rely on computational techniques and specific numerical algorithms. This book combines theoretical concepts with practical implementation. Furthermore, the numerical solution of models is exploited both to enhance the understanding of some mathematical and statistical notions and to acquire sound programming skills in MATLAB®, skills that are also useful when working with several other programming languages. The material assumes the reader has a relatively limited knowledge of mathematics, probability, and statistics. Hence, the book contains a short description of the fundamental tools needed to address the two main fields of quantitative finance: portfolio selection and derivatives pricing. Both fields are developed here, with a particular emphasis on portfolio selection, where the author includes an overview of recent approaches. The book gradually takes readers from a basic to an intermediate level of expertise, using examples and exercises to simplify the understanding of complex models in finance and giving them the ability to place financial models in a computational setting. The book is ideal for courses focusing on quantitative finance, asset management, mathematical methods for economics and finance, investment banking, and corporate finance.
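As a small taste of the kind of numerical technique this blurb alludes to (sketched here in Python rather than the book's MATLAB, and not taken from the text), the example below prices a European call option by Monte Carlo simulation under Black-Scholes dynamics, a case where a closed-form benchmark happens to exist for comparison; all parameter values are arbitrary assumptions.

```python
# Minimal Monte Carlo pricing sketch (illustrative only): price a European
# call under Black-Scholes dynamics and compare with the closed-form value.
# All parameters below are arbitrary assumptions.
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0
n_paths = 200_000
rng = np.random.default_rng(1)

# Simulate terminal stock prices under the risk-neutral measure.
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc_price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

# Closed-form Black-Scholes benchmark.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(f"Monte Carlo: {mc_price:.4f}   closed form: {bs_price:.4f}")
```

For payoffs or dynamics without a closed-form price, the simulation half of this sketch is what remains, which is precisely why numerical methods matter in practice.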
Now in its fifth edition, this book offers a detailed yet concise introduction to the growing field of statistical applications in finance. The reader will learn the basic methods for evaluating option contracts, analyzing financial time series, selecting portfolios and managing risks based on realistic assumptions about market behavior. The focus is both on the fundamentals of mathematical finance and financial time series analysis, and on applications to specific problems concerning financial markets, thus making the book the ideal basis for lectures, seminars and crash courses on the topic. All numerical calculations are transparent and reproducible using quantlets. For this new edition the book has been updated and extensively revised and now includes several new aspects such as neural networks, deep learning, and crypto-currencies. Both R and Matlab code, together with the data, can be downloaded from the book's product page and the Quantlet platform. The Quantlet platform (quantlet.de, quantlet.com, quantlet.org) is an integrated QuantNet environment consisting of different types of statistics-related documents and program codes. Its goal is to promote reproducibility and offer a platform for sharing validated knowledge native to the social web. QuantNet and the corresponding Data-Driven Documents-based visualization allow readers to reproduce the tables, pictures and calculations inside this Springer book. "This book provides an excellent introduction to the tools from probability and statistics necessary to analyze financial data. Clearly written and accessible, it will be very useful to students and practitioners alike." Yacine Ait-Sahalia, Otto Hack 1903 Professor of Finance and Economics, Princeton University
This book pulls together robust practices in Partial Least Squares Structural Equation Modeling (PLS-SEM) from other disciplines and shows how they can be used in the area of Banking and Finance. In terms of empirical analysis techniques, Banking and Finance is a conservative discipline. As such, this book will raise awareness of the potential of PLS-SEM for application in various contexts. PLS-SEM is a non-parametric approach designed to maximize explained variance in latent constructs. Latent constructs are directly unobservable phenomena such as customer service quality and managerial competence. Explained variance refers to the extent we can predict, say, customer service quality, by examining other theoretically related latent constructs such as conduct of staff and communication skills. Examples of latent constructs at the microeconomic level include customer service quality, managerial effectiveness, and perception of market leadership; macroeconomic-level latent constructs would be found in contagion of systemic risk from one financial sector to another, herd behavior among fund managers, risk tolerance in financial markets, and so on. Behavioral Finance is bound to provide a wealth of opportunities for applying PLS-SEM. The book is designed to expose robust processes in the application of PLS-SEM, including the use of various software packages and code, such as R. PLS-SEM is already a popular tool in marketing and management information systems used to explain latent constructs. Until now, PLS-SEM has not enjoyed wide acceptance in Banking and Finance. Based on recent research developments, this book represents the first collection of PLS-SEM applications in Banking and Finance. It will serve as a reference book for researchers keen on adopting PLS-SEM to explain latent constructs in Banking and Finance.
Volume 27 of the International Symposia in Economic Theory and Econometrics series collects a range of unique and diverse chapters, each investigating different spheres of development in emerging markets with a specific focus on significant engines of growth and advancement in the Asia-Pacific economies. Looking at the most sensitive issues behind economic growth in emerging markets, and particularly their long-term prospects, the chapters included in this volume explore the newest fields of research to understand the potential of these markets better. Including chapters from leading scholars worldwide, the volume provides comprehensive coverage of the key topics in fields spanning SMEs, terrorism, manufacturing waste reduction, financial literacy, female empowerment, leadership and corporate management, and the relationship between environmental, social, and governance factors and firm value. For students, researchers and practitioners, this volume offers a dynamic reference resource on emerging markets across a diverse range of topics.
A classic text known for its accuracy and statistical precision, Statistics for Business and Economics enables readers to conduct serious analysis of applied problems rather than running simple "canned" applications. This text is also at a mathematically higher level than most business statistics texts and provides readers with the knowledge they need to become stronger analysts for future managerial positions. The eighth edition of this book has been revised and updated to provide readers with improved problem contexts for learning how statistical methods can improve their analysis and understanding of business and economics.
Bayesian Statistical Methods provides data scientists with the foundational and computational tools needed to carry out a Bayesian analysis. This book focuses on Bayesian methods applied routinely in practice, including multiple linear regression, mixed effects models, and generalized linear models (GLMs). The authors include many examples with complete R code and comparisons with analogous frequentist procedures. In addition to the basic concepts of Bayesian inferential methods, the book covers many general topics: advice on selecting prior distributions; computational methods including Markov chain Monte Carlo (MCMC); model comparison and goodness-of-fit measures, including sensitivity to priors; and frequentist properties of Bayesian methods. Case studies covering advanced topics illustrate the flexibility of the Bayesian approach: semiparametric regression; handling of missing data using predictive distributions; priors for high-dimensional regression models; computational techniques for large datasets; and spatial data analysis. The advanced topics are presented with sufficient conceptual depth that the reader will be able to carry out such analyses and argue the relative merits of Bayesian and classical methods. A repository of R code, motivating data sets, and complete data analyses is available on the book's website. Brian J. Reich, Associate Professor of Statistics at North Carolina State University, is currently the editor-in-chief of the Journal of Agricultural, Biological, and Environmental Statistics and was awarded the LeRoy & Elva Martin Teaching Award. Sujit K. Ghosh, Professor of Statistics at North Carolina State University, has over 22 years of research and teaching experience in conducting Bayesian analyses, received the Cavell Brownie mentoring award, and served as the Deputy Director at the Statistical and Applied Mathematical Sciences Institute.
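To give a concrete flavour of the "Markov chain Monte Carlo (MCMC)" entry in the topic list above, here is a minimal random-walk Metropolis sampler for the mean of a normal model with a normal prior; this is a generic textbook-style illustration rather than code from the book, and the simulated data, prior, and proposal scale are arbitrary assumptions.

```python
# Minimal random-walk Metropolis sketch (generic illustration, not the
# book's code): sample the posterior of a normal mean with known variance
# and a normal prior. Data, prior, and proposal scale are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=50)   # observed data, sigma known = 1
mu0, tau0 = 0.0, 10.0                          # prior: mu ~ N(mu0, tau0^2)

def log_post(mu):
    log_lik = -0.5 * np.sum((y - mu) ** 2)        # normal likelihood, sigma = 1
    log_prior = -0.5 * ((mu - mu0) / tau0) ** 2   # normal prior
    return log_lik + log_prior

n_iter, step = 10_000, 0.5
draws = np.empty(n_iter)
mu = 0.0
for i in range(n_iter):
    prop = mu + step * rng.standard_normal()      # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                 # accept; otherwise keep current mu
    draws[i] = mu

burn = n_iter // 5
print("posterior mean (MCMC):", round(float(draws[burn:].mean()), 3))
print("sample mean:          ", round(float(y.mean()), 3))
```

With a conjugate prior like this one the posterior is available in closed form, so the sampler can be checked against the exact answer; MCMC earns its keep in models such as those listed above where no closed form exists.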
This textbook addresses postgraduate students in applied mathematics, probability, and statistics, as well as computer scientists, biologists, physicists and economists, who are seeking a rigorous introduction to applied stochastic processes. Pursuing a pedagogic approach, the content follows a path of increasing complexity, from the simplest random sequences to the advanced stochastic processes. Illustrations are provided from many applied fields, together with connections to ergodic theory, information theory, reliability and insurance. The main content is also complemented by a wealth of examples and exercises with solutions.
You may like...
The Leading Indicators - A Short History… by Zachary Karabell (Paperback)
Operations And Supply Chain Management by David Collier, James Evans (Hardcover)
Quantitative statistical techniques by Swanepoel Swanepoel, Vivier Vivier, … (Paperback)
Statistics for Business & Economics… by James McClave, P Benson, … (Paperback) R2,409
Statistics for Business and Economics… by Paul Newbold, William Carlson, … (Paperback) R2,509
Business Statistics, Global Edition by Norean Sharpe, Richard De Veaux, … (Paperback) R2,403