The purpose of this book is to introduce novice researchers to the tools of meta-analysis and meta-regression analysis and to summarize the state of the art for existing practitioners. Meta-regression analysis addresses the rising "Tower of Babel" that current economics and business research has become. Meta-analysis is the statistical analysis of previously published, or reported, research findings on a given hypothesis, empirical effect, phenomenon, or policy intervention. It is a systematic review of all the relevant scientific knowledge on a specific subject and is an essential part of the evidence-based practice movement in medicine, education and the social sciences. However, research in economics and business is often fundamentally different from what is found in the sciences and thereby requires different methods for its synthesis: meta-regression analysis. This book develops, summarizes, and applies these meta-analytic methods.
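As a concrete, minimal illustration of the kind of analysis the book covers, the sketch below runs a simple meta-regression in R on hypothetical effect sizes and standard errors; the inverse-variance-weighted regression of estimates on their standard errors is one common specification for probing publication bias, and this is not the book's own code or data.

```r
# Hypothetical meta-analysis data: one estimated effect and its
# standard error per primary study (illustrative only).
set.seed(1)
k   <- 40                           # number of primary studies
se  <- runif(k, 0.05, 0.50)         # reported standard errors
eff <- 0.2 + rnorm(k, sd = se)      # reported effects around a true effect of 0.2

# Simple meta-regression: effect size on its standard error,
# weighted by inverse variance (a common publication-bias specification).
mra <- lm(eff ~ se, weights = 1 / se^2)
summary(mra)
# Intercept: effect estimate corrected for selection toward significant results
# Slope on se: a test for funnel-plot asymmetry (publication selection)
```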
The state-space approach provides a formal framework in which any result or procedure developed for a basic model can be seamlessly applied to a standard formulation written in state-space form. Moreover, it can accommodate, with reasonable effort, nonstandard situations such as observation errors, aggregation constraints, or missing in-sample values. Exploring the advantages of this approach, State-Space Methods for Time Series Analysis: Theory, Applications and Software presents many computational procedures that can be applied to a previously specified linear model in state-space form. After discussing the formulation of the state-space model, the book illustrates the flexibility of the state-space representation and covers the main state estimation algorithms: filtering and smoothing. It then shows how to compute the Gaussian likelihood for unknown coefficients in the state-space matrices of a given model before introducing subspace methods and their application. It also discusses signal extraction, describes two algorithms to obtain the VARMAX matrices corresponding to any linear state-space model, and addresses several issues relating to the aggregation and disaggregation of time series. The book concludes with a cross-sectional extension to the classical state-space formulation in order to accommodate longitudinal or panel data. Missing data is a common occurrence here, and the book explains imputation procedures necessary to treat missingness in both exogenous and endogenous variables. Web resource: The authors' E4 MATLAB toolbox offers all the computational procedures, administrative and analytical functions, and related materials for time series analysis. This flexible, powerful, and free software tool enables readers to replicate the practical examples in the text and apply the procedures to their own work.
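The E4 toolbox itself is written in MATLAB; purely as an illustration of the filtering step described above, here is a minimal Kalman filter for a local-level model in R with hypothetical variances, not the book's code.

```r
# Kalman filter for a local-level model (illustrative sketch, not the E4 toolbox):
#   state:       alpha_t = alpha_{t-1} + eta_t,  eta_t ~ N(0, q)
#   observation: y_t     = alpha_t     + eps_t,  eps_t ~ N(0, h)
kalman_local_level <- function(y, q, h, a0 = 0, p0 = 1e7) {
  n <- length(y)
  a <- numeric(n)                        # filtered state estimates
  p <- numeric(n)                        # filtered state variances
  a_pred <- a0; p_pred <- p0
  loglik <- 0
  for (t in seq_len(n)) {
    f <- p_pred + h                      # prediction error variance
    v <- y[t] - a_pred                   # one-step prediction error
    k <- p_pred / f                      # Kalman gain
    a[t] <- a_pred + k * v               # filtered state
    p[t] <- p_pred * (1 - k)             # filtered variance
    loglik <- loglik - 0.5 * (log(2 * pi * f) + v^2 / f)
    a_pred <- a[t]                       # state prediction for t + 1
    p_pred <- p[t] + q
  }
  list(filtered = a, variance = p, loglik = loglik)
}

# Hypothetical series: a random walk observed with noise
set.seed(42)
alpha <- cumsum(rnorm(200, sd = 0.3))
y     <- alpha + rnorm(200, sd = 1)
fit   <- kalman_local_level(y, q = 0.09, h = 1)
fit$loglik   # Gaussian log-likelihood for these variance values
```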
The sixth edition of the Balance of Payments and International Investment Position Manual presents revised and updated standards for concepts, definitions, and classifications for international accounts statistics. These standards are used globally to compile comprehensive and comparable data, and this edition is the latest in a series that the IMF began in 1948. It is the result of widespread consultation and provides elaboration and clarification requested by users. In addition, it focuses on developments such as globalization, financial market innovation, and increasing interest in balance sheet analysis.
Ranking of Multivariate Populations: A Permutation Approach with Applications presents a novel permutation-based nonparametric approach for ranking several multivariate populations. Using data collected from both experimental and observational studies, it covers some of the most useful designs widely applied in research and industry investigations, such as multivariate analysis of variance (MANOVA) and multivariate randomized complete block (MRCB) designs. The first section of the book introduces the topic of ranking multivariate populations by presenting the main theoretical ideas and an in-depth literature review. The second section discusses a large number of real case studies from four specific research areas: new product development in industry, perceived quality of the indoor environment, customer satisfaction, and cytological and histological analysis by image processing. Web-based nonparametric combination global ranking software is also described. Designed for practitioners and postgraduate students in statistics and the applied sciences, this application-oriented book offers a practical guide to the reliable global ranking of multivariate items, such as products, processes, and services, in terms of the performance of all investigated products/prototypes.
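As a toy illustration of the permutation idea (much simpler than the nonparametric combination methodology the book develops), the R sketch below compares two hypothetical multivariate samples by permuting group labels and combining component-wise t statistics.

```r
# Two-sample multivariate permutation test (illustrative sketch only;
# the book's NPC methodology is far more general).
set.seed(7)
n1 <- 25; n2 <- 25; p <- 3
x <- matrix(rnorm(n1 * p), n1, p)                 # population 1 (hypothetical)
y <- matrix(rnorm(n2 * p, mean = 0.4), n2, p)     # population 2, shifted

dat   <- rbind(x, y)
group <- rep(1:2, c(n1, n2))

# Combined statistic: sum of squared component-wise t statistics
comb_stat <- function(d, g) {
  t_j <- sapply(seq_len(ncol(d)),
                function(j) t.test(d[g == 1, j], d[g == 2, j])$statistic)
  sum(t_j^2)
}

obs  <- comb_stat(dat, group)
B    <- 2000
perm <- replicate(B, comb_stat(dat, sample(group)))   # permute group labels
p_value <- (1 + sum(perm >= obs)) / (B + 1)
p_value
```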
Originally published in 2005, Weather Derivative Valuation covers all the meteorological, statistical, financial and mathematical issues that arise in the pricing and risk management of weather derivatives. There are chapters on meteorological data and data cleaning, the modelling and pricing of single weather derivatives, the modelling and valuation of portfolios, the use of weather and seasonal forecasts in the pricing of weather derivatives, arbitrage pricing for weather derivatives, risk management, and the modelling of temperature, wind and precipitation. Specific issues covered in detail include the analysis of uncertainty in weather derivative pricing, time-series modelling of daily temperatures, the creation and use of probabilistic meteorological forecasts and the derivation of the weather derivative version of the Black-Scholes equation of mathematical finance. Written by consultants who work within the weather derivative industry, this book is packed with practical information and theoretical insight into the world of weather derivative pricing.
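As a minimal taste of this material, the R sketch below prices a hypothetical heating-degree-day (HDD) call by simple burn analysis, averaging historical payoffs; the modelled-index and Black-Scholes-style approaches discussed in the book are considerably more sophisticated, and all figures here are made up.

```r
# Burn-analysis sketch for an HDD call option (hypothetical figures, not from the book).
set.seed(3)
hdd <- rnorm(20, mean = 1800, sd = 120)   # 20 years of seasonal HDD index values

strike <- 1850                            # contract strike (HDD)
tick   <- 20                              # payout per HDD above the strike
cap    <- 1e6                             # maximum payout

payoff <- pmin(pmax(hdd - strike, 0) * tick, cap)

fair_price <- mean(payoff)                # expected payoff under the historical "burn"
std_error  <- sd(payoff) / sqrt(length(payoff))
c(price = fair_price, se = std_error)
```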
A unique and comprehensive source of information, the International Yearbook of Industrial Statistics is the only international publication providing economists, planners, policy makers and business people with worldwide statistics on current performance and trends in the manufacturing sector. This is the first issue of the annual publication which succeeds UNIDO's Handbook of Industrial Statistics and, at the same time, replaces the United Nations' Industrial Statistics Yearbook, volume I (General Industrial Statistics). Covering more than 120 countries/areas, the new version contains data which is internationally comparable and much more detailed than that supplied in previous publications. Information has been collected directly from national statistical sources and supplemented with estimates by UNIDO. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial performance. It provides data which can be used to analyse patterns of growth, structural change and industrial performance in individual industries. Data on employment trends, wages and other key indicators are also presented. Finally, the detailed information presented here enables the user to study aspects of industry that could not be examined with the aggregate data previously available.
Model a Wide Range of Count Time Series
Handbook of Discrete-Valued Time Series presents state-of-the-art methods for modeling time series of counts and incorporates frequentist and Bayesian approaches for discrete-valued spatio-temporal data and multivariate data. While the book focuses on time series of counts, some of the techniques discussed can be applied to other types of discrete-valued time series, such as binary-valued or categorical time series.
Explore a Balanced Treatment of Frequentist and Bayesian Perspectives
Accessible to graduate-level students who have taken an elementary class in statistical time series analysis, the book begins with the history and current methods for modeling and analyzing univariate count series. It next discusses diagnostics and applications before proceeding to binary and categorical time series. The book then provides a guide to modern methods for discrete-valued spatio-temporal data, illustrating how far modern applications have evolved from their roots. The book ends with a focus on multivariate and long-memory count series.
Get Guidance from Masters in the Field
Written by a cohesive group of distinguished contributors, this handbook provides a unified account of the diverse techniques available for observation- and parameter-driven models. It covers likelihood and approximate likelihood methods, estimating equations, simulation methods, and a Bayesian approach for model fitting.
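As a small, self-contained illustration of an observation-driven count model (a crude stand-in for the richer models treated in the handbook), the R sketch below simulates a count series whose Poisson intensity depends on the previous count and then recovers the specification with a GLM.

```r
# Simulate a simple observation-driven count series and fit a Poisson GLM
# on the lagged count (illustrative sketch; the handbook covers far richer models).
set.seed(11)
n    <- 300
y    <- numeric(n)
y[1] <- 2
for (t in 2:n) {
  lambda_t <- exp(0.5 + 0.15 * log1p(y[t - 1]))  # intensity depends on previous count
  y[t]     <- rpois(1, lambda_t)
}

df  <- data.frame(y = y[-1], lag_y = y[-n])
fit <- glm(y ~ log1p(lag_y), family = poisson, data = df)
summary(fit)   # coefficients roughly recover the intensity specification above
```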
* Accessible to a general audience with some background in statistics and computing * Many examples and extended case studies * Illustrations using R and RStudio * A true blend of statistics and computer science, not just a grab bag of topics from each
How we pay is so fundamental that it underpins everything – from trade to taxation, stocks and savings to salaries, pensions and pocket money. Rich or poor, criminal, communist or capitalist, we all rely on the same payments system, day in, day out. It sits between us and not just economic meltdown, but a total breakdown in law and order. Why then do we know so little about how that system really works? Leibbrandt and de Terán shine a light on the hidden workings of the humble payment – and reveal both how our payment habits are determined by history and where we might go next. From national customs to warring nation states, geopolitics will shape the future of payments every bit as much as technology. Challenging our understanding about where financial power really lies, The Pay Off shows us that the most important thing about money is the way we move it.
* Starts from the basics, focusing less on proofs and the high-level math underlying regressions, and adopts an engaging tone to provide a text which is entirely accessible to students who don't have a stats background * New chapter on integrity and ethics in regression analysis * Each chapter offers boxed examples, stories, exercises and clear summaries, all of which are designed to support student learning * Optional appendix of statistical tools, providing a primer to readers who need it * Code in R and Stata, and data sets and exercises in Stata and CSV, to allow students to practice running their own regressions * Author-created videos on YouTube * PPT lecture slides and test bank for instructors
"Advances in Econometrics and Quantitative Economics" is a comprehensive guide to the statistical methods used in econometrics and quantitative economics. Bringing together contributions from those acknowledged to be amongst the world's leading econometricians and statisticians this volume covers topics such as: * Semiparametric and non-parametric interference. The book is dedicated to Professor C. R. Rao, whose unique contribution to the subject has influenced econometricians for many years.
* Explores the exciting and new topic of econophysics * A multidisciplinary approach that will be of interest to students and researchers from physics, engineering, mathematics, statistics, and other physical sciences * Useful to both students and researchers
This book analyzes the institutional underpinnings of East Asia's dynamic growth by exploring the interplay between governance and flexibility. As the challenges of promoting and sustaining economic growth become ever more complex, firms in both advanced and industrializing countries face constant pressures for change from markets and technology. Globalization, heightened competition, and shorter product cycles mean that markets are increasingly volatile and fragmented. To contend with demands for higher quality, quicker delivery, and cost efficiencies, firms must enhance their capability to innovate and diversify. Achieving this flexibility, in turn, often requires new forms of governance arrangements that facilitate the exchange of resources among diverse yet interdependent economic actors. Moving beyond the literature's emphasis on developed economies, this volume emphasizes the relevance of the links between governance and flexibility for understanding East Asia's explosive economic growth over the past quarter century. In case studies that encompass a variety of key industrial sectors and countries, the contributors emphasize the importance of network patterns of governance for facilitating flexibility in firms throughout the region. Their analyses illuminate both the strengths and limitations of recent growth strategies and offer insights into prospects for continued expansion in the wake of the East Asian economic crisis of the late 1990s. Contributions by: Richard P. Appelbaum, Lu-lin Cheng, Stephen W. K. Chiu, Frederic C. Deyo, Richard F. Doner, Dieter Ernst, Eric Hershberg, Tai Lok Lui, Rajah Rasiah, David A. Smith, and Poh-Kam Wong.
Analytics is one of a number of terms used to describe a data-driven, more scientific approach to management. Ability in analytics is an essential management skill: knowledge of data and analytics helps the manager to analyze decision situations, prevent problem situations from arising, identify new opportunities, and often enables many millions of dollars to be added to the bottom line for the organization. The objective of this book is to introduce analytics from the perspective of the general manager of a corporation. Rather than examine the details or attempt an encyclopaedic review of the field, this text emphasizes the strategic role that analytics is playing in globally competitive corporations today. The chapters of this book are organized in two main parts. The first part introduces a problem area and presents some basic analytical concepts that have been successfully used to address the problem area. The objective of this material is to provide the student, the manager of the future, with a general understanding of the tools and techniques used by the analyst.
Doing Statistical Analysis looks at three kinds of statistical research questions - descriptive, associational, and inferential - and shows students how to conduct statistical analyses and interpret the results. Keeping equations to a minimum, it uses a conversational style and relatable examples such as football, COVID-19, and tourism, to aid understanding. Each chapter contains practice exercises, and a section showing students how to reproduce the statistical results in the book using Stata and SPSS. Digital supplements consist of data sets in Stata, SPSS, and Excel, and a test bank for instructors. Its accessible approach means this is the ideal textbook for undergraduate students across the social and behavioral sciences needing to build their confidence with statistical analysis.
Introduction to Functional Data Analysis provides a concise textbook introduction to the field. It explains how to analyze functional data, both at exploratory and inferential levels. It also provides a systematic and accessible exposition of the methodology and the required mathematical framework. The book can be used as a textbook for a semester-long course on FDA for advanced undergraduate or MS statistics majors, as well as for MS and PhD students in other disciplines, including applied mathematics, environmental science, public health, medical research, geophysical sciences and economics. It can also be used for self-study and as a reference for researchers in those fields who wish to acquire a solid understanding of FDA methodology and practical guidance for its implementation. Each chapter contains plentiful examples of relevant R code and theoretical and data analytic problems. The material of the book can be roughly divided into four parts of approximately equal length: 1) basic concepts and techniques of FDA, 2) functional regression models, 3) sparse and dependent functional data, and 4) introduction to the Hilbert space framework of FDA. The book assumes advanced undergraduate background in calculus, linear algebra, distributional probability theory, foundations of statistical inference, and some familiarity with R programming. Other required statistics background is provided in scalar settings before the related functional concepts are developed. Most chapters end with references to more advanced research for those who wish to gain a more in-depth understanding of a specific topic.
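As a minimal, base-R flavour of two steps covered early in such a course, smoothing discretely observed curves with a B-spline basis and extracting functional principal components, the sketch below uses simulated curves; it is an illustration only and does not reproduce the book's code or its Hilbert space treatment.

```r
# Minimal FDA-flavoured sketch in base R (simulated curves; the book develops
# the proper functional framework in full).
library(splines)
set.seed(5)
n_curves <- 30; n_grid <- 100
tt <- seq(0, 1, length.out = n_grid)

# Simulate noisy curves: random-amplitude, random-phase sine waves (columns = curves)
curves <- sapply(1:n_curves, function(i)
  rnorm(1, 1, 0.3) * sin(2 * pi * (tt + rnorm(1, 0, 0.05))) + rnorm(n_grid, sd = 0.2))

# Step 1: smooth each observed curve onto a common B-spline basis
B        <- bs(tt, df = 15)
smoothed <- apply(curves, 2, function(y) fitted(lm(y ~ B - 1)))

# Step 2: functional PCA via ordinary PCA of the smoothed, centred curves
mean_curve <- rowMeans(smoothed)
fpca <- prcomp(t(smoothed - mean_curve))
round(summary(fpca)$importance[2, 1:3], 3)   # variance share of the first three FPCs
```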
Developed from the author's course on Monte Carlo simulation at Brown University, Monte Carlo Simulation with Applications to Finance provides a self-contained introduction to Monte Carlo methods in financial engineering. It is suitable for advanced undergraduate and graduate students taking a one-semester course or for practitioners in the financial industry. The author first presents the necessary mathematical tools for simulation, arbitrage-free option pricing, and the basic implementation of Monte Carlo schemes. He then describes variance reduction techniques, including control variates, stratification, conditioning, importance sampling, and cross-entropy. The text concludes with stochastic calculus and the simulation of diffusion processes. Only requiring some familiarity with probability and statistics, the book keeps much of the mathematics at an informal level and avoids technical measure-theoretic jargon to provide a practical understanding of the basics. It includes a large number of examples as well as MATLAB coding exercises that are designed in a progressive manner so that no prior experience with MATLAB is needed.
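The book's exercises are in MATLAB; as a language-neutral illustration of one variance reduction technique it covers, the R sketch below prices a European call under Black-Scholes dynamics with and without the terminal stock price as a control variate, using hypothetical parameters.

```r
# Monte Carlo pricing of a European call under Black-Scholes dynamics,
# plain vs. control-variate estimator (illustrative sketch; the book uses MATLAB).
set.seed(123)
S0 <- 100; K <- 105; r <- 0.03; sigma <- 0.2; Tmat <- 1
n  <- 1e5

Z  <- rnorm(n)
ST <- S0 * exp((r - 0.5 * sigma^2) * Tmat + sigma * sqrt(Tmat) * Z)  # terminal prices
payoff <- exp(-r * Tmat) * pmax(ST - K, 0)                           # discounted payoffs

# Plain Monte Carlo estimate
plain <- c(price = mean(payoff), se = sd(payoff) / sqrt(n))

# Control variate: the discounted terminal price has known mean S0 (martingale property)
cv       <- exp(-r * Tmat) * ST
b_opt    <- cov(payoff, cv) / var(cv)          # optimal control-variate coefficient
adjusted <- payoff - b_opt * (cv - S0)
controlled <- c(price = mean(adjusted), se = sd(adjusted) / sqrt(n))

rbind(plain, controlled)   # same price, visibly smaller standard error
```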
This work is an examination of borderless markets where national boundaries are no longer the relevant criteria in making international marketing, economic planning, and business decisions. Understanding nonpolitical borders is especially important for products and industries that are culture bound and those that require local adaptation. Language is often one critical factor that affects economic development, demographic behavior, and general business policies around the world. Over 130,000 statistics are provided for over 460 language groups covering a number of social, economic, and business variables. A significant review of literature is also included.
Leverage the full power of Bayesian analysis for competitive advantage. Bayesian methods can solve problems you can't reliably handle any other way. Building on your existing Excel analytics skills and experience, Microsoft Excel MVP Conrad Carlberg helps you make the most of Excel's Bayesian capabilities and move toward R to do even more. Step by step, with real-world examples, Carlberg shows you how to use Bayesian analytics to solve a wide array of real problems. Carlberg clarifies terminology that often bewilders analysts, and offers sample R code to take advantage of the rethinking package in R and its gateway to Stan. As you incorporate these Bayesian approaches into your analytical toolbox, you'll build a powerful competitive advantage for your organization and yourself. * Explore key ideas and strategies that underlie Bayesian analysis * Distinguish prior, likelihood, and posterior distributions, and compare algorithms for driving sampling inputs * Use grid approximation to solve simple univariate problems, and understand its limits as parameters increase * Perform complex simulations and regressions with quadratic approximation and Richard McElreath's quap function * Manage text values as if they were numeric * Learn today's gold-standard Bayesian sampling technique: Markov Chain Monte Carlo (MCMC) * Use MCMC to optimize execution speed in high-complexity problems * Discover when frequentist methods fail and Bayesian methods are essential, and when to use both in tandem
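As a minimal illustration of the grid approximation idea listed above, here is a sketch in plain R with made-up data, rather than the book's Excel workbooks or the rethinking package.

```r
# Grid approximation of a posterior for a binomial proportion
# (illustrative sketch with hypothetical data).
successes <- 6; trials <- 9                        # hypothetical observed data

p_grid     <- seq(0, 1, length.out = 1000)         # grid over the parameter
prior      <- rep(1, length(p_grid))               # flat prior
likelihood <- dbinom(successes, size = trials, prob = p_grid)
unstd_post <- likelihood * prior
posterior  <- unstd_post / sum(unstd_post)         # normalise over the grid

# Posterior summaries from the grid
post_mean <- sum(p_grid * posterior)
samples   <- sample(p_grid, size = 1e4, replace = TRUE, prob = posterior)
c(mean = post_mean, quantile(samples, c(0.055, 0.945)))   # mean and 89% interval
```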
Handbook of Empirical Economics and Finance explores the latest developments in the analysis and modeling of economic and financial data. Well-recognized econometric experts discuss the rapidly growing research in economics and finance and offer insight on the future direction of these fields. Focusing on micro models, the first group of chapters describes the statistical issues involved in the analysis of econometric models with cross-sectional data often arising in microeconomics. The book then illustrates time series models that are extensively used in empirical macroeconomics and finance. The last set of chapters explores the types of panel data and spatial models that are becoming increasingly significant in analyzing complex economic behavior and policy evaluations. This handbook brings together both background material and new methodological and applied results that are extremely important to the current and future frontiers in empirical economics and finance. It emphasizes inferential issues that transpire in the analysis of cross-sectional, time series, and panel data-based empirical models in economics, finance, and related disciplines.
Originally published in 1970, with a second edition in 1989. Empirical Bayes methods use some of the apparatus of the pure Bayes approach, but an actual prior distribution is assumed to generate the data sequence; this prior can be estimated, producing empirical Bayes estimates or decision rules. In this second edition, details are provided of the derivation and the performance of empirical Bayes rules for a variety of special models. Attention is given to the problem of assessing the goodness of an empirical Bayes estimator for a given set of prior data. Chapters also focus on alternatives to the empirical Bayes approach and actual applications of empirical Bayes methods.
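A small R sketch of the basic empirical Bayes idea, estimating a Beta prior from an ensemble of binomial observations by the method of moments and shrinking each raw rate toward it, is given below; the data are simulated and this is only one of the many special models the book examines.

```r
# Empirical Bayes shrinkage for many binomial rates (illustrative sketch).
set.seed(9)
m      <- 200
true_p <- rbeta(m, 4, 16)                   # "unknown" prior generating the units
n_i    <- sample(10:60, m, replace = TRUE)  # observations per unit
x_i    <- rbinom(m, n_i, true_p)
p_hat  <- x_i / n_i                         # raw (maximum likelihood) rates

# Estimate the Beta prior from the ensemble by the method of moments
mu <- mean(p_hat)
v  <- var(p_hat) - mean(p_hat * (1 - p_hat) / n_i)   # strip binomial sampling noise
v  <- max(v, 1e-6)
a_hat <- mu * (mu * (1 - mu) / v - 1)
b_hat <- (1 - mu) * (mu * (1 - mu) / v - 1)

# Empirical Bayes posterior means shrink each raw rate toward the prior mean
p_eb <- (x_i + a_hat) / (n_i + a_hat + b_hat)
c(raw_mse = mean((p_hat - true_p)^2), eb_mse = mean((p_eb - true_p)^2))
```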
Over the last thirty years there has been extensive use of continuous time econometric methods in macroeconomic modelling. This monograph presents the first continuous time macroeconometric model of the United Kingdom incorporating stochastic trends. Its development represents a major step forward in continuous time macroeconomic modelling. The book describes the new model in detail; like earlier models, it is designed in such a way as to permit a rigorous mathematical analysis of its steady-state and stability properties, thus providing a valuable check on the capacity of the model to generate plausible long-run behaviour. The model is estimated using newly developed exact Gaussian estimation methods for continuous time econometric models incorporating unobservable stochastic trends. The book also includes discussion of the application of the model to dynamic analysis and forecasting.
Pathwise Estimation and Inference for Diffusion Market Models discusses contemporary techniques for inferring, from options and bond prices, the market participants' aggregate view on important financial parameters such as implied volatility, discount rate, future interest rate, and the uncertainty thereof. The focus is on pathwise inference methods that are applicable to a sole path of the observed prices and do not require the observation of an ensemble of such paths. This book is pitched at the level of senior undergraduate students undertaking research at honors year and postgraduate candidates undertaking Master's or PhD degrees by research. From a research perspective, this book reaches out to academic researchers from backgrounds as diverse as mathematics and probability, econometrics and statistics, and computational mathematics and optimization, whose interests lie in the analysis and modelling of financial market data from a multi-disciplinary approach. Additionally, this book is aimed at financial market practitioners participating in capital-market-facing businesses who seek to keep abreast of and draw inspiration from novel approaches in market data analysis. The first two chapters of the book contain introductory material on stochastic analysis and the classical diffusion stock market models. The remaining chapters discuss more specialized stock and bond market models and special methods of pathwise inference for market parameters in different models. The final chapter describes applications of numerical methods of inference of bond market parameters to forecasting of the short rate. Nikolai Dokuchaev is an associate professor in Mathematics and Statistics at Curtin University. His research interests include mathematical and statistical finance, stochastic analysis, PDEs, control, and signal processing. Lin Yee Hin is a practitioner in the capital-market-facing industry. His research interests include econometrics, non-parametric regression, and scientific computing.
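As a very small example of inferring a market parameter from an observed price (far simpler than the pathwise methods the book develops), the R sketch below backs an implied volatility out of a hypothetical European call quote by inverting the Black-Scholes formula with a one-dimensional root finder.

```r
# Implied volatility from a single observed call price (illustrative sketch;
# the book's pathwise methods go well beyond this textbook inversion).
bs_call <- function(S, K, r, Tmat, sigma) {
  d1 <- (log(S / K) + (r + 0.5 * sigma^2) * Tmat) / (sigma * sqrt(Tmat))
  d2 <- d1 - sigma * sqrt(Tmat)
  S * pnorm(d1) - K * exp(-r * Tmat) * pnorm(d2)
}

implied_vol <- function(price, S, K, r, Tmat) {
  uniroot(function(s) bs_call(S, K, r, Tmat, s) - price,
          interval = c(1e-4, 5))$root
}

# Hypothetical market quote
implied_vol(price = 7.9, S = 100, K = 105, r = 0.02, Tmat = 0.5)
```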
A fair question to ask of an advocate of subjective Bayesianism (which the author is) is "how would you model uncertainty?" In this book, the author writes about how he has done it using real problems from the past, and offers additional comments about the context in which he was working.
You may like...
* Operations And Supply Chain Management - David Collier, James Evans (Hardcover)
* Patterns of Economic Change by State and… - Hannah Anderson Krog (Paperback) - R2,909 (Discovery Miles 29 090)
* Quantitative statistical techniques - Swanepoel, Vivier, … (Paperback)
* Operations and Supply Chain Management - James Evans, David Collier (Hardcover)
* Contemporary Perspectives in Data Mining… - Kenneth D. Lawrence, Ronald K. Klimberg (Hardcover) - R2,659 (Discovery Miles 26 590)