This new and exciting book offers a fresh approach to quantitative finance and utilises novel features, including stereoscopic images which permit 3D visualisation of complex subjects without the need for additional tools. Offering an integrated approach to the subject, A First Course in Quantitative Finance introduces students to the architecture of complete financial markets before exploring the concepts and models of modern portfolio theory, derivative pricing and fixed income products in both complete and incomplete market settings. Subjects are organised throughout in a way that encourages a gradual and parallel learning process of both the economic concepts and their mathematical descriptions, framed by additional perspectives from classical utility theory, financial economics and behavioural finance. Suitable for postgraduate students taking courses in quantitative finance, financial engineering and financial econometrics as part of an economics, finance, econometrics or mathematics programme, the book contains all the necessary theoretical and mathematical concepts and numerical methods, as well as the programming code needed to port the algorithms onto a computer.
Financial data are typically characterised by a time-series and a cross-sectional dimension. Accordingly, econometric modelling in finance requires appropriate attention to these two (or occasionally more than two) dimensions of the data. Panel data techniques are developed to do exactly this. This book provides an overview of commonly applied panel methods for financial applications, including popular techniques such as Fama-MacBeth estimation; one-way, two-way and interactive fixed effects; clustered standard errors; instrumental variables; and difference-in-differences. Panel Methods for Finance: A Guide to Panel Data Econometrics for Financial Applications by Marno Verbeek offers the reader:
- Focus on panel methods where the time dimension is relatively small
- A clear and intuitive exposition, with a focus on implementation and practical relevance
- Concise presentation, with many references to financial applications and other sources
- Focus on techniques that are relevant for and popular in empirical work in finance and accounting
- Critical discussion of key assumptions, robustness, and other issues related to practical implementation
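As a concrete illustration of one of these techniques, here is a minimal Python sketch of the Fama-MacBeth two-step procedure. It assumes a hypothetical pandas DataFrame `panel` with columns `date`, `ret` and `beta`; the names are illustrative, not taken from the book.

```python
# A minimal sketch of the Fama-MacBeth two-step procedure mentioned above.
# Assumes a pandas DataFrame `panel` with columns "date", "ret" (asset return)
# and "beta" (a pre-estimated factor exposure); all names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fama_macbeth(panel: pd.DataFrame):
    # Step 1: one cross-sectional OLS regression per period.
    gammas = []
    for _, cross_section in panel.groupby("date"):
        X = sm.add_constant(cross_section["beta"])
        gammas.append(sm.OLS(cross_section["ret"], X).fit().params.values)
    gammas = np.array(gammas)  # shape: (T, 2)

    # Step 2: average the per-period coefficients over time; the standard
    # error is the time-series standard error of those averages.
    mean = gammas.mean(axis=0)
    se = gammas.std(axis=0, ddof=1) / np.sqrt(len(gammas))
    return mean, se
```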
This book presents covariance matrix estimation and related aspects of random matrix theory. It focuses on the sample covariance matrix estimator and provides a holistic description of its properties under two asymptotic regimes: the traditional one, and the high-dimensional regime that better fits the big data context. It draws attention to the deficiencies of standard statistical tools when used in the high-dimensional setting, and introduces the basic concepts and major results related to spectral statistics and random matrix theory under high-dimensional asymptotics in an understandable and reader-friendly way. The aim of this book is to inspire applied statisticians, econometricians, and machine learning practitioners who analyze high-dimensional data to apply the recent developments in their work.
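To make the high-dimensional effect described above tangible, the following numpy sketch (with illustrative parameters) computes a sample covariance matrix when p/n is not small and compares its eigenvalue range with the Marchenko-Pastur support.

```python
# A small numpy illustration of the point above: when the dimension p is not
# negligible relative to the sample size n, the eigenvalues of the sample
# covariance matrix spread out even if the true covariance is the identity.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 250                      # concentration ratio p/n = 0.5
X = rng.standard_normal((n, p))      # true covariance: identity

Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / (n - 1)              # sample covariance matrix
eigvals = np.linalg.eigvalsh(S)

# Marchenko-Pastur support for p/n = c: [(1 - sqrt(c))^2, (1 + sqrt(c))^2]
c = p / n
print(eigvals.min(), eigvals.max())              # roughly 0.09 and 2.9
print((1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2)  # ~0.086 and ~2.91
```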
Students in both social and natural sciences often seek regression methods to explain the frequency of events, such as visits to a doctor, auto accidents, or new patents awarded. This book, now in its second edition, provides the most comprehensive and up-to-date account of models and methods to interpret such data. The authors combine theory and practice to make sophisticated methods of analysis accessible to researchers and practitioners working with widely different types of data and software in areas such as applied statistics, econometrics, marketing, operations research, actuarial studies, demography, biostatistics and quantitative social sciences. The new material includes new theoretical topics, an updated and expanded treatment of cross-section models, coverage of bootstrap-based and simulation-based inference, expanded treatment of time series, multivariate and panel data, expanded treatment of endogenous regressors, coverage of quantile count regression, and a new chapter on Bayesian methods.
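As a minimal illustration of the kind of count-data model the book treats, here is a Poisson regression fitted by maximum likelihood on simulated data; the statsmodels GLM call stands in for whichever software a reader actually uses.

```python
# A minimal sketch of a count-data regression of the kind described above:
# a Poisson regression fitted by maximum likelihood on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
x = rng.standard_normal(n)
mu = np.exp(0.5 + 0.8 * x)        # log link: E[y|x] = exp(b0 + b1*x)
y = rng.poisson(mu)               # counts, e.g. visits to a doctor

X = sm.add_constant(x)
model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.params)               # should be close to [0.5, 0.8]
```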
This book presents the reader with new operators and matrices that arise in the area of matrix calculus. The properties of these mathematical concepts are investigated and linked with zero-one matrices such as the commutation matrix. Elimination and duplication matrices are revisited and partitioned into submatrices. Studying the properties of these submatrices facilitates achieving new results for the original matrices themselves. Different concepts of matrix derivatives are presented and transformation principles linking these concepts are obtained. One of these concepts is used to derive new matrix calculus results, some involving the new operators and others the derivatives of the operators themselves. The last chapter contains applications of matrix calculus, including optimization, differentiation of log-likelihood functions, iterative interpretations of maximum likelihood estimators and a Lagrangian multiplier test for endogeneity.
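For readers unfamiliar with the commutation matrix mentioned above, the following sketch constructs K_{mn} directly from its defining property K_{mn} vec(A) = vec(A'); this is the standard construction, not code from the book.

```python
# The commutation matrix K_{mn} is the zero-one matrix satisfying
# K_{mn} vec(A) = vec(A') for any m x n matrix A. A direct construction:
import numpy as np

def commutation_matrix(m: int, n: int) -> np.ndarray:
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # vec(A) stacks columns: entry (i, j) sits at position j*m + i,
            # while in vec(A') it sits at position i*n + j.
            K[i * n + j, j * m + i] = 1.0
    return K

A = np.arange(6.0).reshape(2, 3)
K = commutation_matrix(2, 3)
assert np.allclose(K @ A.flatten(order="F"), A.T.flatten(order="F"))
```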
This book provides a general framework for specifying, estimating, and testing time series econometric models. Special emphasis is given to estimation by maximum likelihood, but other methods are also discussed, including quasi-maximum likelihood estimation, generalized method of moments estimation, nonparametric estimation, and estimation by simulation. An important advantage of adopting the principle of maximum likelihood as the unifying framework for the book is that many of the estimators and test statistics proposed in econometrics can be derived within a likelihood framework, thereby providing a coherent vehicle for understanding their properties and interrelationships. In contrast to many existing econometric textbooks, which deal mainly with the theoretical properties of estimators and test statistics through a theorem-proof presentation, this book squarely addresses implementation to provide direct conduits between the theory and applied work.
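The following sketch illustrates that unifying principle on the simplest possible case: a Gaussian AR(1) model estimated by maximizing the conditional log-likelihood with scipy. The model and all parameter values are illustrative.

```python
# A minimal illustration of estimation by maximum likelihood for a time
# series model: an AR(1), y_t = c + phi*y_{t-1} + e_t, on simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T, c_true, phi_true, sigma_true = 500, 0.3, 0.7, 1.0
y = np.zeros(T)
for t in range(1, T):
    y[t] = c_true + phi_true * y[t - 1] + sigma_true * rng.standard_normal()

def neg_loglik(theta):
    c, phi, log_sigma = theta
    sigma = np.exp(log_sigma)        # keep sigma positive
    e = y[1:] - c - phi * y[:-1]     # conditional on the first observation
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (e / sigma) ** 2)

res = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="BFGS")
print(res.x[:2], np.exp(res.x[2]))   # roughly c, phi, sigma
```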
Seasonality in economic time series can "obscure" movements of other components in a series that are operationally more important for economic and econometric analyses. In practice, one often prefers to work with seasonally adjusted data to assess the current state of the economy and its future course. This book presents a seasonal adjustment program called CAMPLET (an acronym of its tuning parameters), which consists of a simple adaptive procedure to extract the seasonal and the non-seasonal component from an observed series. Once this process is carried out, there is no need to revise these components at a later stage when new observations become available. The authors describe the main features of CAMPLET, evaluate the outcomes of CAMPLET and X-13ARIMA-SEATS in a controlled simulation framework using a variety of data-generating processes, and illustrate CAMPLET and X-13ARIMA-SEATS with three time series: US non-farm payroll employment, operational income of Ahold and real GDP in the Netherlands. Furthermore, they show how CAMPLET performs during the COVID-19 crisis, and its attractiveness in dealing with daily data. This book appeals to scholars and students of econometrics and statistics interested in the application of statistical methods to empirical economic modeling.
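CAMPLET itself is a dedicated program whose algorithm is not reproduced here; purely as a generic illustration of splitting a series into a seasonal and a non-seasonal component, the following naive seasonal-means sketch may help fix ideas. It is emphatically not CAMPLET, and it assumes monthly data starting in the first phase of the cycle.

```python
# A naive seasonal/non-seasonal split for monthly data, for illustration
# only (this is NOT the CAMPLET procedure described above).
import numpy as np

def naive_seasonal_split(y: np.ndarray, period: int = 12):
    # Rough detrending with a centred moving average (edge effects ignored).
    detrended = y - np.convolve(y, np.ones(period) / period, mode="same")
    # Average each calendar phase to get a fixed seasonal pattern.
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal -= seasonal.mean()              # normalise to sum to zero
    seasonal_component = np.resize(seasonal, len(y))  # assumes phase 0 start
    return seasonal_component, y - seasonal_component
```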
In these two volumes, a group of distinguished economists debate the way in which evidence, in particular econometric evidence, can and should be used to relate macroeconomic theories to the real world. Topics covered include the business cycle, monetary policy, economic growth, the impact of new econometric techniques, the IS-LM model, the labour market, new Keynesian macroeconomics, and the use of macroeconomics in official documents.
This book explains in simple settings the fundamental ideas of financial market modelling and derivative pricing, using the no-arbitrage principle. Relatively elementary mathematics leads to powerful notions and techniques - such as viability, completeness, self-financing and replicating strategies, arbitrage and equivalent martingale measures - which are directly applicable in practice. The general methods are applied in detail to pricing and hedging European and American options within the Cox-Ross-Rubinstein (CRR) binomial tree model. A simple approach to discrete interest rate models is included, which, though elementary, has some novel features. All proofs are written in a user-friendly manner, with each step carefully explained and following a natural flow of thought. In this way the student learns how to tackle new problems.
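As a taste of the CRR model the book works through, here is a minimal backward-induction pricer for a European call; all parameter values are illustrative.

```python
# A minimal sketch of European call pricing in the Cox-Ross-Rubinstein
# binomial tree mentioned above. Parameters are illustrative.
import numpy as np

def crr_european_call(S0, K, r, sigma, T, steps):
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))       # up factor
    d = 1.0 / u                           # down factor
    q = (np.exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    # Terminal stock prices and payoffs (j = number of up moves).
    j = np.arange(steps + 1)
    payoff = np.maximum(S0 * u**j * d**(steps - j) - K, 0.0)
    # Backward induction under the equivalent martingale measure.
    for _ in range(steps):
        payoff = np.exp(-r * dt) * (q * payoff[1:] + (1 - q) * payoff[:-1])
    return payoff[0]

print(crr_european_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200))
# converges to the Black-Scholes value, about 10.45
```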
This book explores the novel uses and potentials of Data Envelopment Analysis (DEA) under big data. These areas are of widespread interest to researchers and practitioners alike. Considering the vast literature on DEA, one could say that DEA has been, and continues to be, a widely used technique in both performance and productivity measurement, having covered a plethora of challenges and debates within the modelling framework.
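For readers new to DEA, the following sketch poses the input-oriented CCR efficiency score of one decision-making unit as a linear program in scipy; the toy data and the formulation are illustrative, not drawn from the book.

```python
# Input-oriented CCR DEA efficiency score for one decision-making unit,
# posed as a linear program: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    m, n_dmu = X.shape                    # inputs x DMUs
    s = Y.shape[0]                        # number of outputs
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n_dmu)]       # minimise theta
    A_in = np.c_[-X[:, [o]], X]           # X @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]   # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n_dmu
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]                       # efficiency score in (0, 1]

X = np.array([[2.0, 3.0, 6.0], [3.0, 2.0, 6.0]])  # 2 inputs, 3 DMUs
Y = np.array([[1.0, 1.0, 1.0]])                   # 1 output
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])
# DMUs 0 and 1 are efficient (1.0); DMU 2 scores about 0.417
```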
Econophysics is an emerging interdisciplinary field that takes advantage of the concepts and methods of statistical physics to analyse economic phenomena. This book expands the explanatory scope of econophysics to the real economy by using methods from statistical physics to analyse the success and failure of companies. Using large data sets of companies and income-earners in Japan and Europe, a distinguished team of researchers show how these methods allow us to analyse companies, from huge corporations to small firms, as heterogeneous agents interacting at multiple layers of complex networks. They then show how successful this approach is in explaining a wide range of recent findings relating to the dynamics of companies. With mathematics kept to a minimum, the book is not only a lively introduction to the field of econophysics but also provides fresh insights into company behaviour.
This book addresses the underlying foundational elements, both theoretical and methodological, of sponsored search. As such, the contents are less affected by the ever-changing implementation aspects of technology. Rather than focusing on the how, this book examines what causes the how. Why do certain keywords work while others do not? Why does one ad work well when similar ones do not? Why does a key phrase cost a given amount? Why do we measure what we do in keyword advertising? This book speaks to the curiosity to understand why we do what we do in sponsored search. The content flows through the major components of any sponsored search effort, regardless of the underlying technology, client or product. The book addresses keywords, ads, consumers, pricing, competitors, analytics, branding, marketing and advertising, integrating these separate components into a coherent whole. The focus is on the critical elements, with ample illustrations and with enough detail to lead the interested reader to further inquiry.
This book analyses the dynamics of the Indian stock market, with special emphasis on the period following the emergence of Covid-19. Starting from the instability in the stock market after Covid-19, it delves into these dynamics and unfolds the causal relationship between various economic fundamentals and stock prices. Observing short-term herding in the stock market following Covid-19, the book's findings suggest that investors in the Indian stock market made investment choices irrationally during the Covid-19 crisis. It also shows how the stock market became inefficient following the emergence of the pandemic and ceased to follow fundamentals. Interestingly, the findings suggest no relationship between stock returns and real economic activity in India. The format of presentation makes the book well suited not only for students, academics, policy makers and investors in the stock markets, but also for people engaged or interested in business and finance; it will thus be of interest to specialists and general readers alike. The analysis contained in this book will help different readership groups in different ways. Researchers in economics and finance will be able to learn about frontiers in the theoretical paradigms discussed in the book; the advanced econometric techniques applied here will also be useful for their own research. The macroeconomic insights, and insights from behavioural economics, can expand the knowledge of the corporate sector and prove useful in making real-life decisions. Finally, it will help policy makers, such as SEBI (the Securities and Exchange Board of India), to formulate appropriate regulatory policies so as to minimise the possibility of speculative bubbles like those experienced in the Indian stock markets during the pandemic.
This handbook presents emerging research exploring the theoretical and practical aspects of econometric techniques for the financial sector and their applications in economics. By doing so, it offers invaluable tools for predicting and weighing the risks of multiple investments by incorporating data analysis. Throughout the book the authors address a broad range of topics such as predictive analysis, monetary policy, economic growth, systemic risk and investment behavior. This book is a must-read for researchers, scholars and practitioners in the field of economics who are interested in a better understanding of current research on the application of econometric methods to financial sector data.
The search for symmetry is part of the fundamental scientific paradigm in mathematics and physics. Can this be valid also for economics? This book represents an attempt to explore this possibility. The behavior of price-taking producers, monopolists, monopsonists, sectoral market equilibria, behavior under risk and uncertainty, and two-person zero- and non-zero-sum games are analyzed and discussed under the unifying structure called the linear complementarity problem. Furthermore, the equilibrium problem allows for the relaxation of often-stated but unnecessary assumptions. This unifying approach offers the advantage of a better understanding of the structure of economic models. It also introduces the simplest and most elegant algorithm for solving a wide class of problems.
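The linear complementarity problem named above asks for z >= 0 with w = Mz + q >= 0 and z'w = 0. As a small illustration, here is a projected Gauss-Seidel iteration, valid when M is symmetric positive definite; the Lemke-type pivoting algorithm the book favours is more general and is not reproduced here.

```python
# Linear complementarity problem (LCP): find z >= 0 such that
# w = M z + q >= 0 and z'w = 0. Projected Gauss-Seidel iteration.
import numpy as np

def lcp_projected_gauss_seidel(M, q, iters=500):
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # Solve row i for z_i holding the other components fixed,
            # then project onto the nonnegative orthant.
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
z = lcp_projected_gauss_seidel(M, q)
print(z, M @ z + q)   # z >= 0, w >= 0, and z*w is (numerically) zero
```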
This textbook provides a self-contained presentation of the theory and models of time series analysis. Putting an emphasis on weakly stationary processes and linear dynamic models, it describes the basic concepts, ideas, methods and results in a mathematically well-founded form and includes numerous examples and exercises. The first part presents the theory of weakly stationary processes in time and frequency domain, including prediction and filtering. The second part deals with multivariate AR, ARMA and state space models, which are the most important model classes for stationary processes, and addresses the structure of AR, ARMA and state space systems, Yule-Walker equations, factorization of rational spectral densities and Kalman filtering. Finally, there is a discussion of Granger causality, linear dynamic factor models and (G)ARCH models. The book provides a solid basis for advanced mathematics students and researchers in fields such as data-driven modeling, forecasting and filtering, which are important in statistics, control engineering, financial mathematics, econometrics and signal processing, among other subjects.
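As a small illustration of the Yule-Walker equations mentioned above, the following sketch estimates AR(2) coefficients from sample autocovariances of simulated data.

```python
# Yule-Walker estimation of AR(p) coefficients from sample autocovariances.
import numpy as np

def yule_walker(y, p):
    y = y - y.mean()
    n = len(y)
    # Sample autocovariances gamma(0), ..., gamma(p).
    gamma = np.array([y[: n - k] @ y[k:] / n for k in range(p + 1)])
    # Toeplitz system R phi = gamma(1..p).
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gamma[1:])

rng = np.random.default_rng(3)
y = np.zeros(1000)
for t in range(2, 1000):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()
print(yule_walker(y, p=2))   # roughly [0.5, -0.3]
```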
As real estate forms a significant part of the asset portfolios of most investors and lenders, it is crucial that analysts and institutions employ sound techniques for modelling and forecasting the performance of real estate assets. Assuming no prior knowledge of econometrics, this book introduces and explains a broad range of quantitative techniques that are relevant for the analysis of real estate data. It includes numerous detailed examples, giving readers the confidence they need to estimate and interpret their own models. Throughout, the book emphasises how various statistical techniques may be used for forecasting and shows how forecasts can be evaluated. Written by a highly experienced teacher of econometrics and a senior real estate professional, both of whom are widely known for their research, Real Estate Modelling and Forecasting is the first book to provide a practical introduction to the econometric analysis of real estate for students and practitioners.
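As a minimal illustration of forecast evaluation of the kind the book emphasises, the following sketch computes standard error metrics for a set of out-of-sample forecasts; the numbers are placeholders.

```python
# Standard out-of-sample forecast evaluation metrics.
import numpy as np

def forecast_metrics(actual, forecast):
    e = np.asarray(actual) - np.asarray(forecast)
    return {
        "ME": e.mean(),                   # mean error (bias)
        "MAE": np.abs(e).mean(),          # mean absolute error
        "RMSE": np.sqrt((e**2).mean()),   # root mean squared error
    }

print(forecast_metrics([2.0, 3.1, 2.7, 3.5], [2.2, 2.9, 3.0, 3.1]))
```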
The aim of this publication is to identify and apply suitable methods for analysing and predicting the time series of gold prices, while acquainting the reader with the history and characteristics of these methods and with time-series issues in general. Both statistical and econometric methods, and especially artificial intelligence methods, are used in the case studies. The publication presents both traditional and innovative methods at the theoretical level, always accompanied by a case study, i.e. their specific use in practice. Furthermore, a comprehensive comparative analysis of the individual methods is provided. The book is intended for academic staff and students of economics, as well as for scientists and practitioners dealing with time-series prediction. From the point of view of practical application, it could provide useful information for speculators and traders in financial markets, especially the commodity markets.
This book provides a comprehensive and concrete illustration of time series analysis focusing on the state-space model, which has recently attracted increasing attention in a broad range of fields. The major feature of the book lies in its consistent Bayesian treatment of whole combinations of batch and sequential solutions for linear Gaussian and general state-space models: MCMC and Kalman/particle filter. The reader is given insight into flexible modeling in modern time series analysis. The main topics of the book deal with the state-space model, covering it extensively, from introductory and exploratory methods to the latest advanced topics such as real-time structural change detection. Additionally, a practical exercise using R/Stan based on real data promotes understanding and enhances the reader's analytical capability.
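As a minimal illustration of the sequential solutions mentioned above, here is a Kalman filter for the local level model, the simplest linear Gaussian state-space model; the variances are illustrative rather than estimated.

```python
# Kalman filter for the local level model:
#   y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t.
import numpy as np

def local_level_kalman(y, var_eps=1.0, var_eta=0.1):
    mu, P = 0.0, 1e6                 # near-diffuse initial state
    filtered = []
    for obs in y:
        P = P + var_eta              # prediction step
        K = P / (P + var_eps)        # Kalman gain
        mu = mu + K * (obs - mu)     # update step
        P = (1 - K) * P
        filtered.append(mu)
    return np.array(filtered)

rng = np.random.default_rng(4)
level = np.cumsum(0.3 * rng.standard_normal(200))   # latent random walk
y = level + rng.standard_normal(200)                # noisy observations
print(local_level_kalman(y)[-5:])                   # tracks the latent level
```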
This second edition presents the advances made in finance market analysis since 2005. The book provides a careful introduction to stochastic methods, along with approximate ensembles for a single, historic time series. The new edition explains the history leading up to the biggest economic disaster of the 21st century. Empirical evidence for finance market instability under deregulation is given, together with a history of the worldwide explosion of the US Dollar. A model shows how bounds set by a central bank stabilized FX in the gold standard era, illustrating the effect of regulations. The book presents economic and finance theory thoroughly and critically, including rational expectations, cointegration and ARCH/GARCH methods, and replaces several common misconceptions with empirically based ideas. This book will be of interest to finance theorists, traders, economists, physicists and engineers, and leads the reader to the frontier of research in time series analysis.
This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, the method of simulated moments, and the method of simulated scores. Procedures for drawing from densities are described, including variance reduction techniques such as antithetics and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant Gibbs sampling. The second edition adds chapters on endogeneity and expectation-maximization (EM) algorithms. No other book incorporates all these fields, which have arisen in the past 25 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.
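The following sketch illustrates two of the ideas named above, a simulated mixed logit choice probability and Halton draws, on a toy problem with one random coefficient; all numbers are illustrative, and scipy's Halton generator stands in for a hand-rolled sequence.

```python
# Simulated mixed logit choice probabilities with Halton draws.
import numpy as np
from scipy.stats import norm, qmc

x = np.array([1.0, 0.2, -0.5])      # one attribute for 3 alternatives
b_mean, b_sd = 0.8, 0.5             # random coefficient ~ N(0.8, 0.5^2)

# Halton points on (0,1), mapped to normal draws for the coefficient.
halton = qmc.Halton(d=1, scramble=False)
u = halton.random(200)[:, 0]
u = u[u > 0]                         # drop the degenerate 0 point, if any
betas = b_mean + b_sd * norm.ppf(u)

# Average the logit probabilities over the draws (simulated probability).
util = np.outer(betas, x)            # draws x alternatives
probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
print(probs.mean(axis=0))            # simulated choice probabilities
```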
You may like...
Financial and Macroeconomic… by Francis X. Diebold, Kamil Yilmaz (Hardcover) R3,567
The Oxford Handbook of the Economics of… by Yann Bramoulle, Andrea Galeotti, … (Hardcover) R5,455
Linear and Non-Linear Financial… by Mehmet Kenan Terzioglu, Gordana Djurovic (Hardcover) R3,581
Handbook of Field Experiments, Volume 1 by Esther Duflo, Abhijit Banerjee (Hardcover) R3,497
Ranked Set Sampling - 65 Years Improving… by Carlos N. Bouza-Herrera, Amer Ibrahim Falah Al-Omari (Paperback)
Design and Analysis of Time Series… by Richard McCleary, David McDowall, … (Hardcover) R3,286
Introduction to Computational Economics… by Hans Fehr, Fabian Kindermann (Hardcover) R4,258