Welcome to Loot.co.za!
This book extends the Nash (1950) treatment of the bargaining problem to the situation where the number of bargainers may vary. The authors formulate axioms specifying how solutions should respond to such changes, and provide new characterizations of all the major solutions as well as generalizations of these solutions.
Introduction to Financial Mathematics: Option Valuation, Second Edition is a well-rounded primer on the mathematics and models used in the valuation of financial derivatives. The book consists of fifteen chapters, the first ten of which develop option valuation techniques in discrete time and the last five of which describe the theory in continuous time. The first half of the textbook develops basic finance and probability. The author then treats the binomial model as the primary example of discrete-time option valuation. The final part of the textbook examines the Black-Scholes model. The book is written to provide a straightforward account of the principles of option pricing and examines these principles in detail using standard discrete and stochastic calculus models. Additionally, the second edition has new exercises and examples, and includes many tables and graphs generated by over 30 MS Excel VBA modules available on the author's webpage https://home.gwu.edu/~hdj/.
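The binomial model mentioned above prices an option by stepping a recombining tree of stock prices forward, then discounting risk-neutral expected payoffs backward. The following is a minimal illustrative sketch in Python (not taken from the book, whose own examples use Excel VBA); the function name and parameters are this sketch's own, using the standard Cox-Ross-Rubinstein parameterization:

```python
import math

def binomial_call(S0, K, r, sigma, T, n):
    """European call price via the Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))    # up factor per step
    d = 1 / u                              # down factor (recombining tree)
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)               # one-step discount factor
    # Terminal payoffs: j up-moves out of n steps
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # Backward induction: discounted risk-neutral expectation at each node
    for step in range(n, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]

# With these (illustrative) inputs, the tree price converges toward the
# Black-Scholes value of about 10.45 as n grows.
print(round(binomial_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500), 2))
```

As the number of steps n increases, the discrete-time price converges to the continuous-time Black-Scholes price, which is exactly the bridge the book's chapter structure follows.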
In order to make informed decisions, three elements are important: intuition, trust, and analytics. Intuition is based on experiential learning, and recent research has shown that those who rely on their "gut feelings" may do better than those who don't. Analytics, in turn, are important for informing decision making in a data-driven environment. The third element, trust, is critical for knowledge sharing to take place. These three elements, intuition, analytics, and trust, make a perfect combination for decision making. This book gathers leading researchers who explore the role of these three elements in the process of decision making.
There is no shortage of incentives to study and reduce poverty in our societies. Poverty is studied in economics and political science, and population surveys are an important source of information about it. The design and analysis of such surveys is principally a statistical subject matter, and the computer is essential for their data compilation and processing. Focusing on the European Union Statistics on Income and Living Conditions (EU-SILC), a programme of annual national surveys which collect data related to poverty and social exclusion, Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R presents a set of statistical analyses pertinent to the general goals of EU-SILC. The contents of the volume are biased toward computing and statistics, with reduced attention to economics, political and other social sciences. The emphasis is on methods and procedures rather than results, because data from annual surveys released after publication will erode the novelty of the data used and the results derived in this volume. The aim of this volume is not to propose specific methods of analysis, but to open up the analytical agenda and address those aspects of the key definitions in poverty assessment that entail nontrivial elements of arbitrariness. The presented methods do not exhaust the range of analyses suitable for EU-SILC, but will stimulate the search for new methods, and the adaptation of established methods, that cater to the identified purposes.
Dependence Modeling with Copulas covers the substantial advances that have taken place in the field during the last 15 years, including vine copula modeling of high-dimensional data. Vine copula models are constructed from a sequence of bivariate copulas. The book develops generalizations of vine copula models, including common and structured factor models that extend from the Gaussian assumption to copulas. It also discusses other multivariate constructions and parametric copula families that have different tail properties and presents extensive material on dependence and tail properties to assist in copula model selection. The author shows how numerical methods and algorithms for inference and simulation are important in high-dimensional copula applications. He presents the algorithms as pseudocode, illustrating their implementation for high-dimensional copula models. He also incorporates results to determine dependence and tail properties of multivariate distributions for future constructions of copula models.
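A copula, as described above, separates a joint distribution's dependence structure from its marginals. A minimal illustrative sketch (not from the book; the function name and parameters are this sketch's own) samples from a bivariate Gaussian copula by correlating two standard normals and mapping each through the normal CDF, so both margins are uniform on (0, 1) while the dependence is retained:

```python
import math
import random

def gaussian_copula_pair(rho, n, seed=0):
    """Sample n pairs (u, v) from a bivariate Gaussian copula with correlation rho."""
    rng = random.Random(seed)
    # Standard normal CDF via the error function
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        # Correlate the second normal with the first (2-d Cholesky factor)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        out.append((Phi(z1), Phi(z2)))   # uniform margins, Gaussian dependence
    return out

pairs = gaussian_copula_pair(rho=0.8, n=10000)
```

Arbitrary marginals can then be imposed by applying their inverse CDFs to u and v; the vine and factor constructions the book covers generalize exactly this idea beyond the Gaussian case.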
Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of stochastic processes with continuous and discontinuous paths. It also covers a wide selection of popular models in finance and insurance, from Black-Scholes to stochastic volatility to interest rate to dynamic mortality. Through its many numerical and graphical illustrations and simple, insightful examples, this book provides a deep understanding of the scope of Monte Carlo methods and their use in various financial situations. The intuitive presentation encourages readers to implement and further develop the simulation methods.
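The basic Monte Carlo idea underlying the methods above is to simulate many terminal asset prices under the risk-neutral measure and average the discounted payoffs. A minimal illustrative sketch in Python (not from the book; the function name and parameters are this sketch's own), for a European call under Black-Scholes dynamics:

```python
import math
import random

def mc_call_price(S0, K, r, sigma, T, n_paths, seed=42):
    """Monte Carlo estimate of a European call price under Black-Scholes dynamics."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Exact risk-neutral terminal price for geometric Brownian motion
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)          # call payoff on this path
    return disc * total / n_paths          # discounted average payoff

# Estimate; the standard error shrinks like 1/sqrt(n_paths)
estimate = mc_call_price(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n_paths=200000)
```

Variance-reduction and multilevel techniques of the kind the book presents refine exactly this estimator, trading a little extra structure for much faster convergence.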
Although interest in spatial regression models has surged in recent years, a comprehensive, up-to-date text on these approaches does not exist. Filling this void, Introduction to Spatial Econometrics presents a variety of regression methods used to analyze spatial data samples that violate the traditional assumption of independence between observations. It explores a wide range of alternative topics, including maximum likelihood and Bayesian estimation, various types of spatial regression specifications, and applied modeling situations involving different circumstances. Leaders in this field, the authors clarify the often-mystifying phenomenon of simultaneous spatial dependence. By presenting new methods, they help with the interpretation of spatial regression models, especially ones that include spatial lags of the dependent variable. The authors also examine the relationship between spatiotemporal processes and long-run equilibrium states that are characterized by simultaneous spatial dependence. MATLAB(R) toolboxes useful for spatial econometric estimation are available on the authors' websites. This work covers spatial econometric modeling as well as numerous applied illustrations of the methods. It encompasses many recent advances in spatial econometric models, including some previously unpublished results.
Composed in honor of the 65th birthday of Lloyd Shapley, this volume makes accessible the large body of work that has grown out of Shapley's seminal 1953 paper. Each of the twenty essays concerns some aspect of the Shapley value.
The editors are pleased to offer the following papers to the reader in recognition and appreciation of the contributions to our literature made by Robert Engle and Sir Clive Granger, winners of the 2003 Nobel Prize in Economics. The basic themes of this part of Volume 20 of Advances in Econometrics are time-varying betas of the capital asset pricing model, analysis of predictive densities of nonlinear models of stock returns, modelling multivariate dynamic correlations, flexible seasonal time series models, estimation of long-memory time series models, the application of boosting in volatility forecasting, the use of different time scales in GARCH modelling, out-of-sample evaluation of the Fed Model in stock price valuation, structural change as an alternative to long memory, the use of smooth transition autoregressions in stochastic volatility modelling, the analysis of the balancedness of regressions analyzing Taylor-type rules of the Fed Funds rate, a mixture-of-experts approach for the estimation of stochastic volatility, a modern assessment of Clive's first published paper on sunspot activity, and a new class of models of tail dependence in time series subject to jumps.
In many applications of econometrics and economics, a large proportion of the questions of interest concern identification. An economist may be interested in uncovering the true signal when the data are very noisy, as in time-series spurious regression and weak-instruments problems, to name a few. In this book, High-Dimensional Econometrics and Identification, we illustrate that the true signal, and hence identification, can be recovered even from noisy, high-dimensional data, e.g., large panels. High-dimensional data in econometrics are the rule rather than the exception. One of the tools for analyzing large, high-dimensional data is the panel data model. High-Dimensional Econometrics and Identification grew out of research on identification and high-dimensional econometrics that we have collaborated on over the years, and it aims to provide an up-to-date presentation of the issues of identification and high-dimensional econometrics, as well as insights into the use of these results in empirical studies. This book is designed for high-level graduate courses in econometrics and statistics, and can also serve as a reference for researchers.
This book brings together presentations of some of the fundamental new research that has begun to appear in the areas of dynamic structural modeling, nonlinear structural modeling, time series modeling, nonparametric inference, and chaotic attractor inference. The contents of this volume comprise the proceedings of the third conference in the series International Symposia in Economic Theory and Econometrics, held at the IC2 (Innovation, Creativity and Capital) Institute at the University of Texas at Austin on May 22-23, 1986.
Volume 1 covers statistical methods related to unit roots, trend breaks and their interplay. Testing for unit roots has been a topic of wide interest, and the author was at the forefront of this research. The volume covers important topics such as the Phillips-Perron unit root test and theoretical analyses of its properties, how this and other tests could be improved, the ingredients needed to achieve better tests, and the proposal of a new class of tests. Also included are theoretical studies related to time series models with unit roots and the effect of span versus sampling interval on the power of the tests. Moreover, this volume deals with the issue of trend breaks and their effect on unit root tests. The research agenda fostered by the author showed that trend breaks and unit roots can easily be confused; hence the need for the new testing procedures, which are covered.
Volume 2 is about statistical methods related to structural change in time series models. The approach adopted is off-line, whereby one tests for structural change using a historical dataset and performs hypothesis testing. A distinctive feature is the allowance for multiple structural changes. The methods discussed have been, and continue to be, applied in a variety of fields including economics, finance, life science, physics and climate change. The articles included address issues of estimation, testing and/or inference in a variety of models: short-memory regressors and errors, trends with integrated and/or stationary errors, autoregressions, cointegrated models, multivariate systems of equations, endogenous regressors, long-memory series, among others. Other issues covered include the problem of non-monotonic power and the pitfalls of adopting a local asymptotic framework. Empirical analyses are provided for the US real interest rate, US GDP, the volatility of asset returns and climate change.
The beginning of the age of artificial intelligence and machine learning has created new challenges and opportunities for data analysts, statisticians, mathematicians, econometricians, computer scientists and many others. At the root of these techniques are algorithms and methods for clustering and classifying different types of large datasets, including time series data. Time Series Clustering and Classification includes relevant developments on observation-based, feature-based and model-based traditional and fuzzy clustering methods, feature-based and model-based classification methods, and machine learning methods. It presents a broad and self-contained overview of techniques for both researchers and students. Features:
* Provides an overview of the methods and applications of pattern recognition of time series
* Covers a wide range of techniques, including unsupervised and supervised approaches
* Includes a range of real examples from medicine, finance, environmental science, and more
* R and MATLAB code, and relevant data sets, are available on a supplementary website
Pathwise Estimation and Inference for Diffusion Market Models discusses contemporary techniques for inferring, from options and bond prices, the market participants' aggregate view of important financial parameters such as implied volatility, discount rate and future interest rate, and the uncertainty thereof. The focus is on pathwise inference methods that are applicable to a sole path of the observed prices and do not require the observation of an ensemble of such paths. The book is pitched at the level of senior undergraduate students undertaking research in their honours year, and postgraduate candidates undertaking a Master's or PhD degree by research. From a research perspective, it reaches out to academic researchers from backgrounds as diverse as mathematics and probability, econometrics and statistics, and computational mathematics and optimization, whose interests lie in the analysis and modelling of financial market data from a multi-disciplinary approach. Additionally, the book is aimed at financial market practitioners in capital-market-facing businesses who seek to keep abreast of, and draw inspiration from, novel approaches in market data analysis. The first two chapters contain introductory material on stochastic analysis and the classical diffusion stock market models. The remaining chapters discuss more specialized stock and bond market models and special methods of pathwise inference of market parameters for different models. The final chapter describes applications of numerical methods of inference of bond market parameters to forecasting of the short rate. Nikolai Dokuchaev is an associate professor in Mathematics and Statistics at Curtin University. His research interests include mathematical and statistical finance, stochastic analysis, PDEs, control, and signal processing. Lin Yee Hin is a practitioner in the capital-market-facing industry. His research interests include econometrics, non-parametric regression, and scientific computing.
Panel data econometrics has evolved rapidly over the last decade. Dynamic panel data estimation, non-linear panel data methods and the phenomenal growth in non-stationary panel data econometrics make this an exciting area of research in econometrics. The 11th International Conference on Panel Data, held at Texas A&M University, College Station, Texas, in June 2004, attracted about 150 participants and 100 papers on panel data. This volume includes some of the papers presented at that conference, together with other solicited papers that made it through the refereeing process. Contributions to Economic Analysis was established in 1952. The series' purpose is to stimulate the international exchange of scientific information. The series includes books from all areas of macroeconomics and microeconomics.
The book describes the structure of the Keynes-Leontief Model (KLM) of Japan and discusses how the Japanese economy can overcome the long-term economic deflation that has persisted since the mid-1990s. A large-scale econometric model and its analysis are important for planning policy measures and examining the economic structure of a country; however, the development and maintenance of such a model can be very costly. The book discusses how the KLM is developed and employed for policy analyses.
It is impossible to understand modern economics without knowledge of the basic tools of game theory and mechanism design. This book provides a graduate-level introduction to the economic modeling of strategic behavior. The goal is to teach Economics doctoral students the tools of game theory and mechanism design that all economists should know.
Exotic Betting at the Racetrack is unique in that it covers the efficient-inefficient strategy to price and find profitable racetrack bets, along with handicapping illustrated by actual bets made by the author on essentially all of the major wagers offered at US racetracks. The book starts with efficiency, accuracy of the win odds, arbitrage, and optimal betting strategies. Examples and actual bets are shown for various wagers including win, place and show, exacta, quinella, double, trifecta, superfecta, Pick 3, 4 and 6, and rainbow Pick 5 and 6. There are discussions of major races including the Breeders' Cup, Pegasus, Dubai World Cup and the US Triple Crown from 2012-2018. Dosage analysis is also described and used. An additional feature concerns great horses such as the great mares Rachel Alexandra, Zenyatta, Goldikova, Treve, Beholder and Song Bird. There is a discussion of horse ownership and a tour through the Italian stables and horses of Federico Tesio, arguably the world's top trainer.
Environmental risk directly affects the financial stability of banks, since they bear the financial consequences of the loss of liquidity of the entities to which they lend, and of the financial penalties imposed for failure to comply with regulations and for actions harmful to the natural environment. This book explores the impact of environmental risk on the banking sector and analyzes strategies to mitigate this risk, with a special emphasis on the role of modelling. It argues that environmental risk modelling allows banks to estimate the patterns and consequences of environmental risk for their operations, and to take measures within the context of asset and liability management to minimize the likelihood of losses. An important role here is played by the environmental risk modelling methodology, as well as the software and the mathematical and econometric models used. The book examines banks' responses to macroprudential risk, particularly from the point of view of their adaptation strategies; the mechanisms of its spread; risk management and modelling; and sustainable business models. It introduces the basic concepts, definitions, and regulations concerning this type of risk, within the context of its influence on the banking industry. The book is primarily based on a quantitative and qualitative approach and proposes a new methodology of environmental risk management and modelling in the banking sector. As such, it will appeal to researchers, scholars, and students of environmental economics, finance and banking, sociology, law, and political science.
Companion Website materials: https://tzkeith.com/
Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM, and more likely to use the methods wisely. This book:
* Covers both MR and SEM, while explaining their relevance to one another
* Includes path analysis, confirmatory factor analysis, and latent growth modeling
* Makes extensive use of real-world research examples in the chapters and in the end-of-chapter exercises
* Makes extensive use of figures and tables providing examples and illustrating key concepts and techniques
New to this edition:
* New chapter on mediation, moderation, and common cause
* New chapter on the analysis of interactions with latent variables and multilevel SEM
* Expanded coverage of advanced SEM techniques in chapters 18 through 22
* International case studies and examples
* Updated instructor and student online resources
Non-market valuation has become a broadly accepted and widely practiced means of measuring the economic values of the environment and natural resources. In this book, the authors provide a guide to the statistical and econometric practices that economists employ in estimating non-market values. The authors develop the econometric models that underlie the basic methods: contingent valuation, travel cost models, random utility models and hedonic models. They analyze the measurement of non-market values as a procedure with two steps: the estimation of parameters of demand and preference functions, and the calculation of benefits from the estimated models. Each of the models is carefully developed from the preference function to the behavioral or response function that researchers observe. The models are then illustrated with datasets that characterize the kinds of data researchers typically deal with. The real-world data and clarity of writing in this book will appeal to environmental economists, students, researchers and practitioners in multilateral banks and government agencies.
* In-depth coverage of discrete-time theory and methodology
* Numerous, fully worked out examples and exercises in every chapter
* Mathematically rigorous and consistent, yet bridging various basic and more advanced concepts
* Judicious balance of financial theory and mathematical and computational methods
* Guide to material
Factor Analysis and Dimension Reduction in R provides coverage, with worked examples, of a large number of dimension reduction procedures along with model performance metrics to compare them. Factor analysis in the form of principal components analysis (PCA) or principal factor analysis (PFA) is familiar to most social scientists. However, what is less familiar is understanding that factor analysis is a subset of the more general statistical family of dimension reduction methods. The social scientist's toolkit for factor analysis problems can be expanded to include the range of solutions this book presents. In addition to covering FA and PCA with orthogonal and oblique rotation, this book's coverage includes higher-order factor models, bifactor models, models based on binary and ordinal data, models based on mixed data, generalized low-rank models, cluster analysis with GLRM, models involving supplemental variables or observations, Bayesian factor analysis, regularized factor analysis, testing for unidimensionality, and prediction with factor scores. The second half of the book deals with other procedures for dimension reduction. These include coverage of kernel PCA, factor analysis with multidimensional scaling, locally linear embedding models, Laplacian eigenmaps, diffusion maps, force directed methods, t-distributed stochastic neighbor embedding, independent component analysis (ICA), dimensionality reduction via regression (DRR), non-negative matrix factorization (NNMF), Isomap, Autoencoder, uniform manifold approximation and projection (UMAP) models, neural network models, and longitudinal factor analysis models. In addition, a special chapter covers metrics for comparing model performance. 
Features of this book include:
* Numerous worked examples with replicable R code
* Explicit, comprehensive coverage of data assumptions
* Adaptation of factor methods to binary, ordinal, and categorical data
* Residual and outlier analysis
* Visualization of factor results
* Final chapters that treat integration of factor analysis with neural network and time series methods
Presented in color with R code and an introduction to R and RStudio, this book will be suitable for graduate-level and optional-module courses for social scientists, and for courses on quantitative methods and multivariate statistics.
Providing a practical introduction to state space methods as applied to unobserved components time series models, also known as structural time series models, this book introduces time series analysis using state space methodology to readers who are familiar neither with time series analysis nor with state space methods. The only background required to understand the material presented in the book is a basic knowledge of classical linear regression models, of which a brief review is provided to refresh the reader's knowledge. A few sections also assume familiarity with matrix algebra; however, these sections may be skipped without losing the flow of the exposition.