This book studies information spillover among financial markets and explores the intraday effect and ACD models using high-frequency data. It also makes a theoretical contribution by providing a new statistical methodology, with comparative advantages, for analyzing co-movements between two time series. It applies this new method to test for information spillover between the Chinese stock market and the international market, and between the futures market and the spot market. Using high-frequency data, the book investigates the intraday effect and examines which type of ACD model is particularly suited to capturing financial duration dynamics. The book will be of invaluable use to scholars and graduate students interested in co-movements among different financial markets and in financial market microstructure, and to investors and regulators looking to improve their risk management.
This title, first published in 1979, presents the Ph.D. thesis of the world-renowned economist and financial expert, Willem Buiter. In Part I, three alternative specifications of temporary equilibria in asset markets, including their implications for macroeconomic models, are discussed; Part II examines the long-term implications of some short-term macroeconomic models. The analysis of the theoretical foundations of 'direct crowding out' and 'indirect crowding out' is particularly prominent, with the result that a synthesis of short-term macroeconomic analysis and long-term growth theory is formulated. The traditional tools of comparative dynamics and stability analysis are employed frequently. However, it is also argued that the true scope of government policy can only be adequately evaluated with the aid of concepts such as dynamic and static controllability. Temporary Equilibrium and Long-Run Equilibrium is a valuable study, relevant for all serious students of modern economic theory.
First published in 1994. Concepts of probability are an integral component of economic theory. However, there is a wide range of theories of probability, and these are manifested in different approaches to economic theory itself. In this book Charles McCann, Jr. provides a clear and informative survey of the area, which serves to standardize terminology and so integrate probability into a discussion of the foundations of economic theory. This is illustrated by examples from Austrian, Keynesian and New Classical economics.
The 'Advances in Econometrics' series aims to publish annual original scholarly econometrics papers on designated topics with the intention of expanding the use of developed and emerging econometric techniques by disseminating ideas on the theory and practice of econometrics throughout the empirical economic, business and social science literature.
First published in 1970. Econometric model-building has been largely confined to the advanced industrialised countries. In the few cases where macro-models have been built for underdeveloped countries (e.g. the Narasimham model (112) for India), the underlying assumptions have been largely of the Keynesian type, and thus, in the author's opinion, unconnected with the theory of economic development. This study is a modest attempt at econometric model-building on the basis of a model of development of an underdeveloped country.
Maximising economic growth is a central objective of politicians and heavily drives political policy and decision-making. Critics of the maximisation of growth as the central aim of economic policy have argued that growth in itself is not necessarily a good thing, particularly for the environment; however, what would replace the system and how it would be measured are questions that have rarely been answered satisfactorily. First published in 1991, this book was the first to lay out an entirely new set of practical proposals for developing new economic measurement tools, with the aim of being sustainable, 'green' and human-centred. Victor Anderson proposes that a whole set of indicators, rather than a single one, should play all the roles for which GNP (Gross National Product) is currently responsible. With a detailed overview of the central debates between the advocates and opponents of continued economic growth and an analysis of the various proposals for modification, this title will be of particular value to students interested in the diversity of measurement tools and the notion that economies should also be evaluated by their social and environmental consequences.
'Big data' is now readily available to economic historians, thanks to the digitisation of primary sources, collaborative research linking different data sets, and the publication of databases on the internet. Key economic indicators, such as the consumer price index, can be tracked over long periods, and qualitative information, such as land use, can be converted to a quantitative form. In order to fully exploit these innovations it is necessary to use sophisticated statistical techniques to reveal the patterns hidden in datasets, and this book shows how this can be done. A distinguished group of economic historians have teamed up with younger researchers to pilot the application of new techniques to 'big data'. Topics addressed in this volume include prices and the standard of living, money supply, credit markets, land values and land use, transport, technological innovation, and business networks. The research spans the medieval, early modern and modern periods. Research methods include simultaneous equation systems, stochastic trends and discrete choice modelling. This book is essential reading for doctoral and post-doctoral researchers in business, economic and social history. The case studies will also appeal to historical geographers and applied econometricians.
Portfolio theory and much of asset pricing, as well as many empirical applications, depend on the use of multivariate probability distributions to describe asset returns. Traditionally, this has meant the multivariate normal (or Gaussian) distribution. More recently, theoretical and empirical work in financial economics has employed the multivariate Student (and other) distributions which are members of the elliptically symmetric class. There is also a growing body of work which is based on skew-elliptical distributions. These probability models all exhibit the property that the marginal distributions differ only by location and scale parameters, or are restrictive in other respects. Very often, such models are contradicted by the empirical evidence that the marginal distributions of asset returns can differ markedly. Copula theory is a branch of statistics which provides powerful methods to overcome these shortcomings. This book provides a synthesis of the latest research in the area of copulae as applied to finance and related subjects such as insurance. Multivariate non-Gaussian dependence is a fact of life for many problems in financial econometrics. This book describes the state of the art in tools required to deal with these observed features of financial data. This book was originally published as a special issue of the European Journal of Finance.
In the twentieth century, Americans thought of the United States as a land of opportunity and equality. To what extent and for whom this was true was, of course, a matter of debate; however, especially during the Cold War, many Americans clung to the patriotic conviction that America was the land of the free. At the same time, another national ideal emerged that was far less contentious, that arguably came to subsume the ideals of freedom, opportunity, and equality, and that eventually embodied an unspoken consensus about what constitutes the good society in a postmodern setting. This was the ideal of choice, broadly understood as the proposition that the good society provides individuals with the power to shape the contours of their lives in ways that suit their personal interests, idiosyncrasies, and tastes. By the closing decades of the century, Americans were widely agreed that theirs was, or at least should be, the land of choice. In A Destiny of Choice?, David Blanke and David Steigerwald bring together important scholarship on the tension between two leading interpretations of modern American consumer culture, centred on the claim that modern consumerism reflects the social, cultural, economic, and political changes that accompanied the country's transition from a local, producer economy dominated by limited choices and restricted credit to a national consumer marketplace based on the individual selection of mass-produced, mass-advertised, and mass-distributed goods. This debate is central to the economic difficulties seen in the United States today.
This book presents models and statistical methods for the analysis of recurrent event data. The authors provide broad, detailed coverage of the major approaches to analysis, while emphasizing the modeling assumptions that they are based on. More general intensity-based models are also considered, as well as simpler models that focus on rate or mean functions. Parametric, nonparametric and semiparametric methodologies are all covered, with procedures for estimation, testing and model checking.
Economic Time Series: Modeling and Seasonality is a focused resource on analysis of economic time series as pertains to modeling and seasonality, presenting cutting-edge research that would otherwise be scattered throughout diverse peer-reviewed journals. This compilation of 21 chapters showcases the cross-fertilization between the fields of time series modeling and seasonal adjustment, as is reflected both in the contents of the chapters and in their authorship, with contributors coming from academia and government statistical agencies. For easier perusal and absorption, the contents have been grouped into seven topical sections:
Section I deals with periodic modeling of time series, introducing, applying, and comparing various seasonally periodic models.
Section II examines the estimation of time series components when models for series are misspecified in some sense, and the broader implications this has for seasonal adjustment and business cycle estimation.
Section III examines the quantification of error in X-11 seasonal adjustments, with comparisons to error in model-based seasonal adjustments.
Section IV discusses some practical problems that arise in seasonal adjustment: developing asymmetric trend-cycle filters; dealing with both temporal and contemporaneous benchmark constraints; detecting trading-day effects in monthly and quarterly time series; and using diagnostics in conjunction with model-based seasonal adjustment.
Section V explores outlier detection and the modeling of time series containing extreme values, developing new procedures and extending previous work.
Section VI examines some alternative models and inference procedures for analysis of seasonal economic time series.
Section VII deals with aspects of modeling, estimation, and forecasting for nonseasonal economic time series.
By presenting new methodological developments as well as pertinent empirical analyses and reviews of established methods, the book provides much that is stimulating and practically useful for the serious researcher and analyst of economic time series.
This book constitutes the first serious attempt to explain the basics of econometrics and its applications in the clearest and simplest manner possible. Recognising the fact that a good level of mathematics is no longer a necessary prerequisite for economics/financial economics undergraduate and postgraduate programmes, it introduces this key subdivision of economics to an audience who might otherwise have been deterred by its complex nature.
With a new author team contributing decades of practical experience, this fully updated and thoroughly classroom-tested second edition textbook prepares students and practitioners to create effective forecasting models and master the techniques of time series analysis. Taking a practical and example-driven approach, this textbook summarises the most critical decisions, techniques and steps involved in creating forecasting models for business and economics. Students are led through the process with an entirely new set of carefully developed theoretical and practical exercises. Chapters examine the key features of economic time series, univariate time series analysis, trends, seasonality, aberrant observations, conditional heteroskedasticity and ARCH models, non-linearity and multivariate time series, making this a complete practical guide. A companion website with downloadable datasets, exercises and lecture slides rounds out the full learning package.
"Bayesian Econometrics" illustrates the scope and diversity of modern applications, reviews some recent advances, and highlights many desirable aspects of inference and computations. It begins with an historical overview by Arnold Zellner, who describes key contributions to the field's development and makes predictions for future directions. In the second paper, Giordani and Kohn make suggestions for improving Markov chain Monte Carlo computational strategies. The remainder of the book is categorized according to microeconometric and time-series modeling. Models considered include an endogenous selection ordered probit model, a censored treatment-response model, equilibrium job search models and various other types. These are used to study a variety of applications, for example dental insurance and care, educational attainment, voter opinions, the market share of various brands, and an aggregate cross-section production function. Models and topics considered include the potential problem of improper posterior densities in a variety of dynamic models, selection and averaging for forecasting with vector autoregressions, a consumption capital-asset pricing model and various others. Applications involve U.S. macroeconomic variables, exchange rates, an investigation of purchasing power parity, data from the London Metals Exchange, international automobile production data, and data from the Asian stock market.
A classic text valued for accuracy and statistical precision, Statistics for Business and Economics enables readers to conduct serious analysis of applied problems rather than running simple "canned" applications. This text is also at a mathematically higher level than most business statistics texts and provides readers with the knowledge they need to become stronger analysts for future managerial positions. The eighth edition of this book has been revised and updated to provide readers with improved problem contexts for learning how statistical methods can improve their analysis and understanding of business and economics.
This book covers the basics of processing and spectral analysis of monovariate discrete-time signals. The approach is practical, the aim being to acquaint the reader with the indications for and drawbacks of the various methods and to highlight possible misuses. The book is rich in original ideas, visualized in new and illuminating ways, and is structured so that parts can be skipped without loss of continuity. Many examples are included, based on synthetic data and real measurements from the fields of physics, biology, medicine, macroeconomics etc., and a complete set of MATLAB exercises requiring no previous experience of programming is provided. Prior advanced mathematical skills are not needed in order to understand the contents: a good command of basic mathematical analysis is sufficient. Where more advanced mathematical tools are necessary, they are included in an Appendix and presented in an easy-to-follow way. With this book, digital signal processing leaves the domain of engineering to address the needs of scientists and scholars in traditionally less quantitative disciplines, now facing increasing amounts of data.
Against the backdrop of the impressive progress made by the Indian economy during the two decades after the large-scale economic reforms of the early 1990s, this book evaluates the performance of the economy on some income and non-income dimensions of development at the national, state and sectoral levels. It examines regional economic growth and inequality in income originating from agriculture, industry and services. In view of the importance of the agricultural sector, despite its declining share in gross domestic product, it evaluates the performance of agricultural production and the impact of agricultural reforms on spatial integration of food grain markets. It studies rural poverty, analyzing the trend in employment, the trickle-down process and the inclusiveness of growth in rural India. It also evaluates the impact of microfinance, as an instrument of financial inclusion, on the socio-economic conditions of rural households. Lastly, it examines the relative performance of fifteen major states of India in terms of education, health and human development. An important feature of the book is that it approaches these issues by rigorously applying advanced econometric methods, focusing primarily on regional disparities during the post-reform period vis-a-vis the pre-reform period. It offers important results to guide policies for future development.
Heavy tails (extreme events or values more common than expected) emerge everywhere: the economy, natural events, and social and information networks are just a few examples. Yet after decades of progress, they are still treated as mysterious, surprising, and even controversial, primarily because the necessary mathematical models and statistical methods are not widely known. This book, for the first time, provides a rigorous introduction to heavy-tailed distributions accessible to anyone who knows elementary probability. It tackles and tames the zoo of terminology for models and properties, demystifying topics such as the generalized central limit theorem and regular variation. It tracks the natural emergence of heavy-tailed distributions from a wide variety of general processes, building intuition. And it reveals the controversy surrounding heavy tails to be the result of flawed statistics, then equips readers to identify and estimate with confidence. Over 100 exercises complete this engaging package.
Building on the strength of the first edition, Quantitative Methods for Business and Economics provides a simple introduction to the mathematical and statistical techniques needed in business. This book is accessible and easy to use, with the emphasis clearly on how to apply quantitative techniques to business situations. It includes numerous real world applications and many opportunities for student interaction. It is clearly focused on business, management and economics students taking a single module in Quantitative Methods.
This volume of Advances in Econometrics contains articles that examine key topics in the modeling and estimation of dynamic stochastic general equilibrium (DSGE) models. Because DSGE models combine micro- and macroeconomic theory with formal econometric modeling and inference, over the past decade they have become an established framework for analyzing a variety of issues in empirical macroeconomics. The research articles make contributions in several key areas in DSGE modeling and estimation. In particular, papers cover the modeling and role of expectations, the study of optimal monetary policy in two-country models, and the problem of non-invertibility. Other interesting areas of inquiry include the analysis of parameter identification in new open economy macroeconomic models and the modeling of trend inflation shocks. The second part of the volume is devoted to articles that offer innovations in econometric methodology. These papers advance new techniques for addressing major inferential problems and include discussion and applications of Laplace-type, frequency domain, empirical likelihood and method of moments estimators.
The chapters in this book describe various aspects of the application of statistical methods in finance. The book will interest and attract statisticians to this area, illustrate some of the many ways that statistical tools are used in financial applications, and give some indication of problems which are still outstanding. Statisticians will be stimulated to learn more about the kinds of models and techniques outlined in the book; both the domain of finance and the science of statistics will benefit from increased awareness by statisticians of the problems, models, and techniques applied in financial applications. For this reason, extensive references are given. The level of technical detail varies between the chapters. Some present broad non-technical overviews of an area, while others describe the mathematical niceties. This illustrates the range of possibilities available in the area for statisticians, while giving a flavour of the different kinds of mathematical and statistical skills required. Whether you favour data analysis or mathematical manipulation, if you are a statistician there are problems in finance which are appropriate to your skills.
Across the social sciences there has been increasing focus on reproducibility, i.e., the ability to examine a study's data and methods to ensure accuracy by reproducing the study. Reproducible Econometrics Using R combines an overview of key issues and methods with an introduction to how to use them using open source software (R) and recently developed tools (R Markdown and bookdown) that allow the reader to engage in reproducible econometric research. Jeffrey S. Racine provides a step-by-step approach, and covers five sets of topics: i) linear time series models, ii) robust inference, iii) robust estimation, iv) model uncertainty, and v) advanced topics. The time series material highlights the difference between time-series analysis, which focuses on forecasting, and cross-sectional analysis, where the focus is typically on model parameters that have economic interpretations. For the time series material, the reader begins with a discussion of random walks, white noise, and non-stationarity. The reader is next exposed to the pitfalls of using standard inferential procedures, popular in cross-sectional settings, when modelling time series data, and is introduced to alternative procedures that form the basis for linear time series analysis. For the robust inference material, the reader is introduced to the potential advantages of the bootstrap and the jackknife versus the use of asymptotic theory, and a range of numerical approaches are presented. For the robust estimation material, the reader is presented with a discussion of issues surrounding outliers in data and methods for addressing their presence. Finally, the model uncertainty material outlines two dominant approaches for dealing with model uncertainty, namely model selection and model averaging. Throughout the book there is an emphasis on the benefits of using R and other open source tools for ensuring reproducibility.
The advanced material covers machine learning methods (support vector machines, which are useful for classification) and nonparametric kernel regression, which provides the reader with more advanced methods for confronting model uncertainty. The book is well suited for advanced undergraduate and graduate students alike. Assignments, exams, slides, and a solution manual are available for instructors.
Sociological theories of crime include: theories of strain, which blame crime on personal stressors; theories of social learning, which blame crime on its social rewards and see crime more as an institution in conflict with other institutions than as individual deviance; and theories of control, which look at crime as natural and rewarding, and explore the formation of institutions that control crime. Theorists of corruption generally agree that corruption is an expression of the patron-client relationship, in which a person with access to resources trades resources with kin and members of the community in exchange for loyalty. Some approaches to modeling crime and corruption do not involve an explicit simulation: rule-based systems; Bayesian networks; game-theoretic approaches, often based on rational choice theory; and neoclassical econometrics, a rational choice-based approach. Simulation-based approaches take into account greater complexities of the interacting parts of social phenomena. These include fuzzy cognitive maps and fuzzy rule sets that may incorporate feedback, and agent-based simulation, which can go a step farther by computing new social structures not previously identified in theory. The latter include cognitive agent models, in which agents learn how to perceive their environment and act upon the perceptions of their individual experiences, and reactive agent simulation, which, while less capable than cognitive-agent simulation, is adequate for testing a policy's effects within existing societal structures. For example, NNL is a cognitive agent model based on the REPAST Simphony toolkit.