This book provides the reader with user-friendly applications of the normal distribution. In several variables it is known as the multivariate normal distribution, which is conveniently handled using matrices. To keep the arguments concrete, the author proceeds gradually from the univariate case to the vector case and finally to the matrix case, presenting the unified structure of the normal distribution along the way. Several further topics are addressed, including random matrix theory in physics. Other well-known applications are also discussed, such as Herrnstein and Murray's argument that human intelligence is substantially influenced by both inherited and environmental factors, and that it is a better predictor of many personal outcomes - including financial income, job performance, birth out of wedlock, and involvement in crime - than an individual's parental socioeconomic status or education level.
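For orientation (this summary is not drawn from the book), the scalar-to-vector progression the blurb describes runs from the univariate density to its p-variate generalization:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}
       \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
\qquad\longrightarrow\qquad
f(\mathbf{x}) = (2\pi)^{-p/2}\,
       \lvert\boldsymbol{\Sigma}\rvert^{-1/2}
       \exp\!\left(-\tfrac{1}{2}
       (\mathbf{x}-\boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}
       (\mathbf{x}-\boldsymbol{\mu})\right)
```

The matrix-variate case adds a second covariance structure across columns, which is where the matrix notation the author favors pays off.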
Advances in Austrian Economics is a research annual whose editorial policy is to publish original research articles on Austrian economics. Each volume attempts to apply the insights of Austrian economics and related approaches to topics of current interest in economics and cognate disciplines. Volume 21 exemplifies this focus by bringing key research from the Austrian tradition of economics into dialogue with other research traditions in economics and related areas.
"Prof. Nitis Mukhopadhyay and Prof. Partha Pratim Sengupta, who edited this volume with great attention and rigor, have certainly carried out noteworthy activities." - Giovanni Maria Giorgi, University of Rome (Sapienza) "This book is an important contribution to the development of indices of disparity and dissatisfaction in the age of globalization and social strife." - Shelemyahu Zacks, SUNY-Binghamton "It will not be an overstatement when I say that the famous income inequality index or wealth inequality index, which is most widely accepted across the globe is named after Corrado Gini (1984-1965). ... I take this opportunity to heartily applaud the two co-editors for spending their valuable time and energy in putting together a wonderful collection of papers written by the acclaimed researchers on selected topics of interest today. I am very impressed, and I believe so will be its readers." - K.V. Mardia, University of Leeds Gini coefficient or Gini index was originally defined as a standardized measure of statistical dispersion intended to understand an income distribution. It has evolved into quantifying inequity in all kinds of distributions of wealth, gender parity, access to education and health services, environmental policies, and numerous other attributes of importance. Gini Inequality Index: Methods and Applications features original high-quality peer-reviewed chapters prepared by internationally acclaimed researchers. They provide innovative methodologies whether quantitative or qualitative, covering welfare economics, development economics, optimization/non-optimization, econometrics, air quality, statistical learning, inference, sample size determination, big data science, and some heuristics. Never before has such a wide dimension of leading research inspired by Gini's works and their applicability been collected in one edited volume. The volume also showcases modern approaches to the research of a number of very talented and upcoming younger contributors and collaborators. This feature will give readers a window with a distinct view of what emerging research in this field may entail in the near future.
China's reform and opening-up have contributed to its long-term and rapid economic development, resulting in much stronger economic strength and a much better life for its people. Meanwhile, the deepening economic integration between China and the world has produced an increasingly complex environment, a growing number of influencing factors and severe challenges for China's economic development. Under the "new normal" of the Chinese economy, accurate analysis of the economic situation is essential to scientific decision-making, to sustainable and healthy economic development, and to building a moderately prosperous society in all respects. Applying statistical and national economic accounting methods to detailed statistics and national accounting data, this book presents an in-depth analysis of key economic fields - real estate, the automotive industry, high-tech industry, investment, opening-up, residents' income distribution, economic structure, the balance of payments and financial operation - since the reform and opening-up, and especially in recent years. It aims to depict the performance and characteristics of these fields and their roles in the development of the national economy, thus providing useful suggestions for economic decision-making and facilitating the sustainable and healthy development of the economy and the realization of the goal of building a moderately prosperous society in all respects.
From the Foreword:

"Big Data Management and Processing is [a] state-of-the-art book that deals with a wide range of topical themes in the field of Big Data. The book, which probes many issues related to this exciting and rapidly growing field, covers processing, management, analytics, and applications... [It] is a very valuable addition to the literature. It will serve as a source of up-to-date research in this continuously developing area. The book also provides an opportunity for researchers to explore the use of advanced computing technologies and their impact on enhancing our capabilities to conduct more sophisticated studies." -- Sartaj Sahni, University of Florida, USA

"Big Data Management and Processing covers the latest Big Data research results in processing, analytics, management and applications. Both fundamental insights and representative applications are provided. This book is a timely and valuable resource for students, researchers and seasoned practitioners in Big Data fields." -- Hai Jin, Huazhong University of Science and Technology, China

Big Data Management and Processing explores a range of big data related issues and their impact on the design of new computing systems. The twenty-one chapters were carefully selected and feature contributions from several outstanding researchers. The book endeavors to strike a balance between theoretical and practical coverage of innovative problem-solving techniques for a range of platforms. It serves as a repository of paradigms, technologies, and applications that target different facets of big data computing systems. The first part explores energy and resource management issues, as well as legal compliance and quality management for Big Data. It covers in-memory computing and in-memory data grids, as well as co-scheduling for high-performance computing applications. The second part includes comprehensive coverage of Hadoop and Spark, along with security, privacy, and trust challenges and solutions. The latter part covers mining and clustering in Big Data, and includes applications in genomics, hospital big data processing, and vehicular cloud computing. The book also analyzes funding for Big Data projects.
Time series econometrics is a rapidly evolving field. In particular, the cointegration revolution has had a substantial impact on applied analysis. As a consequence, no textbook has managed to cover the full range of methods in current use and explain how to proceed in applied domains. This gap in the literature motivates the present volume. The methods are sketched out, reminding the reader of the ideas underlying them and giving sufficient background for empirical work. The volume can also be used as a textbook for a course on applied time series econometrics. Topics include: unit root and cointegration analysis, structural vector autoregressions, conditional heteroskedasticity, and nonlinear and nonparametric time series models. Crucial to empirical work is the software that is available for analysis. New methodology is typically only gradually incorporated into existing software packages. Therefore, a flexible Java interface has been created, allowing readers to replicate the applications and conduct their own analyses.
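As a small illustration of the kind of analysis the volume covers (the book's own software is Java-based; the use of Python's statsmodels below is purely an assumption of this sketch), an augmented Dickey-Fuller unit root test distinguishes a random walk from a stationary series:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)

# Simulate a random walk (has a unit root) and a stationary AR(1).
walk = np.cumsum(rng.standard_normal(500))
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.5 * ar1[t - 1] + rng.standard_normal()

for name, series in [("random walk", walk), ("stationary AR(1)", ar1)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF statistic {stat:.2f}, p-value {pvalue:.3f}")
```

A high p-value fails to reject the unit root, which is the usual precondition for moving on to cointegration analysis.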
The book provides an integrated approach to risk sharing, risk spreading and efficient regulation through principal-agent models. It emphasizes the role of information asymmetry and risk sharing in contracts as an alternative to transaction cost considerations. It examines how contracting, as an institutional mechanism for conducting transactions, spreads risks while attempting consolidation. It further highlights the shifting emphasis in contracts from Coasian transaction cost saving to risk sharing, shows how this shift creates difficulties associated with risk spreading, and emphasizes the need for efficient regulation of contracts at various levels. Each of the chapters is structured around a principal-agent model, and all chapters incorporate adverse selection (and exogenous randomness) arising from information asymmetry, as well as moral hazard (and endogenous randomness) due to self-interest-seeking behavior on the part of the participants.
Originally published in 1971, this is a rigorous analysis of the economic aspects of the efficiency of public enterprises at the time. The author first restates and extends the relevant parts of welfare economics, and then illustrates its application to particular cases, drawing on the work of the National Board for Prices and Incomes, of which he was Deputy Chairman. The analysis is developed stage by stage, with the emphasis on applicability and ease of comprehension, rather than on generality or mathematical elegance. Financial performance, the second-best, the optimal degree of complexity of price structures and problems of optimal quality are first discussed in a static framework. Time is next introduced, leading to a marginal cost concept derived from a multi-period optimizing model. The analysis is then related to urban transport, shipping, gas and coal. This is likely to become a standard work of more general scope than the author's earlier book on electricity supply. It rests, however, on a similar combination of economic theory and high-level experience of the real problems of public enterprises.
Advances in Econometrics is a research annual whose editorial policy is to publish original research articles that contain enough details so that economists and econometricians who are not experts in the topics will find them accessible and useful in their research. Volume 37 exemplifies this focus by highlighting key research from new developments in econometrics.
In many branches of science, relevant observations are taken sequentially over time. Bayesian Analysis of Time Series discusses models that explain the probabilistic characteristics of these time series and then uses the Bayesian approach to make inferences about their parameters: taking the prior information and, via Bayes' theorem, implementing Bayesian estimation, hypothesis testing, and prediction. The methods are demonstrated using both R and WinBUGS. The R package is primarily used to generate observations from a given time series model, while WinBUGS allows one to perform a posterior analysis and determine the characteristics of the posterior distribution of the unknown parameters.

Features:
- Presents a comprehensive introduction to the Bayesian analysis of time series.
- Gives many examples over a wide variety of fields, including biology, agriculture, business, economics, sociology, and astronomy.
- Contains numerous exercises at the end of each chapter, many of which use R and WinBUGS.
- Can be used in graduate courses in statistics and biostatistics, and is also appropriate for researchers, practitioners and consulting statisticians.

About the author: Lyle D. Broemeling, Ph.D., is Director of Broemeling and Associates Inc. and a consulting biostatistician. He has been involved with academic health science centers for about 20 years and has taught and consulted at the University of Texas Medical Branch in Galveston, the University of Texas MD Anderson Cancer Center, and the University of Texas School of Public Health. His main interests are developing Bayesian methods for use in medical and biological problems and authoring textbooks in statistics. His previous books for Chapman & Hall/CRC include Bayesian Biostatistics and Diagnostic Medicine, and Bayesian Methods for Agreement.
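A minimal sketch of the Bayes-theorem step described above (the book itself works in R and WinBUGS; plain Python with a conjugate normal prior and known noise variance is an assumption of this illustration): for an AR(1) model, the posterior of the autoregressive coefficient is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate y_t = phi * y_{t-1} + eps_t with known noise variance sigma2.
phi_true, sigma2, n = 0.7, 1.0, 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal(scale=np.sqrt(sigma2))

# Conjugate update: prior phi ~ N(m0, v0); conditionally on y_{t-1} this is
# a linear regression of y_t on y_{t-1}, so the posterior is also normal.
m0, v0 = 0.0, 10.0
x, z = y[:-1], y[1:]                      # regressor and response
v_post = 1.0 / (1.0 / v0 + x @ x / sigma2)
m_post = v_post * (m0 / v0 + x @ z / sigma2)
print(f"posterior for phi: N({m_post:.3f}, {v_post:.5f})")
```

For richer models without closed forms, this is exactly the step WinBUGS replaces with posterior simulation.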
This book provides an up-to-date series of advanced chapters on applied financial econometric techniques pertaining to the various fields of commodities finance, mathematics & stochastics, international macroeconomics and financial econometrics. International Financial Markets: Volume I provides a key repository on the current state of knowledge, the latest debates and recent literature on international financial markets. Against the background of the "financialization of commodities" since the 2008 subprime crisis, the first section contains recent contributions on commodity and financial markets that push the frontiers of applied econometric techniques. The second section is devoted to exchange rate and current account dynamics in an environment characterized by large global imbalances. The third section examines the latest research in the field of meta-analysis in economics and finance. This book will be useful to students and researchers in applied econometrics, and to academics and students seeking convenient access to an unfamiliar area. It will also be of great interest to established researchers seeking a single repository on the current state of knowledge, current debates and relevant literature.
This book provides an up-to-date series of advanced chapters on applied financial econometric techniques pertaining to the various fields of commodities finance, mathematics & stochastics, international macroeconomics and financial econometrics. Financial Mathematics, Volatility and Covariance Modelling: Volume 2 provides a key repository on the current state of knowledge, the latest debates and recent literature on financial mathematics, volatility and covariance modelling. The first section is devoted to mathematical finance, stochastic modelling and control optimization. Chapters explore the recent financial crisis and the increase of uncertainty and volatility, and propose an alternative approach to dealing with these issues. The second section covers financial volatility and covariance modelling, and explores proposals for dealing with recent developments in financial econometrics. This book will be useful to students and researchers in applied econometrics, and to academics and students seeking convenient access to an unfamiliar area. It will also be of great interest to established researchers seeking a single repository on the current state of knowledge, current debates and relevant literature.
Who decides how official statistics are produced? Do politicians have control or are key decisions left to statisticians in independent statistical agencies? Interviews with statisticians in Australia, Canada, Sweden, the UK and the USA were conducted to get insider perspectives on the nature of decision making in government statistical administration. While the popular adage suggests there are 'lies, damned lies and statistics', this research shows that official statistics in liberal democracies are far from mistruths; they are consistently insulated from direct political interference. Yet, a range of subtle pressures and tensions exist that governments and statisticians must manage. The power over statistics is distributed differently in different countries, and this book explains why. Differences in decision-making powers across countries are the result of shifting pressures politicians and statisticians face to be credible, and the different national contexts that provide distinctive institutional settings for the production of government numbers.
This book presents selected peer-reviewed contributions from the International Conference on Time Series and Forecasting, ITISE 2018, held in Granada, Spain, on September 19-21, 2018. The first three parts of the book focus on the theory of time series analysis and forecasting, and discuss statistical methods, modern computational intelligence methodologies, econometric models, financial forecasting, and risk analysis. In turn, the last three parts are dedicated to applied topics and include papers on time series analysis in the earth sciences, energy time series forecasting, and time series analysis and prediction in other real-world problems. The book offers readers valuable insights into the different aspects of time series analysis and forecasting, allowing them to benefit both from its sophisticated and powerful theory, and from its practical applications, which address real-world problems in a range of disciplines. The ITISE conference series provides a valuable forum for scientists, engineers, educators and students to discuss the latest advances and implementations in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing computer science, mathematics, statistics and econometrics.
Predicting foreign exchange rates has presented a long-standing challenge for economists. However, recent advances in computational techniques, statistical methods and newer datasets on emerging-market currencies offer some hope. While we are still unable to beat a driftless random walk model, there has been serious progress in the field. This book provides an in-depth assessment of the use of novel statistical approaches and machine learning tools in predicting foreign exchange rate movement. First, it offers a historical account of how exchange rate regimes have evolved over time, which is critical to understanding turning points in a historical time series. It then presents an overview of previous attempts at modeling exchange rates, and how different methods fared during this process. In the core sections of the book, the author examines the time series characteristics of exchange rates and how contemporary statistics and machine learning can improve predictive power relative to previous methods. Exchange rate determination is an active research area, and this book will appeal to graduate-level students of international economics, international finance, open economy macroeconomics, and management. The book is written in a clear, engaging, and straightforward way, and will greatly improve access to this much-needed knowledge in the field.
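To make the driftless-random-walk benchmark concrete (a generic sketch under simulated data, not the author's code), one compares out-of-sample forecast errors of a candidate model against the no-change forecast:

```python
import numpy as np

rng = np.random.default_rng(2)

# A stand-in log exchange rate series, simulated as a driftless random walk.
log_fx = np.cumsum(0.005 * rng.standard_normal(1000))
train, test = log_fx[:800], log_fx[800:]

# Challenger: AR(1) fitted to first differences of the training sample.
d = np.diff(train)
beta = (d[:-1] @ d[1:]) / (d[:-1] @ d[:-1])

# One-step-ahead forecasts of test[t+1] given data through test[t].
rw_pred = test[1:-1]                                # no-change forecast
ar_pred = test[1:-1] + beta * np.diff(test)[:-1]    # momentum correction
actual = test[2:]

rmse = lambda pred: np.sqrt(np.mean((actual - pred) ** 2))
print(f"random walk RMSE: {rmse(rw_pred):.5f}")
print(f"AR(1)-diff RMSE:  {rmse(ar_pred):.5f}")
```

If the data really are a random walk, the challenger's extra parameter only adds estimation noise, which is exactly why the benchmark is so hard to beat.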
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance, as such choices allow experimenters to extract maximum information about the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups - such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, and treatment control designs - and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimates of both the ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the designs. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for the construction of optimum designs, using Hadamard matrices, the Kronecker product, the Rao-Khatri product and mixed orthogonal arrays, to name a few.
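As a toy illustration of two construction tools the monograph mentions together, Hadamard matrices can be built by repeated Kronecker products (the Sylvester construction; this sketch is generic, not taken from the book):

```python
import numpy as np

def sylvester_hadamard(k):
    """Build the 2^k x 2^k Hadamard matrix by repeated Kronecker products."""
    h = np.array([[1]])
    h2 = np.array([[1, 1], [1, -1]])
    for _ in range(k):
        h = np.kron(h, h2)
    return h

h4 = sylvester_hadamard(2)
print(h4)
# Columns are mutually orthogonal: H'H = n * I, the property that makes
# Hadamard matrices useful building blocks for optimal designs.
print(h4.T @ h4)   # 4 * identity
```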
This book is an ideal introduction for beginning students of econometrics, assuming only basic familiarity with matrix algebra and calculus. It features practical questions that can be answered using econometric methods and models. Focusing on a limited number of the most basic and widely used methods, the book reviews the fundamentals of econometrics before concluding with a number of recent empirical case studies. The volume is an intuitive illustration of what econometricians do when faced with practical questions.
Volume 36 of Advances in Econometrics recognizes Aman Ullah's significant contributions in many areas of econometrics and celebrates his long, productive career. The volume features original papers on the theory and practice of econometrics related to the work of Aman Ullah. Topics include nonparametric/semiparametric econometrics, finite sample econometrics, shrinkage methods, information/entropy econometrics, model specification testing, robust inference, and panel/spatial models. Advances in Econometrics is a research annual whose editorial policy is to publish original research articles that contain enough details so that economists and econometricians who are not experts in the topics will find them accessible and useful in their research.
This book provides in-depth analyses of GDP accounting methods, statistical calibers and comparative perspectives on Chinese GDP. Beginning with an exploration of international comparisons of GDP, the book introduces the theoretical backgrounds, data sources and algorithms of the exchange rate method and the purchasing power parity method, and discusses the advantages, disadvantages and latest developments in the two methods. The book then elaborates on the reasons for the imperfections in Chinese GDP data, including the limitations of current statistical techniques and the accounting system, as well as the relatively confusing statistics for the service industry, and the authors make suggestions for improvement. Finally, the authors emphasize that evaluation of a country's economy and social development should not be limited to GDP, but should focus more on indicators of comprehensive national power, national welfare, and people's livelihood. This book will be of interest to economists, China-watchers, and scholars of geopolitics.
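A hedged numerical sketch of the two conversion methods the book compares (every figure below is invented for illustration, not the book's data):

```python
# Converting a GDP figure in local currency to US dollars two ways.
# All numbers here are made up for illustration.
gdp_local = 120_000.0   # billions of local currency units
market_rate = 7.0       # local units per US dollar (exchange rate method)
ppp_rate = 4.2          # local units per dollar of equal purchasing power

gdp_usd_fx = gdp_local / market_rate
gdp_usd_ppp = gdp_local / ppp_rate

print(f"exchange rate method: {gdp_usd_fx:,.0f} billion USD")
print(f"PPP method:           {gdp_usd_ppp:,.0f} billion USD")
# PPP conversion typically yields a larger figure for developing economies,
# because non-traded goods and services are cheaper there.
```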
- Focuses on the assumptions underlying the algorithms rather than their statistical properties.
- Presents cutting-edge analysis of factor models and finite mixture models.
- Uses a hands-on approach to examine the assumptions made by the models and when the models fail to estimate accurately.
- Utilizes interesting real-world data sets that can be used to analyze important microeconomic problems.
- Introduces R programming concepts throughout the book.
- Includes appendices that discuss many of the concepts introduced in the book, as well as measures of uncertainty in microeconometrics.
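To give a flavor of the finite mixture models mentioned above (the book works in R; the Python/scikit-learn sketch below is an assumption of this illustration), two latent groups pooled into one sample can be recovered by fitting a Gaussian mixture:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Two latent groups, e.g. two wage regimes, mixed in one observed sample.
data = np.concatenate([rng.normal(2.0, 0.5, 400),
                       rng.normal(5.0, 1.0, 600)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(data)
print("weights:", gm.weights_.round(2))   # about [0.4, 0.6]
print("means:  ", gm.means_.ravel().round(2))
```

The key assumption being made here, in the spirit of the book's emphasis, is that each component really is Gaussian; when that fails, the estimated groups can be badly wrong.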
The idea that simplicity matters in science is as old as science itself, the much-cited example being Ockham's Razor. A problem with Ockham's Razor is that nearly everybody seems to accept it, but few are able to define its exact meaning or to make it operational in a non-arbitrary way. Drawing on a multidisciplinary perspective that includes philosophers, mathematicians, econometricians and economists, this monograph examines simplicity by asking six questions: What is meant by simplicity? How is simplicity measured? Is there an optimum trade-off between simplicity and goodness-of-fit? What is the relation between simplicity and empirical modelling? What is the relation between simplicity and prediction? What is the connection between simplicity and convenience?
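One operational answer to the simplicity-versus-fit question is an information criterion. The sketch below (a generic illustration, not the monograph's own formalism) penalizes each model's Gaussian log-likelihood by its parameter count:

```python
import numpy as np

rng = np.random.default_rng(4)

# Data from a quadratic signal; compare polynomial fits of varying degree.
x = np.linspace(-1, 1, 100)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.3, size=x.size)

for degree in (1, 2, 6):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n, k = y.size, degree + 1
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    aic = 2 * k - 2 * loglik   # lower is better: fit minus simplicity penalty
    print(f"degree {degree}: AIC = {aic:.1f}")
```

The degree-2 fit should win: the degree-1 model fits poorly, and the degree-6 model buys a little extra fit at too high a price in parameters, which is Ockham's Razor made arithmetic.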
The proliferation of financial derivatives over the past decades, options in particular, has underscored the increasing importance of derivative pricing literacy among students, researchers, and practitioners. Derivative Pricing: A Problem-Based Primer demystifies essential derivative pricing theory by adopting a mathematically rigorous yet widely accessible pedagogical approach that will appeal to a wide variety of audiences. Abandoning both the traditional "black-box" approach and the theorists' "pedantic" approach, this textbook provides readers with a solid understanding of the fundamental mechanisms of derivative pricing methodologies and their underlying theory through a diversity of illustrative examples. The abundance of exercises and problems makes the book well suited as a text for advanced undergraduates and beginning graduate students, as well as a reference for professionals and researchers who need a thorough understanding of not only "how" but also "why" derivative pricing works. It is especially ideal for students who need to prepare for the derivatives portion of the Society of Actuaries Investment and Financial Markets Exam.

Features:
- Lucid explanations of the theory and assumptions behind various derivative pricing models.
- Emphasis on intuitions and mnemonics, as well as common fallacies.
- Interspersed with illustrative examples and end-of-chapter problems that aid a deep understanding of concepts in derivative pricing.
- Mathematical derivations, while not eschewed, are made maximally accessible.
- A solutions manual is available for qualified instructors.

The Author: Ambrose Lo is currently Assistant Professor of Actuarial Science at the Department of Statistics and Actuarial Science at the University of Iowa. He received his Ph.D. in Actuarial Science from the University of Hong Kong in 2014, with dependence structures, risk measures, and optimal reinsurance being his research interests. He is a Fellow of the Society of Actuaries (FSA) and a Chartered Enterprise Risk Analyst (CERA). His research papers have been published in top-tier actuarial journals, such as ASTIN Bulletin: The Journal of the International Actuarial Association; Insurance: Mathematics and Economics; and Scandinavian Actuarial Journal.
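As a small taste of the subject matter (a standard Black-Scholes sketch, not an excerpt from the book), the price of a European call on a non-dividend-paying stock:

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# At-the-money one-year call: spot 100, strike 100, 5% rate, 20% volatility.
print(f"{bs_call(100, 100, 1.0, 0.05, 0.20):.2f}")  # about 10.45
```

The "why" behind this formula, risk-neutral valuation and replication rather than a memorized recipe, is exactly the kind of ground the book aims to cover.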
Solve the DVA/FVA Overlap Issue and Effectively Manage Portfolio Credit Risk Counterparty Risk and Funding: A Tale of Two Puzzles explains how to study risk embedded in financial transactions between the bank and its counterparty. The authors provide an analytical basis for the quantitative methodology of dynamic valuation, mitigation, and hedging of bilateral counterparty risk on over-the-counter (OTC) derivative contracts under funding constraints. They explore credit, debt, funding, liquidity, and rating valuation adjustment (CVA, DVA, FVA, LVA, and RVA) as well as replacement cost (RC), wrong-way risk, multiple funding curves, and collateral. The first part of the book assesses today's financial landscape, including the current multi-curve reality of financial markets. In mathematical but model-free terms, the second part describes all the basic elements of the pricing and hedging framework. Taking a more practical slant, the third part introduces a reduced-form modeling approach in which the risk of default of the two parties only shows up through their default intensities. The fourth part addresses counterparty risk on credit derivatives through dynamic copula models. In the fifth part, the authors present a credit migrations model that allows you to account for rating-dependent credit support annex (CSA) clauses. They also touch on nonlinear FVA computations in credit portfolio models. The final part covers classical tools from stochastic analysis and gives a brief introduction to the theory of Markov copulas. The credit crisis and ongoing European sovereign debt crisis have shown the importance of the proper assessment and management of counterparty risk. This book focuses on the interaction and possible overlap between DVA and FVA terms. It also explores the particularly challenging issue of counterparty risk in portfolio credit modeling. Primarily for researchers and graduate students in financial mathematics, the book is also suitable for financial quants, managers in banks, CVA desks, and members of supervisory bodies.
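For orientation, a unilateral credit valuation adjustment is often approximated as discounted expected exposure weighted by interval default probabilities. The sketch below is a generic textbook-style discretization under invented inputs, not the authors' model:

```python
import numpy as np

# Generic unilateral CVA approximation:
#   CVA ~= (1 - R) * sum_i DF(t_i) * EE(t_i) * PD(t_{i-1}, t_i)
recovery = 0.4
times = np.array([0.5, 1.0, 1.5, 2.0])   # valuation grid (years)
disc = np.exp(-0.03 * times)             # flat 3% discounting
ee = np.array([1.2, 1.5, 1.3, 0.9])      # expected exposure (millions)

hazard = 0.02                            # flat default intensity
surv = np.exp(-hazard * np.concatenate(([0.0], times)))
pd_incr = surv[:-1] - surv[1:]           # default probability per interval

cva = (1 - recovery) * np.sum(disc * ee * pd_incr)
print(f"CVA ~= {cva:.4f} million")
```

The book's subject is what this simple recipe leaves out: the counterparty's own default (DVA), funding costs (FVA) and their overlap, wrong-way dependence between exposure and default, and collateral.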
This volume comprises the classic articles on methods of identification and estimation of simultaneous equations econometric models. It includes path-breaking contributions by Trygve Haavelmo and Tjalling Koopmans, who founded the subject and received Nobel prizes for their work. It presents original articles that developed and analysed the leading methods for estimating the parameters of simultaneous equations systems: instrumental variables, indirect least squares, generalized least squares, two-stage and three-stage least squares, and maximum likelihood. Many of the articles are not readily accessible to readers in any other form.
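To illustrate one of the estimators collected here, a bare-bones two-stage least squares routine (a generic sketch under simulated data, not any article's original code):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000

# Simultaneity: x is correlated with the structural error u,
# so OLS of y on x is biased; z is a valid instrument for x.
z = rng.standard_normal(n)
u = rng.standard_normal(n)
x = 0.8 * z + 0.5 * u + rng.standard_normal(n)
y = 2.0 * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# Stage 1: project X on the instruments. Stage 2: regress y on the fit.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"OLS slope:  {beta_ols[1]:.3f}  (biased upward)")
print(f"2SLS slope: {beta_2sls[1]:.3f}  (close to the true 2.0)")
```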
You may like...
Tax Policy and Uncertainty - Modelling… | Christopher Ball, John Creedy, … | Hardcover | R2,508 (Discovery Miles 25 080)
Operations And Supply Chain Management | David Collier, James Evans | Hardcover
Statistics for Business and Economics… | Paul Newbold, William Carlson, … | R2,178 (Discovery Miles 21 780)
Statistics for Business & Economics… | James McClave, P Benson, … | Paperback | R2,304 (Discovery Miles 23 040)