The productivity of a business exerts an important influence on its financial performance. A similar influence exists for industries and economies: those with superior productivity performance thrive at the expense of others. Productivity performance helps explain the growth and demise of businesses and the relative prosperity of nations. Productivity Accounting: The Economics of Business Performance offers an in-depth analysis of variation in business performance, providing the reader with an analytical framework within which to account for this variation and its causes and consequences. The primary focus is the individual business, and the principal consequence of business productivity performance is business financial performance. Alternative measures of financial performance are considered, including profit, profitability, cost, unit cost, and return on assets. Combining analytical rigor with empirical illustrations, the analysis draws on wide-ranging literatures, both historical and current, from business and economics, and explains how businesses create value and distribute it.
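As a loose illustration of the financial performance measures listed above (profit, profitability, unit cost, and return on assets), the following Python sketch computes them from hypothetical figures; all numbers are invented and the definitions shown are only one common convention, not necessarily the book's.

```python
# Hypothetical figures for a single business period (illustrative only).
revenue = 1_200_000.0   # total revenue
cost = 950_000.0        # total operating cost
output_units = 48_000   # units produced and sold
assets = 2_500_000.0    # total assets employed

profit = revenue - cost                 # profit = revenue minus cost
profitability = revenue / cost          # revenue per unit of cost
unit_cost = cost / output_units         # cost per unit of output
return_on_assets = profit / assets      # profit per unit of assets

print(f"profit           = {profit:,.0f}")
print(f"profitability    = {profitability:.3f}")
print(f"unit cost        = {unit_cost:.2f}")
print(f"return on assets = {return_on_assets:.3%}")
```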
The series is designed to bring together mathematicians who are seriously interested in finding new, challenging stimuli in economic theories with economists who are seeking effective mathematical tools for their research. Many economic problems can be formulated as constrained optimization or equilibrium problems. Various mathematical theories have supplied economists with indispensable machinery for the problems arising in economic theory; conversely, mathematicians have been stimulated by the mathematical difficulties that economic theories raise.
The research and outcomes presented here focus on spatial sampling of agricultural resources. The authors introduce sampling designs and methods for producing accurate estimates of crop production across different regions and countries. With the help of real and simulated examples performed with the open-source software R, readers will learn about the different phases of spatial data collection. The agricultural data analyzed in this book help policymakers and market stakeholders to monitor the production of agricultural goods and its effects on the environment and on food safety.
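The book's examples are in R; as a minimal Python sketch of the kind of design-based estimate it discusses, the code below computes a stratified-sampling estimate of total crop production from simulated field-level yields, with strata standing in for regions. The population values and sampling fractions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated population: three regions (strata) with different mean yields (tonnes/field).
strata_sizes = {"north": 400, "centre": 300, "south": 300}
strata_means = {"north": 5.0, "centre": 7.5, "south": 6.0}
population = {s: rng.gamma(shape=10, scale=strata_means[s] / 10, size=n)
              for s, n in strata_sizes.items()}

# Stratified simple random sample: 10% of fields in each region.
estimate, variance = 0.0, 0.0
for s, n in strata_sizes.items():
    sample = rng.choice(population[s], size=n // 10, replace=False)
    estimate += n * sample.mean()                        # expand stratum mean to stratum total
    variance += n**2 * sample.var(ddof=1) / len(sample)  # ignoring the finite-population correction

true_total = sum(vals.sum() for vals in population.values())
print(f"true total {true_total:,.0f} t, estimate {estimate:,.0f} t (s.e. {variance**0.5:,.0f} t)")
```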
This book treats the latest developments in the theory of order-restricted inference, with special attention to nonparametric methods and algorithmic aspects. Among the topics treated are current status and interval censoring models, competing risk models, and deconvolution. Methods of order-restricted inference are used in computing maximum likelihood estimators and developing distribution theory for inverse problems of this type. The authors have been active in developing these tools and present the state of the art and the open problems in the field. The earlier chapters provide an introduction to the subject, while the later chapters are written with graduate students and researchers in mathematical statistics in mind. Each chapter ends with a set of exercises of varying difficulty. The theory is illustrated with the analysis of real-life data, which are mostly medical in nature.
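A workhorse for order-restricted estimation — for example, the MLE of the distribution function under current status censoring reduces to an isotonic regression — is the pool-adjacent-violators algorithm (PAVA). The following self-contained Python sketch (not the authors' code) implements PAVA for a non-decreasing least-squares fit.

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: least-squares fit that is non-decreasing in the index."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    # Each block stores (weighted mean, total weight, number of points pooled).
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Pool backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    return np.concatenate([np.full(n, m) for m, _, n in blocks])

# Noisy observations of an increasing trend.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = x**2 + rng.normal(scale=0.1, size=x.size)
print(np.round(pava(y), 3))   # a non-decreasing step function tracking x**2
```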
Handbook of Econometrics, Volume 7A, examines recent advances in foundational issues and "hot" topics within econometrics, such as inference for moment inequalities and estimation of high dimensional models. With its world-class editors and contributors, it succeeds in unifying leading studies of economic models, mathematical statistics and economic data. Our flourishing ability to address empirical problems in economics by using economic theory and statistical methods has driven the field of econometrics to unimaginable places. By designing methods of inference from data based on models of human choice behavior and social interactions, econometricians have created new subfields now sufficiently mature to require sophisticated literature summaries.
This is a book on deterministic and stochastic Growth Theory and the computational methods needed to produce numerical solutions. Exogenous and endogenous growth models are thoroughly reviewed. Special attention is paid to the use of these models for fiscal and monetary policy analysis. Modern Business Cycle Theory, New Keynesian Macroeconomics, and the class of Dynamic Stochastic General Equilibrium models can all be considered special cases of models of economic growth, and they can be analyzed by the theoretical and numerical procedures provided in the textbook. Analytical discussions are presented in full detail. The book is self-contained and designed so that the student advances in the theoretical and the computational issues in parallel. EXCEL and Matlab files are provided on an accompanying website (see Preface to the Second Edition) to illustrate theoretical results as well as to simulate the effects of economic policy interventions. The structure of these program files is described in "Numerical exercise" sections, where the output of these programs is also interpreted. The second edition corrects a few typographical errors and improves some notation.
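To give a flavour of the numerical experiments such growth models involve (the book's own files are in EXCEL and Matlab), here is a hedged Python sketch that iterates the deterministic Solow model to its steady state; the parameter values are arbitrary and not taken from the textbook.

```python
# Capital share, saving rate, depreciation, population and technology growth (illustrative).
alpha, s, delta, n, g = 0.33, 0.25, 0.05, 0.01, 0.02

def next_k(k):
    """Capital per effective worker next period: k' = (s*k**alpha + (1-delta)*k) / ((1+n)*(1+g))."""
    return (s * k**alpha + (1 - delta) * k) / ((1 + n) * (1 + g))

k = 1.0                      # initial capital per effective worker
for _ in range(200):
    k = next_k(k)

k_star = (s / (delta + n + g + n * g)) ** (1 / (1 - alpha))   # analytical steady state
print(f"simulated k after 200 periods: {k:.4f}")
print(f"analytical steady state:       {k_star:.4f}")
```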
Increasing concerns regarding the world's natural resources and sustainability continue to be a major issue for global development. As a result, several political initiatives and strategies for green or resource-efficient growth, on both national and international levels, have been proposed. A core element of these initiatives is the promotion of an increase in resource or material productivity. This dissertation examines material productivity developments in the OECD and BRICS countries between 1980 and 2008. By applying the concept of convergence from economic growth theory to material productivity, the analysis provides insights into material productivity developments in general as well as the potential for accelerated improvements in material productivity, which may in turn allow a global reduction in material use. The results of the convergence analysis underline the importance of policy-making with regard to technology and innovation policy, enabling the production of resource-efficient products and services as well as technology transfer and diffusion.
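One common way to operationalize convergence is a beta-convergence regression: the growth rate of (here, material) productivity over the sample period is regressed on its initial level, with a negative slope indicating catch-up. A minimal Python sketch on simulated country data follows; it is not the dissertation's dataset or necessarily its exact specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated material productivity (output per unit of material input) for 30 countries, 1980 and 2008.
n_countries = 30
mp_1980 = rng.lognormal(mean=0.0, sigma=0.5, size=n_countries)
growth = 0.8 - 0.4 * np.log(mp_1980) + rng.normal(scale=0.1, size=n_countries)  # true catch-up built in
mp_2008 = mp_1980 * np.exp(growth)

# Beta-convergence regression: average annual growth on log initial level.
years = 2008 - 1980
avg_growth = np.log(mp_2008 / mp_1980) / years
X = np.column_stack([np.ones(n_countries), np.log(mp_1980)])
beta, *_ = np.linalg.lstsq(X, avg_growth, rcond=None)
print(f"estimated convergence coefficient: {beta[1]:.4f} (negative => convergence)")
```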
The recent financial crisis has heightened the need for appropriate methodologies for managing and monitoring complex risks in financial markets. The measurement, management, and regulation of risks in portfolios composed of credits, credit derivatives, or life insurance contracts is difficult because of the nonlinearities of risk models, dependencies between individual risks, and the many thousands of contracts in large portfolios. The granularity principle was introduced in the Basel regulations for credit risk to solve these difficulties in computing capital reserves. In this book, authors Patrick Gagliardini and Christian Gourieroux provide the first comprehensive overview of granularity theory and illustrate its usefulness for a variety of problems related to risk analysis, statistical estimation, and derivative pricing in finance and insurance. They show how the granularity principle leads to analytical formulas for risk analysis that are simple to implement and accurate even when the portfolio size is large.
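As a loose illustration of the idea behind such large-portfolio approximations (not Gagliardini and Gourieroux's granularity formulas), the sketch below simulates credit losses in a one-factor Vasicek model for a finite portfolio and compares them with the infinitely granular limit; all parameters are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

pd_, rho, n_loans, n_sims = 0.02, 0.2, 500, 20_000   # default probability, asset correlation (illustrative)

# One-factor model: a loan defaults if sqrt(rho)*Z + sqrt(1-rho)*eps < Phi^{-1}(pd).
z = rng.standard_normal(n_sims)                      # systematic factor, one draw per scenario
eps = rng.standard_normal((n_sims, n_loans))         # idiosyncratic shocks
defaults = (np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps) < norm.ppf(pd_)
loss_finite = defaults.mean(axis=1)                  # portfolio loss fraction, finite portfolio

# Infinitely granular limit: conditional default probability given the factor.
loss_limit = norm.cdf((norm.ppf(pd_) - np.sqrt(rho) * z) / np.sqrt(1 - rho))

for q in (0.95, 0.99):
    print(f"VaR {q:.0%}: finite portfolio {np.quantile(loss_finite, q):.4f}, "
          f"granular limit {np.quantile(loss_limit, q):.4f}")
```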
To what extent should anybody who has to make model forecasts generated from detailed data analysis adjust their forecasts based on their own intuition? In this book, Philip Hans Franses, one of Europe's leading econometricians, presents the notion that many publicly available forecasts have experienced an 'expert's touch', and questions whether this type of intervention is useful and if a lighter adjustment would be more beneficial. Covering an extensive research area, this accessible book brings together current theoretical insights and new empirical results to examine expert adjustment from an econometric perspective. The author's analysis is based on a range of real forecasts and the datasets upon which the forecasters relied. The various motivations behind experts' modifications are considered, and guidelines for creating more useful and reliable adjusted forecasts are suggested. This book will appeal to academics and practitioners with an interest in forecasting methodology.
The purpose of this book is to establish a connection between the traditional field of empirical economic research and the emerging area of empirical financial research, and to build a bridge between theoretical developments in these areas and their application in practice. Accordingly, it covers broad topics in the theory and application of both empirical economic and financial research, including analysis of time series and the business cycle; different forecasting methods; new models for volatility, correlation and high-frequency financial data; and new approaches to panel regression, as well as a number of case studies. Most of the contributions reflect the state of the art on the respective subject. The book offers a valuable reference work for researchers, university instructors, practitioners, government officials and graduate and post-graduate students, as well as an important resource for advanced seminars in empirical economic and financial research.
This volume systematically details both the basic principles and new developments in Data Envelopment Analysis (DEA), offering a solid understanding of the methodology, its uses, and its potential. New material in this edition includes coverage of recent developments that have greatly extended the power and scope of DEA and have led to new directions for research and DEA uses. Each chapter accompanies its developments with simple numerical examples and discussions of actual applications. The first nine chapters cover the basic principles of DEA, while the final seven chapters provide a more advanced treatment.
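As a rough sketch of the basic input-oriented, constant-returns (CCR) envelopment model covered in the early chapters, the following Python code solves the DEA linear program for each decision-making unit with scipy; the input and output data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: 5 DMUs, 2 inputs (rows of X), 1 output (row of Y).
X = np.array([[20., 30., 40., 20., 10.],       # input 1 for DMUs 1..5
              [150., 200., 100., 120., 50.]])  # input 2
Y = np.array([[100., 150., 160., 80., 40.]])   # output 1

n_dmu = X.shape[1]
for o in range(n_dmu):
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n_dmu)]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(X.shape[0])
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n_dmu)
    print(f"DMU {o + 1}: technical efficiency = {res.x[0]:.3f}")
```

Efficient units score 1; inefficient units score below 1, with the optimal lambdas identifying their peers on the frontier.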
This textbook, now in its second edition, is an introduction to econometrics from the Bayesian viewpoint. It begins with an explanation of the basic ideas of subjective probability and shows how subjective probabilities must obey the usual rules of probability to ensure coherency. It then turns to the definitions of the likelihood function, prior distributions, and posterior distributions. It explains how posterior distributions are the basis for inference and explores their basic properties. The Bernoulli distribution is used as a simple example. Various methods of specifying prior distributions are considered, with special emphasis on subject-matter considerations and exchangeability. The regression model is examined to show how analytical methods may fail in the derivation of marginal posterior distributions, which leads to an explanation of classical and Markov chain Monte Carlo (MCMC) methods of simulation. The latter is preceded by a brief introduction to Markov chains. The remainder of the book is concerned with applications of the theory to important models that are used in economics, political science, biostatistics, and other applied fields. New to the second edition is a chapter on semiparametric regression and new sections on the ordinal probit, item response, factor analysis, ARCH-GARCH, and stochastic volatility models. The new edition also emphasizes the R programming language, which has become the most widely used environment for Bayesian statistics.
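The Bernoulli example mentioned above has a closed-form conjugate analysis, which makes it a convenient check on MCMC output. The hedged Python sketch below (the textbook itself works in R) compares the exact Beta posterior with a simple random-walk Metropolis sampler; the data and prior are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Data: 10 successes in 40 Bernoulli trials; prior theta ~ Beta(2, 2).
successes, trials, a0, b0 = 10, 40, 2.0, 2.0

# Exact conjugate posterior: Beta(a0 + successes, b0 + failures).
post = stats.beta(a0 + successes, b0 + trials - successes)
print(f"exact posterior mean {post.mean():.4f}, sd {post.std():.4f}")

# Random-walk Metropolis on theta (overkill here, but mirrors the MCMC chapters).
def log_post(theta):
    if not 0 < theta < 1:
        return -np.inf
    return (a0 + successes - 1) * np.log(theta) + (b0 + trials - successes - 1) * np.log(1 - theta)

theta, draws = 0.5, []
for _ in range(20_000):
    prop = theta + rng.normal(scale=0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta)
draws = np.array(draws[5_000:])                      # discard burn-in
print(f"MCMC  posterior mean {draws.mean():.4f}, sd {draws.std():.4f}")
```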
This book presents a novel approach to time series econometrics, which studies the behavior of nonlinear stochastic processes. This approach allows for an arbitrary dependence structure in the increments and provides a generalization with respect to the standard linear independent increments assumption of classical time series models. The book offers a solution to the problem of a general semiparametric approach, which is given by a concept called C-convolution (convolution of dependent variables), and the corresponding theory of convolution-based copulas. Intended for econometrics and statistics scholars with a special interest in time series analysis and copula functions (or other nonparametric approaches), the book is also useful for doctoral students with a basic knowledge of copula functions wanting to learn about the latest research developments in the field.
This book discusses the problem of model choice when the statistical models are separate, also called nonnested. Chapter 1 provides an introduction, motivating examples and a general overview of the problem. Chapter 2 presents the classical or frequentist approach to the problem as well as several alternative procedures and their properties. Chapter 3 explores the Bayesian approach, the limitations of the classical Bayes factors and the proposed alternative Bayes factors to overcome these limitations. It also discusses a significance Bayesian procedure. Lastly, Chapter 4 examines the pure likelihood approach. Various real-data examples and computer simulations are provided throughout the text.
This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Unlike other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to practically handle violations of the classical linear regression assumptions, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences.
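For readers working in Python rather than Stata, R or SPSS, a minimal multiple linear regression producing the kind of output the book teaches you to interpret can be run with statsmodels; the data below are simulated purely for illustration, and the variable names are made up.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Simulated data: wage depends on education and experience plus noise (illustrative only).
n = 200
education = rng.integers(8, 20, size=n)
experience = rng.integers(0, 30, size=n)
wage = 2.0 + 0.8 * education + 0.3 * experience + rng.normal(scale=2.0, size=n)

X = sm.add_constant(np.column_stack([education, experience]))
model = sm.OLS(wage, X).fit()
print(model.summary(xname=["const", "education", "experience"]))
```

The summary table shows the coefficients, standard errors, t-statistics and R-squared that Parts 1 and 2 of the book walk through.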
This book presents the reader with new operators and matrices that arise in the area of matrix calculus. The properties of these mathematical concepts are investigated and linked with zero-one matrices such as the commutation matrix. Elimination and duplication matrices are revisited and partitioned into submatrices. Studying the properties of these submatrices facilitates achieving new results for the original matrices themselves. Different concepts of matrix derivatives are presented and transformation principles linking these concepts are obtained. One of these concepts is used to derive new matrix calculus results, some involving the new operators and others the derivatives of the operators themselves. The last chapter contains applications of matrix calculus, including optimization, differentiation of log-likelihood functions, iterative interpretations of maximum likelihood estimators, and a Lagrange multiplier test for endogeneity.
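A quick way to get a feel for the zero-one matrices involved is to construct the commutation matrix K, which maps vec(A) to vec(A'). The following Python sketch (not the author's notation or code) builds it and verifies its defining property.

```python
import numpy as np

def vec(A):
    return A.flatten(order="F")       # column-stacking vec operator

def commutation_matrix(m, n):
    """K_{m,n} with K @ vec(A) = vec(A.T) for any m x n matrix A."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0   # sends entry A[i, j] of vec(A) to its place in vec(A.T)
    return K

A = np.arange(6, dtype=float).reshape(2, 3)   # a 2 x 3 example
K = commutation_matrix(2, 3)
print(np.allclose(K @ vec(A), vec(A.T)))      # True: K vec(A) = vec(A')
print(np.allclose(K.T @ K, np.eye(6)))        # True: K is a permutation (orthogonal) matrix
```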
This textbook provides a coherent introduction to the main concepts and methods of one-parameter statistical inference. Intended for students of Mathematics taking their first course in Statistics, the focus is on Statistics for Mathematicians rather than on Mathematical Statistics. The goal is not to focus on the mathematical/theoretical aspects of the subject, but rather to provide an introduction to the subject tailored to the mindset and tastes of Mathematics students, who are sometimes turned off by the informal nature of Statistics courses. This book can be used as the basis for an elementary semester-long first course on Statistics with a firm sense of direction that does not sacrifice rigor. The deeper goal of the text is to attract the attention of promising Mathematics students.
With a new author team contributing decades of practical experience, this fully updated and thoroughly classroom-tested second edition textbook prepares students and practitioners to create effective forecasting models and master the techniques of time series analysis. Taking a practical and example-driven approach, this textbook summarises the most critical decisions, techniques and steps involved in creating forecasting models for business and economics. Students are led through the process with an entirely new set of carefully developed theoretical and practical exercises. Chapters examine the key features of economic time series, univariate time series analysis, trends, seasonality, aberrant observations, conditional heteroskedasticity and ARCH models, non-linearity and multivariate time series, making this a complete practical guide. A companion website with downloadable datasets, exercises and lecture slides rounds out the full learning package.
This book assesses how efficient primary and upper primary education is across different states of India, considering both output-oriented and input-oriented measures of technical efficiency. It identifies the most important factors that could produce differential efficiency among the states, including the effects of central grants, school-specific infrastructure, social indicators and policy variables, as well as state-specific factors such as per capita net state domestic product from the service sector, inequality in the distribution of income (Gini coefficient), the percentage of people living below the poverty line, and population density. The study covers the period 2005-06 to 2010-11 and all the states and union territories of India, which are categorized into two separate groups: (i) General Category States (GCS); and (ii) Special Category States (SCS) and Union Territories (UT). It uses non-parametric Data Envelopment Analysis (DEA) and obtains the Technology Closeness Ratio (TCR), which measures whether the maximum output producible from an input bundle by a school within a given group is as high as what could be produced if the school could choose to join the other group. The major departure of this book is its approach to estimating technical efficiency (TE), which does not use a single frontier encompassing all the states and UT, as is done in the available literature. Rather, this method assumes that GCS, SCS and UT are not homogeneous and operate under different fiscal and economic conditions.
Focusing on what actuaries need in practice, this introductory account provides readers with essential tools for handling complex problems and explains how simulation models can be created, used and re-used (with modifications) in related situations. The book begins by outlining the basic tools of modelling and simulation, including a discussion of the Monte Carlo method and its use. Part II deals with general insurance and Part III with life insurance and financial risk. Algorithms that can be implemented on any programming platform are spread throughout and a program library written in R is included. Numerous figures and experiments with R-code illustrate the text. The author's non-technical approach is ideal for graduate students, the only prerequisites being introductory courses in calculus and linear algebra, probability and statistics. The book will also be of value to actuaries and other analysts in the industry looking to update their skills.
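As a first taste of the Monte Carlo method introduced at the start of the book (its own program library is in R), the Python sketch below simulates aggregate claims from a compound Poisson model with Gamma-distributed claim sizes and reads off a reserve at a chosen quantile; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative portfolio: claim counts ~ Poisson(mu), claim sizes ~ Gamma(shape, scale).
mu, shape, scale = 80, 2.0, 5_000.0
n_sims = 50_000

counts = rng.poisson(mu, size=n_sims)
# Aggregate claims per simulated year: the sum of 'count' Gamma-distributed claim sizes.
totals = np.array([rng.gamma(shape, scale, size=c).sum() for c in counts])

print(f"mean aggregate claims : {totals.mean():,.0f}")
print(f"99% reserve (quantile): {np.quantile(totals, 0.99):,.0f}")
```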
This book is a comprehensive introduction to simulation and modelling techniques and their application in the management of organisations. It is rooted in a thorough understanding of systems theory applied to organisations and focuses on how this theory can be applied to the econometric models used in managing them. The econometric models in this book employ linear and dynamic programming, graph theory, queuing theory, game theory and related tools, and are presented and analysed in various fields of application, such as investment management, stock management, strategic decision making, management of production costs and the lifecycle costs of quality and non-quality products, and production quality management.
The case-control approach is a powerful method for investigating factors that may explain a particular event. It is extensively used in epidemiology to study disease incidence, one of the best-known examples being Bradford Hill and Doll's investigation of the possible connection between cigarette smoking and lung cancer. More recently, case-control studies have been increasingly used in other fields, including sociology and econometrics. With a particular focus on statistical analysis, this book is ideal for applied and theoretical statisticians wanting an up-to-date introduction to the field. It covers the fundamentals of case-control study design and analysis as well as more recent developments, including two-stage studies, case-only studies and methods for case-control sampling in time. The latter have important applications in large prospective cohorts which require case-control sampling designs to make efficient use of resources. More theoretical background is provided in an appendix for those new to the field.
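The basic case-control analysis reduces to an odds ratio computed from a two-by-two exposure table, with smoking and lung cancer as the classic example. A hedged Python sketch with made-up counts, using Woolf's confidence interval:

```python
import numpy as np
from scipy.stats import norm

# Made-up 2x2 table: rows = exposed / unexposed, columns = cases / controls.
a, b = 150, 100   # exposed cases, exposed controls
c, d = 50, 200    # unexposed cases, unexposed controls

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)          # Woolf's standard error on the log scale
z = norm.ppf(0.975)
ci = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)

print(f"odds ratio {odds_ratio:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```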
This edited book contains several state-of-the-art papers devoted to the econometrics of risk. Some papers provide theoretical analyses of the corresponding mathematical, statistical, computational and economic models, while others describe applications of novel risk-related econometric techniques to real-life economic situations. The book presents methods developed only recently, in particular methods using non-Gaussian heavy-tailed distributions, methods using non-Gaussian copulas to properly take into account dependence between different quantities, methods taking into account imprecise ("fuzzy") expert knowledge, and many other innovative techniques. This versatile volume helps practitioners learn how to apply new techniques of the econometrics of risk, and researchers to further improve existing models and to come up with new ideas on how best to take economic risks into account.
Economic Modeling Using Artificial Intelligence Methods examines the application of artificial intelligence methods to model economic data. Traditionally, economic data have been modeled in the linear domain, where the principles of superposition are valid. The application of artificial intelligence to economic modeling allows for flexible, multi-order, non-linear modeling. Game theory has also been applied extensively in economic modeling; however, its inherent limitations when dealing with many-player games encourage the use of multi-agent systems for modeling economic phenomena. The artificial intelligence techniques used to model economic data include multi-layer perceptron neural networks, radial basis functions, support vector machines, rough sets, genetic algorithms, particle swarm optimization, simulated annealing, multi-agent systems, incremental learning and fuzzy networks. Signal processing techniques are explored to analyze economic data; these include time domain methods, time-frequency domain methods and fractal dimension approaches. Interesting economic problems such as causality versus correlation, simulating the stock market, modeling and controlling inflation, option pricing, modeling economic growth and portfolio optimization are examined. The relationship between economic dependency and interstate conflict is explored, and knowledge of how economics can be used to foster peace - and vice versa - is investigated. The book deals with the issue of causality in the non-linear domain and applies automatic relevance determination, the evidence framework, the Bayesian approach and Granger causality to understand causality and correlation. Economic Modeling Using Artificial Intelligence Methods makes an important contribution to the area of econometrics, and is a valuable source of reference for graduate students, researchers and financial practitioners.
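The causality-versus-correlation question mentioned above is often probed with a Granger causality test: do lags of x improve the prediction of y beyond y's own lags? A self-contained Python sketch on simulated series (not the book's case studies) computes the standard F-test by comparing restricted and unrestricted regressions.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(7)

# Simulate y driven by lagged x, so x Granger-causes y (illustrative only).
T, p = 500, 2                       # sample size and lag order
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal(scale=0.5)

# Lag matrices for the restricted (y lags only) and unrestricted (y and x lags) models.
Y = y[p:]
ylags = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
xlags = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
const = np.ones((T - p, 1))

def rss(X, target):
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return resid @ resid

rss_r = rss(np.hstack([const, ylags]), Y)
rss_u = rss(np.hstack([const, ylags, xlags]), Y)
df_num, df_den = p, (T - p) - (1 + 2 * p)
F = ((rss_r - rss_u) / df_num) / (rss_u / df_den)
print(f"F = {F:.2f}, p-value = {1 - f_dist.cdf(F, df_num, df_den):.4g}")
```

A small p-value rejects the null that the x lags add nothing, i.e. it indicates Granger causality from x to y.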
Though globalisation of the world economy is currently a powerful force, people’s international mobility appears to still be very limited. The goal of this book is to improve our knowledge of the true effects of migration flows. It includes contributions by prominent academic researchers analysing the socio-economic impact of migration in a variety of contexts: interconnection of people and trade flows, causes and consequences of capital remittances, understanding the macroeconomic impact of migration and the labour market effects of people’s flows. The latest analytical methodologies are employed in all chapters, while interesting policy guidelines emerge from the investigations. The style of the volume makes it accessible for both non-experts and advanced readers interested in this hot topic of today’s world.