Econometrics
Recent economic history suggests that a key element in economic growth and development for many countries has been an aggressive export policy and a complementary import policy. Such policies can be very effective provided that resources are used wisely to encourage exports from industries that can be competitive in the international arena. Also, import protection must be used carefully so that it encourages infant industries instead of providing rents to industries that are not competitive. Policy makers may use a variety of methods of analysis in planning trade policy. As computing power has grown in recent years, increasing attention has been given to economic models as one of the most powerful aids to policy making. These models can be used, on the one hand, to help in selecting export industries to encourage and infant industries to protect and, on the other hand, to chart the larger effects of trade policy on the entire economy. While many models have been developed in recent years, there has not been any analysis of the strengths and weaknesses of the various types of models. Therefore, this monograph provides a review and analysis of the models which can be used to analyze dynamic comparative advantage.
Parallel Algorithms for Linear Models provides a complete and detailed account of the design, analysis and implementation of parallel algorithms for solving large-scale linear models. It investigates and presents efficient, numerically stable algorithms for computing the least-squares estimators and other quantities of interest on massively parallel systems. The monograph is in two parts. The first part consists of four chapters and deals with the computational aspects of solving linear models that have applicability in diverse areas. The remaining two chapters form the second part, which concentrates on numerical and computational methods for solving various problems associated with seemingly unrelated regression equations (SURE) and simultaneous equations models. The practical issues of the parallel algorithms and the theoretical aspects of the numerical methods will be of interest to a broad range of researchers working in the areas of numerical and computational methods in statistics and econometrics, parallel numerical algorithms, parallel computing and numerical linear algebra. The aim of this monograph is to promote research at the interface of econometrics, computational statistics, numerical linear algebra and parallelism.
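The numerically stable least-squares computation this blurb highlights can be illustrated with a minimal serial sketch using the QR decomposition, one standard stable approach; the data and solver here are illustrative assumptions, not the monograph's parallel algorithms.

```python
import numpy as np

# Minimal serial sketch of a numerically stable least-squares solve via
# the QR decomposition (illustrative only; the monograph's massively
# parallel algorithms are far more elaborate).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                # toy design matrix
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=100)

Q, R = np.linalg.qr(X)                       # X = QR, R upper triangular
beta_hat = np.linalg.solve(R, Q.T @ y)       # solve R b = Q'y, avoiding the
print(beta_hat)                              # ill-conditioned normal equations
```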
In recent years there has been a growing interest in and concern for the development of a sound body of spatial statistical theory. This work has been undertaken by geographers, statisticians, regional scientists, econometricians, and others (e.g., sociologists). It has led to the publication of a number of books, including Cliff and Ord's Spatial Processes (1981), Bartlett's The Statistical Analysis of Spatial Pattern (1975), Ripley's Spatial Statistics (1981), Paelinck and Klaassen's Spatial Econometrics (1979), Ahuja and Schachter's Pattern Models (1983), and Upton and Fingleton's Spatial Data Analysis by Example (1985). The first of these books presents a useful introduction to the topic of spatial autocorrelation, focusing on autocorrelation indices and their sampling distributions. The second of these books is quite brief, but nevertheless furnishes an eloquent introduction to the relationship between spatial autoregressive and two-dimensional spectral models. Ripley's book virtually ignores autoregressive and trend surface modelling, and focuses almost solely on point pattern analysis. Paelinck and Klaassen's book closely follows an econometric textbook format, and as a result overlooks much of the material necessary for successful spatial data analysis. It almost exclusively addresses distance and gravity models, with some treatment of autoregressive modelling. Pattern Models supplements Cliff and Ord's book; in combination the two provide a good introduction to spatial data analysis. Its basic limitation is a preoccupation with the geometry of planar patterns, which makes it very narrow in scope.
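For readers new to the autocorrelation indices mentioned above, here is a minimal sketch of Moran's I, the most widely used such index; the four-region adjacency matrix is a toy assumption, not an example from any of the cited books.

```python
import numpy as np

# Minimal sketch of Moran's I, a standard spatial autocorrelation index:
# I = (n / S0) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2
x = np.array([3.0, 4.0, 9.0, 10.0])           # attribute values by region
W = np.array([[0, 1, 0, 0],                   # toy adjacency matrix:
              [1, 0, 1, 0],                   # w_ij = 1 if regions i and j
              [0, 1, 0, 1],                   # are neighbours
              [0, 0, 1, 0]], dtype=float)

n, S0 = len(x), W.sum()
z = x - x.mean()
I = (n / S0) * (z @ W @ z) / (z @ z)
print(I)   # values near +1 suggest clustering, near -1 dispersion
```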
This book focuses on the issues of rating scheme design and risk aggregation for the risk matrix, a popular risk assessment tool in many fields. Although the risk matrix is usually treated as a qualitative tool, this book conducts the analysis from a quantitative perspective. The content belongs to the scope of risk management and, more specifically, to quick risk assessment. The book is suitable for researchers and practitioners involved in qualitative or quick risk assessment, and it helps readers understand how to design more convincing risk assessment tools and perform more accurate risk assessment in an uncertain context.
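As a minimal illustration of what a rating scheme for a risk matrix looks like (a toy 3x3 design, not the book's own scheme):

```python
# Minimal sketch of a risk matrix rating scheme (toy 3x3 grades, not the
# book's design): likelihood and severity are graded 1..3 and each cell
# assigns a qualitative rating.
RATINGS = [
    ["Low",    "Low",    "Medium"],   # likelihood 1
    ["Low",    "Medium", "High"],     # likelihood 2
    ["Medium", "High",   "High"],     # likelihood 3
]

def rate(likelihood: int, severity: int) -> str:
    """Look up the rating for 1-based likelihood/severity grades."""
    return RATINGS[likelihood - 1][severity - 1]

print(rate(likelihood=3, severity=2))  # -> "High"
```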
This book presents the effects of integrating information and communication technologies (ICT) and economic processes in macroeconomic dynamics, finance, marketing, industrial policies, and in government economic strategy. The text explores modeling and applications in these fields and also describes, in a clear and accessible manner, the theories that guide the integration among information technology (IT), telecommunications, and the economy, while presenting examples of their applications. Current trends such as artificial intelligence, machine learning, and big data technologies used in economics are also included. This volume is suitable for researchers, practitioners, and students working in economic theory and the computational social sciences.
'Fascinating . . . timely' Daily Mail 'Refreshingly clear and engaging' Tim Harford 'Delightful . . . full of unique insights' Prof Sir David Spiegelhalter There's no getting away from statistics. We encounter them every day. We are all users of statistics whether we like it or not. Do missed appointments really cost the NHS £1bn per year? What's the difference between the mean gender pay gap and the median gender pay gap? How can we work out if a claim that we use 42 billion single-use plastic straws per year in the UK is accurate? What did the Vote Leave campaign's £350m bus really mean? How can we tell if the headline 'Public pensions cost you £4,000 a year' is correct? Does snow really cost the UK economy £1bn per day? But how do we distinguish statistical fact from fiction? What can we do to decide whether a number, claim or news story is accurate? Without an understanding of data, we cannot truly understand what is going on in the world around us. Written by Anthony Reuben, the BBC's first head of statistics, Statistical is an accessible and empowering guide to challenging the numbers all around us.
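The mean-versus-median question above turns on skewness; a minimal sketch with made-up salaries (not figures from the book) shows how a few high earners separate the two:

```python
import statistics

# Made-up salaries (illustrative only): one very high earner pulls the
# mean well above the median, which is why mean and median pay gaps
# can tell different stories.
salaries = [22_000, 24_000, 26_000, 28_000, 30_000, 250_000]
print(statistics.mean(salaries))    # 63333.33... (dragged up by the outlier)
print(statistics.median(salaries))  # 27000.0 (the typical earner)
```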
The first edition of this book has been described as a landmark book, the first of its kind in applied econometrics. This second edition is thoroughly revised and updated and explains how to use many recent technical developments in time series econometrics. The main objective of the book is to help applied economists with a limited background in econometric estimation theory to understand and apply widely used time series econometric techniques.
Panel data is a data type increasingly used in research in economics, the social sciences, and medicine. Its primary characteristic is that the data variation goes jointly over space (across individuals, firms, countries, etc.) and time (over years, months, etc.). Panel data allow examination of problems that cannot be handled by cross-section data or time-series data. Panel data analysis is a core field in modern econometrics and multivariate statistics, and studies based on such data occupy a growing part of many other disciplines. The book is intended as a text for master's and advanced undergraduate courses. It may also be useful for PhD students writing theses in empirical and applied economics and for readers conducting empirical work on their own. The book attempts to take the reader gradually from simple models and methods in scalar (simple vector) notation to more complex models in matrix notation. A distinctive feature is that more attention is given to unbalanced panel data, the measurement error problem, random coefficient approaches, the interface between panel data and aggregation, and the interface between unbalanced panels and truncated and censored data sets. The 12 chapters are intended to be largely self-contained, although there is also a natural progression. Most of the chapters contain commented examples based on genuine data, mainly taken from panel data applications to economics. Although the book, inter alia through its use of examples, is aimed primarily at students of economics and econometrics, it may also be useful for readers in the social sciences, psychology, and medicine, provided they have a sufficient background in statistics, notably basic regression analysis and elementary linear algebra.
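A minimal sketch of the space-and-time structure described above (toy simulated data, not an example from the book): the within transformation behind the standard fixed-effects estimator demeans each individual's observations over time, removing the individual effect.

```python
import numpy as np

# Minimal fixed-effects (within) estimator sketch on a toy balanced panel:
# N individuals observed over T periods, y_it = a_i + b*x_it + e_it.
# Demeaning each individual's x and y over time removes the effect a_i.
rng = np.random.default_rng(1)
N, T, b = 50, 6, 2.0
a = rng.normal(size=N).repeat(T)           # individual effects a_i
x = rng.normal(size=N * T) + 0.5 * a       # x correlated with a_i
y = a + b * x + rng.normal(scale=0.1, size=N * T)

ids = np.arange(N).repeat(T)
x_dm = x - np.bincount(ids, x)[ids] / T    # x_it - mean over t of x_i
y_dm = y - np.bincount(ids, y)[ids] / T
b_hat = (x_dm @ y_dm) / (x_dm @ x_dm)      # within estimator of b
print(b_hat)                               # close to 2.0
```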
Franz Ferschl is seventy. According to his birth certificate it is true, but it is unbelievable. Two of the three editors remember very well the Golden Age of Operations Research at Bonn, when Franz Ferschl worked together with Wilhelm Krelle, Martin Beckmann and Horst Albach. The importance of this fruitful cooperation is reflected by the fact that half of the contributors to this book were strongly influenced by Franz Ferschl and his colleagues at the University of Bonn. Clearly, Franz Ferschl left his traces at all the other places of his professional activity, in Vienna and Munich. This is demonstrated by the present volume as well. Born in 1929 in the Upper-Austrian Mühlviertel, his scientific education brought him to Vienna, where he studied mathematics. In his early years he was attracted by Statistics and Operations Research. During his employment at the Österreichische Bundeskammer für Gewerbliche Wirtschaft in Vienna he prepared his famous book on queueing theory and stochastic processes in economics. This work was achieved during the scarce time left to him by his duties at the Bundeskammer, mostly between 6 a.m. and midnight. All those troubles were, however, soon rewarded by the chair of statistics at Bonn University. As a real Austrian, the amenities of the Rhineland could not prevent him from returning to Vienna, where he took the chair of statistics.
This book addresses the ultimate goal of economic studies: to predict how the economy develops, and what will happen if we implement different policies. To be able to do that, we need a good understanding of what causes what in economics. Prediction and causality in economics are the main topics of this book's chapters; they use both more traditional and more innovative techniques, including quantum ideas, to make predictions about the world economy (international trade, exchange rates), about a country's economy (gross domestic product, stock index, inflation rate), and about individual enterprises, banks, and micro-finance institutions: their future performance (including the risk of bankruptcy), their stock prices, and their liquidity. Several papers study how COVID-19 has influenced the world economy. This book helps practitioners and researchers to learn more about prediction and causality in economics, and to further develop this important research direction.
This title is a Pearson Global Edition. The Editorial team at Pearson has worked closely with educators around the world to include content which is especially relevant to students outside the United States. This package includes MyLab. For courses in Business Statistics. A classic text for accuracy and statistical precision, Statistics for Business and Economics enables students to conduct serious analysis of applied problems rather than running simple "canned" applications. This text is also at a mathematically higher level than most business statistics texts and provides students with the knowledge they need to become stronger analysts for future managerial positions. In this regard, it emphasizes an understanding of the assumptions that are necessary for professional analysis. In particular, it has greatly expanded the number of applications that utilize data from applied policy and research settings. The Ninth Edition of this book has been revised and updated to provide students with improved problem contexts for learning how statistical methods can improve their analysis and understanding of business and economics. This revision recognizes the globalization of statistical study and in particular the global market for this book. Reach every student by pairing this text with MyLab Statistics. MyLab (TM) is the teaching and learning platform that empowers you to reach every student. By combining trusted author content with digital tools and a flexible platform, MyLab personalizes the learning experience and improves results for each student. MyLab Statistics should only be purchased when required by an instructor. Please be sure you have the correct ISBN and Course ID. Instructors, contact your Pearson representative for more information.
Through analysis of the European Union Emissions Trading Scheme (EU ETS) and the Clean Development Mechanism (CDM), this book demonstrates how to use a variety of econometric techniques to analyze the evolving and expanding carbon markets sphere, techniques that can be extrapolated to the worldwide marketplace. It features stylized facts about carbon markets from an economics perspective, as well as covering key aspects of pricing strategies, risk and portfolio management.
The research and its outcomes presented here focus on spatial sampling of agricultural resources. The authors introduce sampling designs and methods for producing accurate estimates of crop production for harvests across different regions and countries. With the help of real and simulated examples performed with the open-source software R, readers will learn about the different phases of spatial data collection. The agricultural data analyzed in this book help policymakers and market stakeholders to monitor the production of agricultural goods and its effects on environment and food safety.
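One of the sampling-design ideas the blurb alludes to can be sketched minimally as stratified random sampling of fields by region; the data are simulated toys, and the sketch is in Python rather than the book's R:

```python
import random

# Minimal sketch of stratified random sampling for crop estimation
# (toy simulated data, not the book's R examples): sample a fixed
# fraction of fields per region, then expand the sample mean to the
# region's field count.
random.seed(42)
fields_by_region = {                     # region -> per-field tonnes
    "north": [random.gauss(10, 2) for _ in range(200)],
    "south": [random.gauss(14, 3) for _ in range(300)],
}

frac, estimate = 0.1, 0.0
for region, fields in fields_by_region.items():
    sample = random.sample(fields, int(frac * len(fields)))
    estimate += len(fields) * sum(sample) / len(sample)

print(round(estimate))                   # estimated total production
print(round(sum(sum(f) for f in fields_by_region.values())))  # true total
```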
This book offers:
- A thorough presentation of the problem of portfolio optimization, leading in a natural way to the Capital Market Theory
- Dynamic programming and the optimal portfolio selection-consumption problem through time
- An intuitive approach to Brownian motion and stochastic integral models for continuous time problems
- The Black-Scholes equation for simple European option values, derived in several different ways (see the pricing sketch below)
- A chapter on several types of exotic options and one on the management of risk in several contexts
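A minimal sketch of the standard Black-Scholes formula for a European call, as a pointer to the kind of result the book derives (textbook form; the parameter values are made up):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes price of a European call (no dividends):
    C = S*N(d1) - K*exp(-r*T)*N(d2), the standard textbook formula."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Made-up parameters: spot 100, strike 100, 5% rate, 20% vol, 1 year.
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 2))  # ~10.45
```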
"Game Theory for Economists" introduces economists to the game-theoretic approach of modelling economic behaviour and interaction, focusing on concepts and ideas from the vast field of game-theoretic models which find commonly used applications in economics. This careful selection of topics allows the reader to concentrate on the parts of the game which are the most relevant for the economist who does not want to become a specialist. Written at a level appropriate for a student or researcher with a solid microeconomic background, the book should provide the reader with skills necessary to formalize economic games and to make them accessible for game theoretic analysis. It offers a concise introduction to game theory which provides economists with the techniques and results necessary to follow the literature in economic theory; helps the reader formalize economic problems; and, concentrates on equilibrium concepts that are most commonly used in economics.
In the Administration building at Linköping University we have one of Oscar Reutersvärd's "Impossible Figures" in three dimensions. I call it "Perspectives of Science." When viewed from a specific point in space there is order and structure in the 3-dimensional figure. When viewed from other points there is disorder and no structure. If a specific scientific paradigm is used, there is order and structure; otherwise there is disorder and no structure. My perspective in Transportation Science has focused on understanding the mathematical structure and the logic underlying the choice probability models in common use. My book with N. F. Stewart on the Gravity model (Erlander and Stewart 1990) was written in this perspective. The present book stems from the same desire to understand underlying assumptions and structure. It investigates how far a new way of defining Cost-Minimizing Behavior can take us. It turns out that all commonly used choice probability distributions of logit type (log linear probability functions) follow from cost-minimizing behavior defined in the new way. In addition some new nested models appear.
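The logit-type choice probabilities mentioned here have the standard form P_i = exp(-θ c_i) / Σ_j exp(-θ c_j); a minimal sketch with toy travel costs (standard logit form, not the book's cost-minimizing derivation):

```python
from math import exp

# Minimal sketch of logit choice probabilities over travel costs:
# P_i = exp(-theta * c_i) / sum_j exp(-theta * c_j).
# Toy costs and theta; not the book's derivation.
costs = {"car": 30.0, "bus": 25.0, "train": 28.0}   # minutes, say
theta = 0.1                                          # cost sensitivity

weights = {mode: exp(-theta * c) for mode, c in costs.items()}
total = sum(weights.values())
probs = {mode: w / total for mode, w in weights.items()}

print(probs)   # cheaper modes receive higher choice probability
```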
You may like...
The Oxford Handbook of the Economics of… by Yann Bramoulle, Andrea Galeotti, … (Hardcover) R5,455, Discovery Miles 54 550
Design and Analysis of Time Series… by Richard McCleary, David McDowall, … (Hardcover) R3,286, Discovery Miles 32 860
Introductory Econometrics - A Modern… by Jeffrey Wooldridge (Hardcover)
Introduction to Computational Economics… by Hans Fehr, Fabian Kindermann (Hardcover) R4,258, Discovery Miles 42 580
Agent-Based Modeling and Network… by Akira Namatame, Shu-Heng Chen (Hardcover) R2,970, Discovery Miles 29 700
Handbook of Research Methods and… by Nigar Hashimzade, Michael A. Thornton (Hardcover) R8,882, Discovery Miles 88 820