Books > Business & Economics > Economics > Econometrics
Professionals are constantly searching for competitive solutions to help determine current and future economic tendencies. Econometrics uses statistical methods and real-world data to predict and establish specific trends within business and finance. This analytical method holds enormous potential, but the research professionals need in order to understand and implement the approach is lacking. Applied Econometric Analysis: Emerging Research and Opportunities explores the theoretical and practical aspects of detailed econometric theories and applications within economics, political science, public policy, business, and finance. Featuring coverage of a broad range of topics such as cointegration, machine learning, and time series analysis, this book is ideally designed for economists, policymakers, financial analysts, marketers, researchers, academics, and graduate students seeking research on the various techniques of econometric concepts.
This book explores the potential for renewable energy development and the adoption of sustainable production processes in Latin America and the Caribbean. By examining the energy transition process, the impact of environmental degradation, and the relationship between renewable energy sources and economic growth, the effects of increased globalisation and liberalisation in this part of the world are analysed. Particular attention is given to renewable energy investment, the energy-economics growth nexus, the impact of trade openness, and the mitigation of carbon emissions. This book aims to highlight econometric techniques that can be used to tackle issues relating to globalisation, the energy transition, and environmental degradation. It will be relevant to researchers and policymakers interested in energy and environmental economics.
The rich, multi-faceted and multi-disciplinary field of matching-based market design is an active and important one due to its highly successful applications with economic and sociological impact. Its home is economics, but with intimate connections to algorithm design and operations research. With chapters contributed by over fifty top researchers from all three disciplines, this volume is unique in its breadth and depth, while still being a cohesive and unified picture of the field, suitable for the uninitiated as well as the expert. It explains the dominant ideas from computer science and economics underlying the most important results on market design and introduces the main algorithmic questions and combinatorial structures. Methodologies and applications from both the pre-Internet and post-Internet eras are covered in detail. Key chapters discuss the basic notions of efficiency, fairness and incentives, and the way market design seeks solutions guided by normative criteria borrowed from social choice theory.
Gini's mean difference (GMD) was first introduced by Corrado Gini in 1912 as an alternative measure of variability. GMD and the parameters derived from it (such as the Gini coefficient or the concentration ratio) have been used in the study of income distribution for almost a century. In practice, the use of GMD as a measure of variability is justified whenever the investigator is not ready to impose, without questioning, the convenient world of normality. This makes the GMD of critical importance in the complex research of statisticians, economists, econometricians, and policy makers. This book focuses on reproducing variance-based analyses by replacing the variance with the GMD and its variants. In this way, the text shows how almost everything that can be done with the variance as a measure of variability can be replicated using Gini. Beyond this, there are marked benefits to using Gini over other methods. One advantage of the Gini methodology is that it provides a unified system that enables the user to learn about various aspects of the underlying distribution. It also provides a systematic method and a unified terminology. Using the Gini methodology can reduce the risk of imposing assumptions on the model that are not supported by the data. With these benefits in mind, the text uses the covariance-based approach, though applications to other approaches are mentioned as well.
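For concreteness (this sketch is not taken from the book), Gini's mean difference of a sample is the average absolute difference over all ordered pairs of observations, and the Gini coefficient scales it by twice the mean; a minimal NumPy illustration with purely illustrative data:

```python
import numpy as np

def gini_mean_difference(x):
    """Gini's mean difference: average |x_i - x_j| over all ordered pairs i != j."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = np.abs(x[:, None] - x[None, :])   # pairwise absolute differences
    return diffs.sum() / (n * (n - 1))        # convention: divide by n*(n-1)

def gini_coefficient(x):
    """Relative concentration measure: GMD scaled by twice the mean."""
    x = np.asarray(x, dtype=float)
    return gini_mean_difference(x) / (2 * x.mean())

incomes = np.array([12_000, 18_000, 25_000, 40_000, 95_000])  # illustrative data
print(gini_mean_difference(incomes), gini_coefficient(incomes))
```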
This textbook presents methods and techniques for time series analysis and forecasting and shows how to use Python to implement them and solve data science problems. It covers not only common statistical approaches and time series models, including ARMA, SARIMA, VAR, GARCH and state space and Markov switching models for (non)stationary, multivariate and financial time series, but also modern machine learning procedures and challenges for time series forecasting. Providing an organic combination of the principles of time series analysis and Python programming, it enables the reader to study methods and techniques and practice writing and running Python code at the same time. Its data-driven approach to analyzing and modeling time series data helps new learners to visualize and interpret both the raw data and its computed results. Primarily intended for students of statistics, economics and data science with an undergraduate knowledge of probability and statistics, the book will equally appeal to industry professionals in the fields of artificial intelligence and data science, and anyone interested in using Python to solve time series problems.
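As a flavour of the workflow such a book covers, here is a minimal sketch (not taken from the book) that fits a seasonal ARIMA model with statsmodels and produces a forecast; the simulated series and the model orders are placeholders, not tuned choices:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Simulate a toy monthly series with trend and annual seasonality (placeholder data)
rng = np.random.default_rng(0)
t = np.arange(120)
y = pd.Series(0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120),
              index=pd.date_range("2015-01-31", periods=120, freq="M"))

# Fit a SARIMA(1,1,1)x(1,0,1,12) model; the orders are illustrative only
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
result = model.fit(disp=False)
print(result.summary())

# Forecast the next 12 months with confidence intervals
forecast = result.get_forecast(steps=12)
print(forecast.predicted_mean)
print(forecast.conf_int())
```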
Economists are regularly confronted with results of quantitative economics research. Econometrics: Theory and Applications with EViews provides a broad introduction to quantitative economic methods, for example how models arise, their underlying assumptions, and how estimates of parameters or other economic quantities are computed. The author combines econometric theory with practice by demonstrating its use with the software package EViews through extensive use of screen shots. The emphasis is on understanding how to select the right method of analysis for a given situation, and how to actually apply the theoretical methodology correctly. The EViews software package is available from Quantitative Micro Software. Written for any undergraduate or postgraduate course in econometrics.
In 1945, very early in the history of the development of a rigorous analytical theory of probability, Feller (1945) wrote a paper called "The fundamental limit theorems in probability" in which he set out what he considered to be "the two most important limit theorems in the modern theory of probability: the central limit theorem and the recently discovered ... 'Kolmogoroff's celebrated law of the iterated logarithm'." A little later in the article he added to these, via a charming description, the "little brother (of the central limit theorem), the weak law of large numbers," and also the strong law of large numbers, which he considers as a close relative of the law of the iterated logarithm. Feller might well have added to these also the beautiful and highly applicable results of renewal theory, which at the time he himself together with eminent colleagues were vigorously producing. Feller's introductory remarks include the visionary: "The history of probability shows that our problems must be treated in their greatest generality: only in this way can we hope to discover the most natural tools and to open channels for new progress. This remark leads naturally to that characteristic of our theory which makes it attractive beyond its importance for various applications: a combination of an amazing generality with algebraic precision."
Louis Phlips: The stabilisation of primary commodity prices, and the related issue of the stabilisation of export earnings of developing countries, have traditionally been studied without reference to the futures markets (that exist or could exist) for these commodities. These futures markets have in turn been studied in isolation. The same is true for the new developments on financial markets. Over the last few years, in particular since the 1985 tin crisis and the October 1987 stock exchange crisis, it has become evident that there are interactions between commodity, futures, and financial markets and that these interactions are very important, the more so as trade on futures and financial markets has shown a spectacular increase. This volume brings together a number of recent and unpublished papers on these interactions by leading specialists (and their students). A first set of papers examines how the use of futures markets could help stabilise export earnings of developing countries and how this compares to the rather unsuccessful UNCTAD-type interventions via buffer stocks, pegged prices and cartels. A second set of papers faces the fact, largely ignored in the literature, that commodity prices are determined in foreign currencies, with the result that developing countries suffer from the volatility of exchange rates of these currencies (even in cases where commodity prices are relatively stable). Financial markets are thus explicitly linked to futures and commodity markets.
This book focuses on quantitative survey methodology, data collection and cleaning methods. Providing starting tools for using and analyzing a file once a survey has been conducted, it addresses fields as diverse as advanced weighting, editing, and imputation, which are not well-covered in corresponding survey books. Moreover, it presents numerous empirical examples from the author's extensive research experience, particularly real data sets from multinational surveys.
Teaches the principles of sampling with examples from social sciences, public opinion research, public health, business, agriculture, and ecology. Has been thoroughly revised to incorporate recent research and applications. Includes a new chapter on nonprobability samples, and more than 200 new examples and exercises have been added.
New Directions in Computational Economics brings together for the first time a diverse selection of papers, sharing the underlying theme of application of computing technology as a tool for achieving solutions to realistic problems in computational economics and related areas in the environmental, ecological and energy fields. Part I of the volume addresses experimental and computational issues in auction mechanisms, including a survey of recent results for sealed bid auctions. The second contribution uses neural networks as the basis for estimating bid functions for first price sealed bid auctions. Also presented is the 'smart market' computational mechanism which better matches bids and offers for natural gas. Part II consists of papers that formulate and solve models of economic systems. Amman and Kendrick's paper deals with control models and the computational difficulties that result from nonconvexities. Using goal programming, Nagurney, Thore and Pan formulate spatial resource allocation models to analyze various policy issues. Thompson and Thrall next present a rigorous mathematical analysis of the relationship between efficiency and profitability. The problem of matching uncertain streams of assets and liabilities is solved using stochastic optimization techniques in the following paper in this section. Finally, Part III applies economic concepts to issues in computer science in addition to using computational techniques to solve economic models.
This volume collects authoritative contributions on analytical methods and mathematical statistics. The methods presented include resampling techniques; the minimization of divergence; estimation theory and regression, possibly under shape or other constraints or long memory; and iterative approximations when the optimal solution is difficult to achieve. It also investigates probability distributions with respect to their stability, heavy-tailedness, Fisher information and other aspects, both asymptotically and non-asymptotically. The book not only presents the latest mathematical and statistical methods and their extensions, but also offers solutions to real-world problems including option pricing. The selected, peer-reviewed contributions were originally presented at the workshop on Analytical Methods in Statistics, AMISTAT 2015, held in Prague, Czech Republic, November 10-13, 2015.
World-renowned experts in spatial statistics and spatial econometrics present the latest advances in specification and estimation of spatial econometric models. This includes information on the development of tools and software, and various applications. The text introduces new tests and estimators for spatial regression models, including discrete choice and simultaneous equation models. The performance of techniques is demonstrated through simulation results and a wide array of applications related to economic growth, international trade, knowledge externalities, population-employment dynamics, urban crime, land use, and environmental issues. An exciting new text for academics with a theoretical interest in spatial statistics and econometrics, and for practitioners looking for modern and up-to-date techniques.
This introductory overview explores the methods, models and interdisciplinary links of artificial economics, a new way of doing economics in which the interactions of artificial economic agents are computationally simulated to study their individual and group behavior patterns. Conceptually and intuitively, and with simple examples, Mercado addresses the differences between the basic assumptions and methods of artificial economics and those of mainstream economics. He goes on to explore various disciplines from which the concepts and methods of artificial economics originate; for example cognitive science, neuroscience, artificial intelligence, evolutionary science and complexity science. Introductory discussions on several controversial issues are offered, such as the application of the concepts of evolution and complexity in economics and the relationship between artificial intelligence and the philosophies of mind. This is one of the first books to fully address artificial economics, emphasizing its interdisciplinary links and presenting in a balanced way its occasionally controversial aspects.
Elementary Bayesian Statistics is a thorough and easily accessible introduction to the theory and practical application of Bayesian statistics. It presents methods to assist in the collection, summary and presentation of numerical data. Bayesian statistics is becoming an increasingly important and more frequently used method for analysing statistical data. The author defines concepts and methods with a variety of examples and uses a stage-by-stage approach to coach the reader through the applied examples. Also included is a wide range of problems to challenge the reader, and the book makes extensive use of Minitab to apply computational techniques to statistical problems. Issues covered include probability, Bayes's Theorem and categorical states, frequency, the Bernoulli process and Poisson process, estimation, testing hypotheses, and the normal process with known parameters and uncertain parameters. Elementary Bayesian Statistics will be an essential resource for students as a supplementary text in traditional statistics courses. It will also be welcomed by academics, researchers and econometricians wishing to know more about Bayesian statistics.
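The book works its examples in Minitab; purely as a rough illustration of the same kind of calculation, here is a minimal Python sketch of a conjugate Beta-Bernoulli update, with illustrative prior parameters and data:

```python
from scipy import stats

# Prior belief about a success probability p: Beta(a, b)
a_prior, b_prior = 2, 2

# Observed Bernoulli data: 7 successes and 3 failures (illustrative numbers)
successes, failures = 7, 3

# Conjugate update: the posterior is Beta(a + successes, b + failures)
a_post, b_post = a_prior + successes, b_prior + failures
posterior = stats.beta(a_post, b_post)

print("Posterior mean:", posterior.mean())                # (a + s) / (a + b + s + f)
print("95% credible interval:", posterior.interval(0.95))
```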
Economic Phenomena before and after War is the result of the author's search for a scientific explanation of modern wars, by means of economic statistical data, in the statistics of consumption, production and natural growth of population. The theory discussed assumes that a state of war in modern communities is dependent on the general economic equilibrium, which becomes more and more unstable as industrialization progresses. A state of war indicates a turning point in the action of balancing forces; it moves the economic forces in an opposite direction and is therefore a means for stabilizing the general economic equilibrium.
Applications of queueing network models have multiplied in the last generation, including scheduling of large manufacturing systems, control of patient flow in health systems, load balancing in cloud computing, and matching in ride sharing. These problems are too large and complex for exact solution, but their scale allows approximation. This book is the first comprehensive treatment of fluid scaling, diffusion scaling, and many-server scaling in a single text presented at a level suitable for graduate students. Fluid scaling is used to verify stability, in particular treating max weight policies, and to study optimal control of transient queueing networks. Diffusion scaling is used to control systems in balanced heavy traffic, by solving for optimal scheduling, admission control, and routing in Brownian networks. Many-server scaling is studied in the quality and efficiency driven Halfin-Whitt regime and applied to load balancing in the supermarket model and to bipartite matching in ride-sharing applications.
Economists can use computer algebra systems to manipulate symbolic models, derive numerical computations, and analyze empirical relationships among variables. Maxima is an open-source multi-platform computer algebra system that rivals proprietary software. Maxima's symbolic and computational capabilities enable economists and financial analysts to develop a deeper understanding of models by allowing them to explore the implications of differences in parameter values, providing numerical solutions to problems that would be otherwise intractable, and by providing graphical representations that can guide analysis. This book provides a step-by-step tutorial for using this program to examine the economic relationships that form the core of microeconomics in a way that complements traditional modeling techniques. Readers learn how to phrase the relevant analysis and how symbolic expressions, numerical computations, and graphical representations can be used to learn from microeconomic models. In particular, comparative statics analysis is facilitated. Little has been published on Maxima and its applications in economics and finance, and this volume will appeal to advanced undergraduates, graduate-level students studying microeconomics, academic researchers in economics and finance, economists, and financial analysts.
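The book itself works in Maxima; purely as an illustration of the comparative-statics idea it describes, here is a minimal sketch using Python's SymPy instead, with a textbook monopoly model whose symbols and parameters are illustrative:

```python
import sympy as sp

# Simple monopoly model: inverse demand p = a - b*q, constant marginal cost c
a, b, c, q = sp.symbols("a b c q", positive=True)
profit = (a - b * q) * q - c * q

# First-order condition gives the profit-maximising quantity
q_star = sp.solve(sp.diff(profit, q), q)[0]   # (a - c) / (2*b)

# Comparative statics: response of the optimal quantity to the cost parameter
dq_dc = sp.diff(q_star, c)                    # -1/(2*b) < 0
print(q_star, dq_dc)
```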
The volume examines the state-of-the-art of productivity and efficiency analysis. It brings together a selection of the best papers from the 10th North American Productivity Workshop. By analyzing world-wide perspectives on challenges that local economies and institutions may face when changes in productivity are observed, readers can quickly assess the impact of productivity measurement, productivity growth, dynamics of productivity change, measures of labor productivity, measures of technical efficiency in different sectors, frontier analysis, measures of performance, industry instability and spillover effects. The contributions in this volume focus on the theory and application of economics, econometrics, statistics, management science and operational research related to problems in the areas of productivity and efficiency measurement. Popular techniques and methodologies including stochastic frontier analysis and data envelopment analysis are represented. Chapters also cover broader issues related to measuring, understanding, incentivizing and improving the productivity and performance of firms, public services, and industries.
This book aims to help the reader better understand the importance of data analysis in project management. Moreover, it provides guidance by showing tools, methods, techniques and lessons learned on how to better utilize the data gathered from the projects. First and foremost, insight into the bridge between data analytics and project management aids practitioners looking for ways to maximize the practical value of data procured. The book equips organizations with the know-how necessary to adapt to a changing workplace dynamic through key lessons learned from past ventures. The book's integrated approach to investigating both fields enhances the value of research findings.
How might one determine if a financial institution is taking risk in a balanced and productive manner? A powerful tool to address this question is economic capital, which is a model-based measure of the amount of equity that an entity must hold to satisfactorily offset its risk-generating activities. This book, with a particular focus on the credit-risk dimension, pragmatically explores real-world economic-capital methodologies and applications. It begins with the thorny practical issues surrounding the construction of an (industrial-strength) credit-risk economic-capital model, defensibly determining its parameters, and ensuring its efficient implementation. It then broadens its gaze to examine various critical applications and extensions of economic capital; these include loan pricing, the computation of loan impairments, and stress testing. Along the way, typically working from first principles, various possible modelling choices and related concepts are examined. The end result is a useful reference for students and practitioners wishing to learn more about a centrally important financial-management device.
Features content that has been used extensively in a university setting, allowing the reader to benefit from tried and tested methods, practices, and knowledge. In contrast to existing books on the market, it details the specialized packages that have been developed over the past decade, and focuses on pulling real-time data directly from free data sources on the internet. It achieves its goal by providing a large number of examples in hot topics such as machine learning. Assumes no prior knowledge of R, allowing it to be useful to a range of people from undergraduates to professionals. Comprehensive explanations make the reader proficient in a multitude of advanced methods and provide overviews of many different resources that will be useful to readers.
Features: a self-contained book suitable for graduate students and post-doctoral fellows in financial mathematics and data science, as well as for practitioners working in the financial industry who deal with big data; all results are presented visually to aid understanding of the concepts.
Features: accessible to readers with a basic background in probability and statistics; covers fundamental concepts of experimental design and cause-effect relationships; introduces classical ANOVA models, including contrasts and multiple testing; provides an example-based introduction to mixed models; covers basic concepts of split-plot and incomplete block designs; R code available for all steps; supplementary website with additional resources and updates.
Anyone who wants to understand stock market cycles and develop a focused, thoughtful, and solidly grounded valuation approach to the stock market must read this book. Bolten explains the causes and patterns of the cycles and identifies the causes of stock price changes. He identifies the sources of risk in the stock market and in individual stocks, and shows how the interaction of expected return and risk creates stock market cycles. Bolten discusses the industry sectors most likely to be profitable investments in each stage of the stock market cycle, while identifying the warning signs of stock market bubbles and sinkholes. The role of the Federal Reserve in each stage of the stock market cycle is also discussed. All categories of risk are identified and explained, and the underlying causes of long-term stock price trends and cycles are highlighted. The book is useful in many areas, including stock analysis, portfolio management, cost of equity capital, financing strategies, business valuations, and spotting profit opportunities caused by general economic and specific company changes.