Originally published in 1951, this volume reprints the classic work written by one of the leading global econometricians.
Today, most money is credit money, created by commercial banks. While credit can finance innovation, excessive credit can lead to boom/bust cycles, such as the recent financial crisis. This highlights how the organization of our monetary system is crucial to stability. One way to achieve this is by separating the unit of account from the medium of exchange, and in pre-modern Europe such a separation existed. This new volume examines this idea of monetary separation and the history of monetary arrangements in the North and Baltic Seas region, from the Hanseatic League onwards. The book provides a theoretical analysis of four historical cases in the Baltic and North Seas region, with a view to examining the evolution of monetary arrangements from a new monetary economics perspective. Since the objective exchange value of money (its purchasing power) reflects subjective individual valuations of commodities, the author assesses these historical cases by means of exchange rates. Using theories from new monetary economics, the book explores how the units of account and their media of exchange evolved as social conventions, and offers new insight into the separation between the two. Through this exploration, it puts forward that money is a social institution, a clearing device for the settlement of accounts, and so the value of money, or of a separate unit of account, ultimately results from the size of its network of users. The History of Money and Monetary Arrangements offers a highly original insight into monetary arrangements as an evolutionary process. It will be of great interest to an international audience of scholars and students, including those with an interest in economic history, evolutionary economics and new monetary economics.
Machine learning (ML) is progressively reshaping the fields of quantitative finance and algorithmic trading. ML tools are increasingly adopted by hedge funds and asset managers, notably for alpha signal generation and stock selection. The technicality of the subject can make it hard for non-specialists to join the bandwagon, as the jargon and coding requirements may seem out of reach. Machine Learning for Factor Investing: R Version bridges this gap. It provides a comprehensive tour of modern ML-based investment strategies that rely on firm characteristics. The book covers a wide array of subjects, ranging from economic rationales to rigorous portfolio back-testing, and encompasses both data processing and model interpretability. Common supervised learning algorithms such as tree models and neural networks are explained in the context of style investing, and the reader can also dig into more complex techniques like autoencoders for asset returns, Bayesian additive trees, and causal models. All topics are illustrated with self-contained R code samples and snippets that are applied to a large public dataset containing over 90 predictors. The material, along with the content of the book, is available online so that readers can reproduce and enhance the examples at their convenience. If you have even a basic knowledge of quantitative finance, this combination of theoretical concepts and practical illustrations will help you learn quickly and deepen your financial and technical expertise.
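To make the workflow concrete, here is a minimal, hypothetical sketch (in Python rather than the book's R, and on simulated data rather than the book's public dataset) of the kind of supervised-learning exercise the blurb describes: mapping firm characteristics to forward returns with a tree-based model.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Simulated stand-in for a cross-section of ~90 firm characteristics and
# one-period-ahead returns; a real study would use actual stock-level data.
rng = np.random.default_rng(0)
n_stocks, n_features = 1000, 90
X = rng.normal(size=(n_stocks, n_features))                  # placeholder characteristics (size, value, momentum, ...)
y = 0.02 * X[:, 0] + rng.normal(scale=0.05, size=n_stocks)   # placeholder forward returns

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, max_depth=4, random_state=0)
model.fit(X_train, y_train)
print("out-of-sample R^2:", model.score(X_test, y_test))

A realistic application would add time-aware train/test splits, transaction costs and portfolio construction on top of such a fitted model.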
Practically all donor countries that give aid claim to do so on the basis of the recipient's good governance, but do these claims have a real impact on the allocation of aid? Are democratic, human rights-respecting countries with low levels of corruption and military expenditures actually likely to receive more aid than other countries?
This book focuses on quantitative survey methodology, data collection and cleaning methods. Providing starting tools for using and analyzing a file once a survey has been conducted, it addresses fields as diverse as advanced weighting, editing, and imputation, which are not well-covered in corresponding survey books. Moreover, it presents numerous empirical examples from the author's extensive research experience, particularly real data sets from multinational surveys.
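As a hedged illustration of the weighting machinery such a book builds on (a standard survey-sampling result, not a quotation from the text), the Horvitz-Thompson estimator weights each sampled unit by the inverse of its inclusion probability to estimate a population total:

\[ \hat{Y} = \sum_{i \in s} \frac{y_i}{\pi_i}, \]

where \(\pi_i\) is the probability that unit \(i\) enters the sample \(s\); calibration, non-response adjustment and imputation then modify these base weights.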
Quantitative Modeling of Derivative Securities demonstrates how to take the basic ideas of arbitrage theory and apply them - in a very concrete way - to the design and analysis of financial products. Based primarily (but not exclusively) on the analysis of derivatives, the book emphasizes relative-value and hedging ideas applied to different financial instruments. Using a "financial engineering approach," the theory is developed progressively, focusing on specific aspects of pricing and hedging and on the problems that the technical analyst or trader has to consider in practice. More than just an introductory text, the reader who has mastered the contents of this one book will have bridged the gap separating the novice from the technical and research literature.
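A textbook example of the relative-value reasoning described here (a standard no-arbitrage identity, not an excerpt from the book) is put-call parity for European options on a non-dividend-paying asset:

\[ C - P = S - K e^{-rT}, \]

where \(C\) and \(P\) are the call and put prices, \(S\) the spot price, \(K\) the strike, \(r\) the risk-free rate and \(T\) the time to maturity; a persistent violation of this identity can be monetised with a static hedge.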
This concise textbook presents students with all they need for advancing in mathematical economics. Detailed yet student-friendly, Vohra's book contains chapters on, amongst others, feasibility. Higher level undergraduates as well as postgraduate students in mathematical economics will find this book extremely useful in their development as economists.
"A Companion to Theoretical Econometrics" provides a comprehensive
reference to the basics of econometrics. It focuses on the
foundations of the field and at the same time integrates popular
topics often encountered by practitioners. The chapters are written
by international experts and provide up-to-date research in areas
not usually covered by standard econometric texts.
This book is an exceptional reference for readers who require
quick access to the foundation theories in this field. Chapters are
organized to provide clear information and to point to further
readings on the subject. Important topics covered include:
Bayesian Econometrics introduces the reader to the use of Bayesian methods in the field of econometrics at the advanced undergraduate or graduate level. The book is self-contained and does not require previous training in econometrics. The focus is on models used by applied economists and the computational techniques necessary to implement Bayesian methods when doing empirical work. It includes numerous numerical examples, and topics covered in the book include:
A website containing computer programs and data sets to help the student develop the computational skills of modern Bayesian econometrics can be found at: www.wiley.co.uk/koopbayesian
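At the core of the methods the book teaches is Bayes' rule, stated here generically rather than as an excerpt from the text: the posterior for parameters \(\theta\) given data \(y\) combines likelihood and prior,

\[ p(\theta \mid y) \propto p(y \mid \theta)\, p(\theta), \]

and the computational techniques referred to above (posterior simulators such as Gibbs sampling) are ways of drawing from this posterior when it has no closed form.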
VENKATARAMA KRISHNAN, PhD, is Professor Emeritus in the Department of Electrical Engineering at the University of Massachusetts Lowell. Previously, he has taught at the Indian Institute of Science, Polytechnic University, the University of Pennsylvania, Princeton University, Villanova University, and Smith College. He also worked for two years (1974-1976) as a senior systems analyst for Dynamics Research Corporation on estimation problems associated with navigation and guidance and continued as their consultant for more than a decade. Professor Krishnan's research interests include estimation of steady-state queue distributions, tomographic imaging, biosystems, and digital, aerospace, control, communications, and stochastic systems. As a senior member of IEEE, Dr. Krishnan has authored three other books in addition to technical publications.
The book examines the development and the dynamics of the personal distribution of income in Germany, Great Britain, Sweden, the United States and some other OECD countries. Starting with the distribution of labour income, the issue is then expanded to include all monetary incomes of private households and to adjust for household size by an equivalence scale. Some authors analyse one country in detail by decomposing aggregate inequality measures, while other authors focus on direct comparisons of some features of the income distribution in Germany with those in Great Britain or in the United States. The results suggest dominant influences of unemployment as well as of tax and transfer policies and different welfare regimes, but also show that our knowledge about distributional processes is still limited.
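For readers unfamiliar with equivalence scales, one common convention (a standard choice; the individual chapters may use others) divides household income by the square root of household size to obtain equivalised income per person:

\[ y^{eq} = \frac{y_{hh}}{\sqrt{n}}, \]

so that a household of four is assumed to need twice, not four times, the income of a single person to reach the same living standard.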
Originally published in 1991. The dilemma of solid and hazardous waste disposal in an environmentally safe manner has become a global problem. This book presents a modern approach to economic and operations research modelling in urban and regional waste management with an international perspective. Location and space economics are discussed along with transportation, technology, health hazards, capacity levels, political realities and the linkage with general global economic systems. The algorithms and models developed are then applied to two major cities in the world by way of case study example of the use of these systems.
Collecting and analyzing data on unemployment, inflation, and inequality help describe the complex world around us. When published by the government, such data are called official statistics. They are reported by the media, used by politicians to lend weight to their arguments, and by economic commentators to opine about the state of society. Despite such widescale use, explanations of how these measures are constructed are seldom provided for the non-technical reader. Measuring Society is a short, accessible guide to six topics: jobs, house prices, inequality, prices for goods and services, poverty, and deprivation. Each relates to concepts we use on a personal level to form an understanding of the society in which we live: we need a job, a place to live, and food to eat. Using data from the United States, we answer three basic questions: why, how, and for whom these statistics have been constructed. We add some context and flavor by discussing the historical background. This book provides the reader with a good grasp of these measures. Chaitra H. Nagaraja is an Associate Professor of Statistics at the Gabelli School of Business at Fordham University in New York. Her research interests include house price indices and inequality measurement. Prior to Fordham, Dr. Nagaraja was a researcher at the U.S. Census Bureau. While there, she worked on projects relating to the American Community Survey.
Originally published in 1979. An input/output database is an information system carrying current data on the intermediate consumption of any product or service by all the specified major firms that consume it. This book begins with a survey of how the interrelationships of an economic system can be represented in a two-dimensional model which traces the output of each economic sector to all other sectors. It discusses how the use of such databases to identify major buyers and sellers can illuminate problems of economic policy at the national, regional, and corporate level and aid in analyzing factors affecting the control of inflation, energy use, transportation, and environmental pollution. The book also considers how advances in database technology have brought to the fore such issues as the right to individual privacy, corporate secrecy, the public's right of access to stored data, and the use of such information for national planning in a free-enterprise society.
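The two-dimensional representation described here corresponds to the standard open input-output (Leontief) model, stated generically rather than as an excerpt from the book: with \(A\) the matrix of technical coefficients (intermediate consumption per unit of output) and \(f\) the vector of final demand, gross outputs \(x\) satisfy

\[ x = A x + f \quad\Longrightarrow\quad x = (I - A)^{-1} f, \]

which is why a database of intermediate consumption flows is enough to trace how a change in final demand propagates through every sector.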
Originally published in 1970, with a second edition in 1989. Empirical Bayes methods use some of the apparatus of the pure Bayes approach, but an actual prior distribution is assumed to generate the data sequence; it can be estimated, thus producing empirical Bayes estimates or decision rules. In this second edition, details are provided of the derivation and the performance of empirical Bayes rules for a variety of special models. Attention is given to the problem of assessing the goodness of an empirical Bayes estimator for a given set of prior data. Chapters also focus on alternatives to the empirical Bayes approach and actual applications of empirical Bayes methods.
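As a standard illustration of the approach (not taken from the book), suppose \(X_i \mid \theta_i \sim N(\theta_i, 1)\) with the \(\theta_i\) drawn from an unknown prior \(N(0, \tau^2)\). The Bayes estimate of each mean is

\[ \hat{\theta}_i = \frac{\tau^2}{1 + \tau^2}\, X_i, \]

and since marginally \(X_i \sim N(0, 1 + \tau^2)\), the prior variance can itself be estimated from the data, for example \(\hat{\tau}^2 = \max\!\big(0, \tfrac{1}{n}\sum_i X_i^2 - 1\big)\), giving an empirical Bayes rule.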
Originally published in 1960 and 1966. This is an elementary introduction to the sources of economic statistics and their uses in answering economic questions. No mathematical knowledge is assumed, and no mathematical symbols are used. The book shows - by asking and answering a number of typical questions of applied economics - what the most useful statistics are, where they are found, and how they are to be interpreted and presented. The reader is introduced to the major British, European and American official sources, to the social accounts, to index numbers and averaging, and to elementary aids to inspection such as moving averages and scatter diagrams.
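Two of the elementary tools mentioned, written out generically rather than as excerpts from the book: a centred moving average smooths a series as

\[ \bar{x}_t = \frac{1}{2m+1} \sum_{j=-m}^{m} x_{t+j}, \]

and a Laspeyres price index compares the cost of a fixed base-period basket at current and base prices,

\[ P_t = \frac{\sum_i p_{i,t}\, q_{i,0}}{\sum_i p_{i,0}\, q_{i,0}} \times 100. \]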
The need for analytics skills is driving burgeoning growth in the number of analytics and decision science programs in higher education, developed to feed the need for capable employees in this area. The very size and continuing growth of this need means that there is still space for new program development. Schools wishing to pursue business analytics programs must intentionally assess the maturity level of their programs and take steps to close the gap. Teaching Data Analytics: Pedagogy and Program Design is a reference for faculty and administrators seeking direction about adding or enhancing analytics offerings at their institutions. It provides guidance by examining best practices from the perspectives of faculty and practitioners. By emphasizing the connection of data analytics to organizational success, it reviews the position of analytics and decision science programs in higher education and the critical connection between this area of study and career opportunities. The book features a variety of perspectives ranging from the scholarly theoretical to the practitioner applied; an in-depth look into a wide breadth of skills, from the closely technology-focused to robustly soft human connection skills; and resources for existing faculty to acquire and maintain additional analytics-relevant skills that can enrich their current course offerings. Acknowledging the dichotomy between data analytics and data science, this book emphasizes data analytics rather than data science, although it does touch upon the data science realm. Starting with industry perspectives, the book covers the applied world of data analytics, covering necessary skills and applications, as well as developing compelling visualizations. It then dives into pedagogical and program design approaches in data analytics education and concludes with ideas for program design tactics. This reference is a launching point for discussions about how to connect industry's need for skilled data analysts to higher education's need to design a rigorous curriculum that promotes student critical thinking, communication, and ethical skills. It also provides insight into adding new elements to existing data analytics courses and into taking the next step in adding data analytics offerings, whether that means incorporating additional analytics assignments into existing courses, offering a single course designed for undergraduates, or building an integrated program designed for graduate students.
You may like...
Design and Analysis of Time Series… - Richard McCleary, David McDowall, … (Hardcover, R3,286)
Financial and Macroeconomic… - Francis X. Diebold, Kamil Yilmaz (Hardcover, R3,567)
Introduction to Computational Economics… - Hans Fehr, Fabian Kindermann (Hardcover, R4,258)
Tax Policy and Uncertainty - Modelling… - Christopher Ball, John Creedy, … (Hardcover, R2,987)
Agent-Based Modeling and Network… - Akira Namatame, Shu-Heng Chen (Hardcover, R2,970)
Pricing Decisions in the Euro Area - How… - Silvia Fabiani, Claire Loupias, … (Hardcover, R2,160)
Handbook of Experimental Game Theory - C. M. Capra, Rachel T. A. Croson, … (Hardcover, R7,224)