Born of a belief that economic insights should not require much mathematical sophistication, this book proposes novel and parsimonious methods to incorporate ignorance and uncertainty into economic modeling, without complex mathematics. Economics has made great strides over the past several decades in modeling agents' decisions when they are incompletely informed, but many economists believe that there are aspects of these models that are less than satisfactory. Among the concerns are that ignorance is not captured well in most models, that agents' presumed cognitive ability is implausible, and that derived optimal behavior is sometimes driven by the fine details of the model rather than the underlying economics. Compte and Postlewaite lay out a tractable way to address these concerns, and to incorporate plausible limitations on agents' sophistication. A central aspect of the proposed methodology is to restrict the strategies assumed available to agents.
Over the past two decades, experimental economics has moved from a fringe activity to become a standard tool for empirical research. With experimental economics now regarded as part of the basic tool-kit for applied economics, this book demonstrates how controlled experiments can be useful in providing evidence relevant to economic research. Professors Jacquemet and L'Haridon take the standard model in applied econometrics as a basis for the methodology of controlled experiments, and illustrate the methodological discussions with standard experimental results. The book provides future experimental practitioners with the means to construct experiments that fit their research question, and newcomers with an understanding of the strengths and weaknesses of controlled experiments. Graduate students and academic researchers working in the field of experimental economics will learn how to undertake, understand and criticise empirical research based on lab experiments, and to refer to specific experiments, results and designs, complemented with case-study applications.
Computational finance is increasingly important in the financial industry, as a necessary instrument for applying theoretical models to real-world challenges. Indeed, many models used in practice involve complex mathematical problems for which an exact or closed-form solution is not available. Consequently, we need to rely on computational techniques and specific numerical algorithms. This book combines theoretical concepts with practical implementation. Furthermore, the numerical solution of models is exploited both to enhance the understanding of some mathematical and statistical notions and to acquire sound programming skills in MATLAB (R), skills that are also useful in other programming languages. The material assumes only a relatively limited knowledge of mathematics, probability, and statistics; hence, the book contains a short description of the fundamental tools needed to address the two main fields of quantitative finance: portfolio selection and derivatives pricing. Both fields are developed here, with a particular emphasis on portfolio selection, where the author includes an overview of recent approaches. The book gradually takes the reader from a basic to an intermediate level of expertise, using examples and exercises to simplify the understanding of complex models in finance and giving readers the ability to place financial models in a computational setting. The book is ideal for courses focusing on quantitative finance, asset management, mathematical methods for economics and finance, investment banking, and corporate finance.
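As a purely illustrative aside (not taken from the book, which works in MATLAB), the short Python sketch below shows the kind of portfolio-selection computation the blurb refers to: closed-form minimum-variance weights for an invented three-asset covariance matrix.

```python
# Illustrative only: a classical minimum-variance portfolio computed with NumPy.
# The covariance matrix below is made up for the example; the book itself works
# in MATLAB and covers far richer selection models.
import numpy as np

# Hypothetical annualised covariance matrix for three assets
sigma = np.array([
    [0.040, 0.006, 0.012],
    [0.006, 0.025, 0.004],
    [0.012, 0.004, 0.060],
])

ones = np.ones(sigma.shape[0])
sigma_inv = np.linalg.inv(sigma)

# Closed-form minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
weights = sigma_inv @ ones / (ones @ sigma_inv @ ones)
portfolio_variance = weights @ sigma @ weights

print("weights:", np.round(weights, 4))
print("portfolio std dev:", np.sqrt(portfolio_variance).round(4))
```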
This is a thorough exploration of the models and methods of financial econometrics by one of the world's leading financial econometricians and is for students in economics, finance, statistics, mathematics, and engineering who are interested in financial applications. Based on courses taught around the world, the up-to-date content covers developments in econometrics and finance over the last twenty years while ensuring a solid grounding in the fundamental principles of the field. Care has been taken to link theory and application to provide real-world context for students. Worked exercises and empirical examples have also been included to make sure complicated concepts are solidly explained and understood.
This second edition retains the positive features of being clearly written, well organized, and incorporating calculus in the text, while adding expanded coverage of game theory, experimental economics, and behavioural economics. It remains more focused and manageable than similar textbooks, and provides a concise yet comprehensive treatment of the core topics of microeconomics, including theories of the consumer and of the firm, market structure, partial and general equilibrium, and market failures caused by public goods, externalities and asymmetric information. The book includes helpful solved problems in all the substantive chapters, as well as over seventy new mathematical exercises and enhanced versions of the ones in the first edition. The authors make full use of the book's color printing, with sharp and helpful graphs and illustrations. This mathematically rigorous textbook is meant for students at the intermediate level who have already had an introductory course in microeconomics and a calculus course.
Game theory has revolutionised our understanding of industrial organisation and the traditional theory of the firm. Despite these advances, industrial economists have tended to rely on a restricted set of tools from game theory, focusing on static and repeated games to analyse firm structure and behaviour. Luca Lambertini, a leading expert on the application of differential game theory to economics, argues that many dynamic phenomena in industrial organisation (such as monopoly, oligopoly, advertising, R&D races) can be better understood and analysed through the use of differential games. After illustrating the basic elements of the theory, Lambertini guides the reader through the main models, spanning from optimal control problems describing the behaviour of a monopolist through to oligopoly games in which firms' strategies include prices, quantities and investments. This approach will be of great value to students and researchers in economics and those interested in advanced applications of game theory.
Random set theory is a fascinating branch of mathematics that amalgamates techniques from topology, convex geometry, and probability theory. Social scientists routinely conduct empirical work with data and modelling assumptions that reveal a set to which the parameter of interest belongs, but not its exact value. Random set theory provides a coherent mathematical framework to conduct identification analysis and statistical inference in this setting and has become a fundamental tool in econometrics and finance. This is the first book dedicated to the use of the theory in econometrics, written to be accessible for readers without a background in pure mathematics. Molchanov and Molinari define the basics of the theory and illustrate the mathematical concepts by their application in the analysis of econometric models. The book includes sets of exercises to accompany each chapter as well as examples to help readers apply the theory effectively.
Interest in the skew-normal and related families of distributions has grown enormously in recent years, as theory has advanced, data challenges have grown, and computational tools have made substantial progress. This comprehensive treatment, blending theory and practice, will be the standard resource for statisticians and applied researchers. Assuming only basic knowledge of (non-measure-theoretic) probability and statistical inference, the book is accessible to the wide range of researchers who use statistical modelling techniques. Guiding readers through the main concepts and results, it covers both the probability and the statistics sides of the subject, in the univariate and multivariate settings. The theoretical development is complemented by numerous illustrations and applications to a range of fields including quantitative finance, medical statistics, environmental risk studies, and industrial and business efficiency. The author's R package sn, freely available from CRAN, equips readers to put the methods into action with their own data.
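As a hedged illustration only: the book's companion software is the R package sn, but readers working in Python can get a feel for the same distribution family with scipy.stats.skewnorm, as in the sketch below (simulated data, invented parameter values).

```python
# Illustrative only: fitting a skew-normal distribution in Python with SciPy.
# The book's companion software is the R package sn; scipy.stats.skewnorm is
# used here purely as a stand-in, and the parameter values are invented.
from scipy import stats

# Simulate data from a skew-normal with shape (slant) parameter a = 4
a_true, loc_true, scale_true = 4.0, 1.0, 2.0
sample = stats.skewnorm.rvs(a_true, loc=loc_true, scale=scale_true,
                            size=2000, random_state=12345)

# Maximum-likelihood fit of the three parameters (shape, location, scale)
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(sample)
print(f"fitted shape={a_hat:.2f}, location={loc_hat:.2f}, scale={scale_hat:.2f}")
```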
In contrast to mainstream economics, complexity theory conceives the economy as a complex system of heterogeneous interacting agents characterised by limited information and bounded rationality. Agent Based Models (ABMs) are the analytical and computational tools developed by the proponents of this emerging methodology. Aimed at students and scholars of contemporary economics, this book includes a comprehensive toolkit for agent-based computational economics, now quickly becoming the new way to study evolving economic systems. Leading scholars in the field explain how ABMs can be applied fruitfully to many real-world economic examples and represent a great advancement over mainstream approaches. The essays discuss the methodological bases of agent-based approaches and demonstrate step-by-step how to build, simulate and analyse ABMs and how to validate their outputs empirically using the data. They also present a wide set of applications of these models to key economic topics, including the business cycle, labour markets, and economic growth.
Structural vector autoregressive (VAR) models are important tools for empirical work in macroeconomics, finance, and related fields. This book not only reviews the many alternative structural VAR approaches discussed in the literature, but also highlights their pros and cons in practice. It provides guidance to empirical researchers on the most appropriate modeling choices and on methods for estimating and evaluating structural VAR models. The book traces the evolution of the structural VAR methodology and contrasts it with other common methodologies, including dynamic stochastic general equilibrium (DSGE) models. It is intended as a bridge between the often quite technical econometric literature on structural VAR modeling and the needs of empirical researchers. The focus is not on providing the most rigorous theoretical arguments, but on enhancing the reader's understanding of the methods in question and their assumptions. Empirical examples are provided for illustration.
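For readers who want a concrete starting point, the sketch below fits a small recursively identified VAR in Python with statsmodels on simulated data. It is an illustrative stand-in, not the book's own code, and the Cholesky ordering used by irf() is only one of the many identification schemes the book surveys.

```python
# Illustrative only: a small recursively identified VAR fit to simulated data.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)

# Simulate two loosely related series to stand in for macro data
T = 300
shocks = rng.normal(size=(T, 2))
y = np.zeros((T, 2))
for t in range(1, T):
    y[t, 0] = 0.5 * y[t - 1, 0] + 0.1 * y[t - 1, 1] + shocks[t, 0]
    y[t, 1] = 0.2 * y[t - 1, 0] + 0.3 * y[t - 1, 1] + shocks[t, 1]

data = pd.DataFrame(y, columns=["output", "inflation"])

# Fit a VAR(1) and compute orthogonalized impulse responses (Cholesky ordering)
results = VAR(data).fit(1)
irf = results.irf(10)

print(results.summary())
print(irf.orth_irfs.shape)  # (horizons + 1, n_vars, n_vars)
```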
Now in its fifth edition, this book offers a detailed yet concise introduction to the growing field of statistical applications in finance. The reader will learn the basic methods for evaluating option contracts, analyzing financial time series, selecting portfolios and managing risks based on realistic assumptions about market behavior. The focus is both on the fundamentals of mathematical finance and financial time series analysis, and on applications to specific problems concerning financial markets, thus making the book the ideal basis for lectures, seminars and crash courses on the topic. All numerical calculations are transparent and reproducible using quantlets. For this new edition the book has been updated and extensively revised, and now includes several new aspects such as neural networks, deep learning, and crypto-currencies. Both R and Matlab code, together with the data, can be downloaded from the book's product page and the Quantlet platform. The Quantlet platform (quantlet.de, quantlet.com, quantlet.org) is an integrated QuantNet environment consisting of different types of statistics-related documents and program codes. Its goal is to promote reproducibility and offer a platform for sharing validated knowledge native to the social web. QuantNet and the corresponding Data-Driven Documents-based visualization allow readers to reproduce the tables, pictures and calculations inside this Springer book. "This book provides an excellent introduction to the tools from probability and statistics necessary to analyze financial data. Clearly written and accessible, it will be very useful to students and practitioners alike." (Yacine Ait-Sahalia, Otto Hack 1903 Professor of Finance and Economics, Princeton University)
This is the first of two volumes containing papers and commentaries presented at the Eleventh World Congress of the Econometric Society, held in Montreal, Canada, in August 2015. These papers provide state-of-the-art guides to the most important recent research in economics. The book includes surveys and interpretations of key developments in economics and econometrics, and discussion of future directions for a wide variety of topics, covering both theory and application. These volumes provide a unique, accessible survey of progress in the discipline, written by leading specialists in their fields. The first volume includes theoretical and applied papers addressing topics such as dynamic mechanism design, agency problems, and networks.
Factor models have become the most successful tool in the analysis and forecasting of high-dimensional time series. This monograph provides an extensive account of the so-called General Dynamic Factor Model methods. The topics covered include: asymptotic representation problems, estimation, forecasting, identification of the number of factors, identification of structural shocks, volatility analysis, and applications to macroeconomic and financial data.
A number of clubs in professional sports leagues exhibit winning streaks over a number of consecutive seasons that do not conform to the standard economic model of a professional sports league developed by El Hodiri and Quirk (1994) and Fort and Quirk (1995). These clubs seem to display what we call "unsustainable runs," defined as a period of two to four seasons during which the club acquires expensive talent and attempts to win a league championship despite not having the market size to sustain such a competitive position in the long run. The standard model predicts that clubs located in large economic markets will tend to acquire more talent and achieve more success on the field and at the box office than clubs located in small markets. This book builds a model that allows for unsustainable runs yet retains most of the features of the standard model, and then subjects it to empirical verification. The new model developed in the book has as its central feature the ability to generate two equilibria for a club under certain conditions. In the empirical sections of the book, we use time-series analysis to test for the presence of unsustainable runs using historical data from the National Football League (NFL), the National Basketball Association (NBA), the National Hockey League (NHL) and Major League Baseball (MLB). The multiple-equilibria model retains all of the features of the standard model of a professional sports league that is accepted almost universally by economists, yet it offers a much richer approach by including an exploration of the effects of revenues earned at the league level (television, apparel, naming rights, etc.) and then shared by all of the member clubs. This makes the book unique and of great interest to scholars in a variety of fields in economics.
This book presents eleven classic papers by the late Professor Suzanne Scotchmer, with introductions by leading economists and legal scholars. It introduces Scotchmer's life and work; analyses her pioneering contributions to the economics of patents and innovation incentives, with a special focus on the modern theory of cumulative innovation; and describes her influential work on law and economics, evolutionary game theory, and general equilibrium/club theory. The book also provides a self-contained introduction for students who want to learn more about the various fields Professor Scotchmer worked in, with a particular focus on patent incentives and cumulative innovation.
This book develops a machine-learning framework for predicting economic growth. It can also be considered a primer on using machine learning (also known as data mining or data analytics) to answer economic questions. While machine learning itself is not a new idea, advances in computing technology, combined with a dawning realization of its applicability to economic questions, make it a new tool for economists.
Institutions are the formal or informal 'rules of the game' that facilitate economic, social, and political interactions. These include such things as legal rules, property rights, constitutions, political structures, and norms and customs. The main theoretical insights from Austrian economics regarding private property rights and prices, entrepreneurship, and spontaneous order mechanisms play a key role in advancing institutional economics. The Austrian economics framework provides an understanding of which institutions matter for growth, how they matter, and how they emerge and can change over time. Specifically, Austrians have contributed significantly to the areas of institutional stickiness and informal institutions, self-governance and self-enforcing contracts, institutional entrepreneurship, and the political infrastructure for development.
This volume presents classical results of the theory of enlargement of filtration. The focus is on the behavior of martingales with respect to the enlarged filtration and related objects. The study is conducted in various contexts, including immersion, progressive enlargement with a random time and initial enlargement with a random variable. The aim of this book is to collect the main mathematical results (with proofs) previously spread among numerous papers, a great part of which is available only in French. Many examples and applications to finance, in particular to credit risk modelling and the study of asymmetric information, are provided to illustrate the theory. A detailed summary of further connections and applications is given in the bibliographic notes, which enable readers to deepen their study of the topic. This book fills a gap in the literature and serves as a guide for graduate students and researchers interested in the role of information in financial mathematics and in econometric science. A basic knowledge of the general theory of stochastic processes is assumed as a prerequisite.
This comprehensive book is an introduction to multilevel Bayesian models in R using brms and the Stan programming language. Featuring a series of fully worked analyses of repeated-measures data, it places its focus on active learning through the analysis of the progressively more complicated models presented throughout the book. The authors offer an introduction to statistics entirely focused on repeated-measures data, beginning with very simple two-group comparisons and ending with multinomial regression models with many 'random effects'. Across 13 well-structured chapters, readers are provided with all the code necessary to run the analyses and make the plots in the book, as well as useful examples of how to interpret and write up their own analyses. The book provides an accessible introduction for readers in any field, with any level of statistical background. Senior undergraduate students, graduate students, and experienced researchers looking to 'translate' their skills with more traditional models to a Bayesian framework will benefit greatly from the lessons in this text.
Explosive growth in computing power has made Bayesian methods for infinite-dimensional models - Bayesian nonparametrics - a nearly universal framework for inference, finding practical use in numerous subject areas. Written by leading researchers, this authoritative text draws on theoretical advances of the past twenty years to synthesize all aspects of Bayesian nonparametrics, from prior construction to computation and large sample behavior of posteriors. Because understanding the behavior of posteriors is critical to selecting priors that work, the large sample theory is developed systematically, illustrated by various examples of model and prior combinations. Precise sufficient conditions are given, with complete proofs, that ensure desirable posterior properties and behavior. Each chapter ends with historical notes and numerous exercises to deepen and consolidate the reader's understanding, making the book valuable for both graduate students and researchers in statistics and machine learning, as well as in application areas such as econometrics and biostatistics.
Most academic and policy commentary represents adverse selection as a severe problem in insurance, which should always be deprecated, avoided or minimised. This book gives a contrary view. It details the exaggeration of adverse selection in insurers' rhetoric and insurance economics, and presents evidence that in many insurance markets, adverse selection is weaker than most commentators suggest. A novel arithmetical argument shows that from a public policy perspective, 'weak' adverse selection can be a good thing. This is because a degree of adverse selection is needed to maximise 'loss coverage', the expected fraction of the population's losses which is compensated by insurance. This book will be valuable for those interested in public policy arguments about insurance and discrimination: academics (in economics, law and social policy), policymakers, actuaries, underwriters, disability activists, geneticists and other medical professionals.
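As a hedged illustration of the quantity this argument turns on (the notation below is ours, not necessarily the book's), loss coverage as defined in the blurb can be written as a simple ratio:

```latex
\[
\text{Loss coverage} \;=\;
\frac{\mathbb{E}\!\left[\text{population losses compensated by insurance}\right]}
     {\mathbb{E}\!\left[\text{total population losses}\right]}
\]
```

The blurb's point is that, because higher-risk individuals account for a disproportionate share of expected losses, a degree of adverse selection that shifts coverage towards them can raise this ratio even if overall take-up of insurance falls.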
One of the major problems of macroeconomic theory is the way in which people exchange goods in decentralized market economies. There are major disagreements among macroeconomists regarding tools to influence required outcomes. Since mainstream efficient-market theory fails to provide an internally coherent framework, there is a need for an alternative theory. The book provides an innovative approach to the analysis of agent-based models populated by heterogeneous and interacting agents in the field of financial fragility. The text is divided into two parts: the first presents analytical developments of stochastic aggregation and macro-dynamics inference methods; the second introduces macroeconomic models of financial fragility for complex systems populated by heterogeneous and interacting agents. The concepts of financial fragility and macroeconomic dynamics are explained in detail in separate chapters. The statistical physics approach is applied to explain theories of macroeconomic modelling and inference.
Statistics for Business is meant as a textbook for students in business, computer science, bioengineering, environmental technology, and mathematics. In recent years, business statistics has been used widely for decision making in business endeavours. The book emphasizes statistical applications, statistical model building, and manual solution methods. Special features: the text is designed for self-study; for most of the methods, the required algorithm is clearly explained using flow charts; more than 200 solved problems are provided; and more than 175 end-of-chapter exercises with answers are included. This allows teachers ample flexibility in adapting the textbook to their individual class plans. The book is meant for beginners and advanced learners as a text in Statistics for Business or Applied Statistics for undergraduate and graduate students.
Davidson and MacKinnon have written an outstanding textbook for graduate students in econometrics, covering both basic and advanced topics and using geometrical proofs throughout for clarity of exposition. The book offers a unified theoretical perspective, and emphasizes the practical applications of modern theory.
Originally published in 1931, this book was written to provide actuarial students with a guide to mathematics, with information on elementary trigonometry, finite differences, summation, differential and integral calculus, and probability. Examples are included throughout. This book will be of value to anyone with an interest in actuarial practice and its relationship with aspects of mathematics.