In the twentieth century, Americans thought of the United States as a land of opportunity and equality. To what extent and for whom this was true was, of course, a matter of debate; especially during the Cold War, however, many Americans clung to the patriotic conviction that America was the land of the free. At the same time, another national ideal emerged that was far less contentious, that arguably came to subsume the ideals of freedom, opportunity, and equality, and that eventually embodied an unspoken consensus about what constitutes the good society in a postmodern setting. This was the ideal of choice, broadly understood as the proposition that the good society provides individuals with the power to shape the contours of their lives in ways that suit their personal interests, idiosyncrasies, and tastes. By the closing decades of the century, Americans were widely agreed that theirs was, or at least should be, the land of choice. In A Destiny of Choice?, David Blanke and David Steigerwald bring together important scholarship on the tension between two leading interpretations of modern American consumer culture: that modern consumerism reflects the social, cultural, economic, and political changes that accompanied the country's transition from a local, producer economy dominated by limited choices and restricted credit to a national consumer marketplace based on the individual selection of mass-produced, mass-advertised, and mass-distributed goods. This debate is central to the economic difficulties seen in the United States today.
This two-volume set is a collection of 30 classic papers presenting ideas that have now become standard in the field of Bayesian inference. Topics covered include the central field of statistical inference as well as applications to areas of probability theory, information theory, utility theory and computational theory. It is organized into seven sections: foundations; information theory and prior distributions; robustness and outliers; hierarchical, multivariate and non-parametric models; asymptotics; computations and Monte Carlo methods; and Bayesian econometrics.
High-Performance Computing (HPC) delivers higher computational performance to solve problems in science, engineering and finance. Various HPC resources are available for different needs, ranging from cloud computing, which can be used without much expertise or expense, to more tailored hardware such as Field-Programmable Gate Arrays (FPGAs) or D-Wave's quantum computer systems. High-Performance Computing in Finance is the first book to provide a state-of-the-art introduction to HPC for finance, capturing both academically and practically relevant problems.
Thoroughly classroom tested, this introductory text covers all the topics that constitute a foundation for basic econometrics, with concise and intuitive explanations of technical material. Important proofs are shown in detail; however, the focus is on developing regression models and understanding the residuals.
World Inequality Report 2018 is the most authoritative and up-to-date account of global trends in inequality. Researched, compiled, and written by a team of the world’s leading economists of inequality, it presents—with unrivaled clarity and depth—information and analysis that will be vital to policy makers and scholars everywhere. Inequality has taken center stage in public debate as the wealthiest people in most parts of the world have seen their share of the economy soar relative to that of others, many of whom, especially in the West, have experienced stagnation. The resulting political and social pressures have posed harsh new challenges for governments and created a pressing demand for reliable data. The World Inequality Lab at the Paris School of Economics and the University of California, Berkeley, has answered this call by coordinating research into the latest trends in the accumulation and distribution of income and wealth on every continent. This inaugural report analyzes the Lab’s findings, which include data from major countries where information has traditionally been difficult to acquire, such as China, India, and Brazil. Among nations, inequality has been decreasing as traditionally poor countries’ economies have caught up with the West. The report shows, however, that inequality has been steadily deepening within almost every nation, though national trajectories vary, suggesting the importance of institutional and policy frameworks in shaping inequality. World Inequality Report 2018 will be a key document for anyone concerned about one of the most imperative and contentious subjects in contemporary politics and economics.
It is increasingly common for analysts to seek out the opinions of individuals and organizations using attitudinal scales such as degree of satisfaction or importance attached to an issue. Examples include levels of obesity, seriousness of a health condition, attitudes towards service levels, opinions on products, voting intentions, and the degree of clarity of contracts. Ordered choice models provide a relevant methodology for capturing the sources of influence that explain the choice made amongst a set of ordered alternatives. The methods have evolved to a level of sophistication that can allow for heterogeneity in the threshold parameters, in the explanatory variables (through random parameters), and in the decomposition of the residual variance. This book brings together contributions in ordered choice modeling from a number of disciplines, synthesizing developments over the last fifty years, and suggests useful extensions to account for the wide range of sources of influence on choice.
Non-market valuation has become a broadly accepted and widely practiced means of measuring the economic values of the environment and natural resources. In this book, the authors provide a guide to the statistical and econometric practices that economists employ in estimating non-market values. The authors develop the econometric models that underlie the basic methods: contingent valuation, travel cost models, random utility models and hedonic models. They analyze the measurement of non-market values as a procedure with two steps: the estimation of parameters of demand and preference functions and the calculation of benefits from the estimated models. Each of the models is carefully developed from the preference function to the behavioral or response function that researchers observe. The models are then illustrated with datasets that characterize the kinds of data researchers typically deal with. The real world data and clarity of writing in this book will appeal to environmental economists, students, researchers and practitioners in multilateral banks and government agencies.
Innovation remains an arduous and painful process for many companies, doing untold damage to brands, profitability, and careers. Some have used line extensions to mitigate risk, but all too often they have ended up extending the core brand into oblivion. Others have used test markets to help gauge opinion before a national rollout, only to have competitors snatch ideas and undermine results. Given the problems with conventional approaches, it's not surprising that 90% of new products and services fail. Market New Products Successfully is the definitive guidebook for using simulated test marketing (STM), a technology that can help companies dramatically improve the odds of introducing a successful new product or service. The book examines why STM is important, what the differences are between the major systems, how to do a simulation, and what insights it offers a marketing plan. It is the ultimate guidebook for any smart marketer looking to improve the financial outcome of the innovation process.
Statistics is used in two senses, singular and plural. In the singular, it refers to the whole subject of statistics as a branch of knowledge. In the plural sense, it relates to numerical facts: data gathered systematically with some definite object in view. Thus, statistics is the science which deals with the collection, analysis and interpretation of data. An understanding of the logic and theory of statistics is essential for students of agriculture, who are expected to know the techniques of analyzing numerical data and drawing useful conclusions. It is the intention of the author to keep this practical manual at a readability level appropriate for students who do not have a mathematical background. The book has been prepared for students and teachers alike, to acquaint them with the basic concepts of statistical principles and procedures of calculation, as per the syllabi of the 5th Dean's Committee of ICAR for undergraduate courses in agriculture and allied sciences.
This major volume of essays by Kenneth F. Wallis features 28 articles published over a quarter of a century on the statistical analysis of economic time series, large-scale macroeconometric modelling, and the interface between them. The first part deals with time-series econometrics and includes significant early contributions to the development of the LSE tradition in time-series econometrics, which is the dominant British tradition and has considerable influence worldwide. Later sections discuss theoretical and practical issues in modelling seasonality and forecasting with applications in both large-scale and small-scale models. The final section summarizes the research programme of the ESRC Macroeconomic Modelling Bureau, a unique comparison project among economy-wide macroeconometric models. Professor Wallis has written a detailed introduction to the papers in this volume in which he explains the background to these papers and comments on subsequent developments.
The present book has been prepared to meet the requirements of students of Animal and Veterinary Science, Animal Biotechnology and other related fields. It will serve as a textbook not only for students in veterinary science but also for those who want to know what statistics is all about, or who need to be familiar with at least the language and fundamental concepts of statistics. It will also build the necessary background for those who will take more advanced courses in statistics, including specialized applications. The salient features are: the book has been designed in accordance with the new VCI syllabus, 2016 (MSVE-2016); it will be very useful for students of SAUs/ICAR institutes and those preparing for JRF/SRF and various competitive examinations; each chapter contains complete, self-explanatory theory and a fair number of solved examples; solved examples for each topic are presented in an elegant and engaging way to make them easy to understand; and the subject matter has been explained simply, so that students can easily understand it and feel encouraged to solve the unsolved problems themselves.
This book develops the theory of productivity measurement using the empirical index number approach. The theory uses multiplicative indices and additive indicators as measurement tools, instead of relying on the usual neo-classical assumptions, such as the existence of a production function characterized by constant returns to scale, optimizing behavior of the economic agents, and perfect foresight. The theory can be applied to all the common levels of aggregation (micro, meso, and macro), and half of the book is devoted to accounting for the links existing between the various levels. Basic insights from National Accounts are thereby used. The final chapter is devoted to the decomposition of productivity change into the contributions of efficiency change, technological change, scale effects, and input or output mix effects. Applications on real-life data demonstrate the empirical feasibility of the theory. The book is directed to a variety of overlapping audiences: statisticians involved in measuring productivity change; economists interested in growth accounting; researchers relating macro-economic productivity change to its industrial sources; enterprise micro-data researchers; and business analysts interested in performance measurement.
This book presents a unique collection of contributions on modern topics in statistics and econometrics, written by leading experts in the respective disciplines and their intersections. It addresses nonparametric statistics and econometrics, quantiles and expectiles, and advanced methods for complex data, including spatial and compositional data, as well as tools for empirical studies in economics and the social sciences. The book was written in honor of Christine Thomas-Agnan on the occasion of her 65th birthday. Given its scope, it will appeal to researchers and PhD students in statistics and econometrics alike who are interested in the latest developments in their field.
Social media has made charts, infographics and diagrams ubiquitous, and easier to share than ever. While such visualisations can better inform us, they can also deceive by displaying incomplete or inaccurate data or suggesting misleading patterns, or misinform simply by being poorly designed. Many of us are ill-equipped to interpret the visuals that politicians, journalists, advertisers and even employers present each day, enabling bad actors to easily manipulate visuals to promote their own agendas. Public conversations are increasingly driven by numbers, and to make sense of them we must be able to decode and use visual information. By examining contemporary examples ranging from election-result infographics to global GDP maps and box-office record charts, How Charts Lie teaches us how to do just that.
This volume of Advances in Econometrics contains articles that examine key topics in the modeling and estimation of dynamic stochastic general equilibrium (DSGE) models. Because DSGE models combine micro- and macroeconomic theory with formal econometric modeling and inference, over the past decade they have become an established framework for analyzing a variety of issues in empirical macroeconomics. The research articles make contributions in several key areas in DSGE modeling and estimation. In particular, papers cover the modeling and role of expectations, the study of optimal monetary policy in two-country models, and the problem of non-invertibility. Other interesting areas of inquiry include the analysis of parameter identification in new open economy macroeconomic models and the modeling of trend inflation shocks. The second part of the volume is devoted to articles that offer innovations in econometric methodology. These papers advance new techniques for addressing major inferential problems and include discussion and applications of Laplace-type, frequency domain, empirical likelihood and method of moments estimators.
This monograph addresses the methodological and empirical issues relevant for the development of sustainable agriculture, with a particular focus on Eastern Europe. It relates economic growth to the other dimensions of sustainability by applying integrated methods. The book comprises five chapters dedicated to the theoretical approaches towards sustainable rural development, productivity analysis, structural change analysis and environmental footprint. The book focuses on the transformations of the agricultural sector while taking into account economic, environmental, and social dynamics. The importance of agricultural transformations to the livelihood of the rural population and food security are highlighted. Further, advanced methodologies and frameworks are presented to fathom the underlying trends in different facets of agricultural production. The authors present statistical methods used for the analysis of agricultural sustainability along with applications for agriculture in the European Union. Additionally, they discuss the measures of efficiency, methodological approaches and empirical models. Finally, the book applies econometric and optimization techniques, which are useful for the estimation of the production functions and other representations of technology in the case of the European Union member states. Therefore, the book is a must-read for researchers and students of agricultural and production economics, as well as policy-makers and academia in general.
This book analyzes four distinct, although not dissimilar, areas of social choice theory and welfare economics: nonstrategic choice, Harsanyi's aggregation theorems, distributional ethics and strategic choice. In the aggregation of individual rankings of social states, whether persons behave strategically or non-strategically, decision making takes place under complete certainty; in the Harsanyi framework, by contrast, uncertainty plays a significant role in the decision-making process. Another distinctive feature of the book is its discussion of ethical approaches to the evaluation of inequality arising from unequal distributions of achievements in the different dimensions of human well-being. Given its wide coverage, combined with newly added material, end-of-chapter problems and bibliographical notes, the book will be helpful for students and researchers interested in this frontline area of research. Its lucid exposition, non-technical and graphical illustration of the concepts, and use of numerical examples make the book a useful text.
Master key spreadsheet and business analytics skills with SPREADSHEET MODELING AND DECISION ANALYSIS: A PRACTICAL INTRODUCTION TO BUSINESS ANALYTICS, 9E, written by respected business analytics innovator Cliff Ragsdale. This edition's clear presentation, realistic examples, fascinating topics and valuable software provide everything you need to become proficient in today's most widely used business analytics techniques using the latest version of Excel® in Microsoft® Office 365 or Office 2019. Become skilled in the newest Excel functions as well as the Analytic Solver® and Data Mining add-ins. This edition helps you develop both algebraic and spreadsheet modeling skills. Step-by-step instructions and annotated, full-color screen images make examples easy to follow and show you how to apply what you learn about descriptive, predictive and prescriptive analytics to real business situations. WebAssign online tools and author-created videos further strengthen understanding.
This volume in Advances in Econometrics showcases fresh methodological and empirical research on the econometrics of networks. Comprising theoretical, empirical and policy papers, the authors bring together a wide range of perspectives to facilitate a dialogue between academics and practitioners for a better understanding of this groundbreaking field and its role in policy discussions. This edited collection includes thirteen chapters covering topics such as identification of network models, network formation, networks and spatial econometrics, and applications of financial networks. Readers can also learn about network models with different types of interactions, sample selection in social networks, trade networks, stochastic dynamic programming in space, spatial panels, survival and networks, financial contagion, spillover effects, interconnectedness in consumer credit markets and a financial risk meter. The topics covered in the book, centered on the econometrics of data and models, are a valuable resource for graduate students and researchers in the field. The collection is also useful for industry professionals and data scientists due to its focus on theoretical and applied work.
This volume presents new methods and applications of longitudinal data estimation methodology in applied economics. Featuring selected papers from the 2020 International Conference on Applied Economics (ICOAE 2020), held virtually due to the coronavirus pandemic, the book examines interdisciplinary topics such as financial economics, international economics, agricultural economics, marketing and management. Country-specific case studies are also featured.
The Council of the European Union is the institutional heart of EU policy-making. But 'who gets what, when and how' in the Council? What are the dimensions of political conflict, and which countries form coalitions in the intense negotiations to achieve their desired policy outcomes? Focussing on collective decision-making in the Council between 1998 and 2007, this book provides a comprehensive account of these salient issues that lie at the heart of political accountability and legitimacy in the European Union. Based on a novel and unique dataset of estimates of government policy positions, salience and power in influencing deliberations, an explanatory model approximating the Nash bargaining solution is employed to predict policy outcomes in ten policy domains of central importance to this institution. The book's analyses comprise investigations into the determinants of decision-making success, the architecture of the political space and the governments' coalition behavior.
The chapters in this book describe various aspects of the application of statistical methods in finance. The book will interest and attract statisticians to this area, illustrate some of the many ways that statistical tools are used in financial applications, and give some indication of problems which are still outstanding. Statisticians will be stimulated to learn more about the kinds of models and techniques outlined here: both the domain of finance and the science of statistics will benefit from increased awareness by statisticians of the problems, models, and techniques applied in financial applications. For this reason, extensive references are given. The level of technical detail varies between the chapters: some present broad non-technical overviews of an area, while others describe the mathematical niceties. This illustrates the range of possibilities the area offers statisticians, while giving a flavour of the different kinds of mathematical and statistical skills required. Whether you favour data analysis or mathematical manipulation, if you are a statistician there are problems in finance appropriate to your skills.
This book gives a thorough and systematic introduction to the latest research results on fuzzy decision-making methods based on prospect theory. It includes eight chapters: Introduction; Intuitionistic fuzzy MADM based on prospect theory; QUALIFLEX based on prospect theory with probabilistic linguistic information; Group PROMETHEE based on prospect theory with hesitant fuzzy linguistic information; Prospect consensus with probabilistic hesitant fuzzy preference information; Improved TODIM based on prospect theory; the improved TODIM with probabilistic hesitant fuzzy information; etc. The book is suitable for researchers in the fields of fuzzy mathematics, operations research, behavioral science, and management science and engineering. It is also useful as a textbook for postgraduate and senior-year undergraduate students at relevant professional institutions of higher learning.
This book scientifically tests the assertion that accommodative monetary policy can eliminate the "crowd out" problem, allowing fiscal stimulus programs (such as tax cuts or increased government spending) to stimulate the economy as intended. It also tests whether natural growth in the economy can cure the crowd out problem as well or better. The book is intended to be the largest-scale scientific test ever performed on this topic. It includes about 800 separate statistical tests on the U.S. economy covering different parts or all of the period 1960-2010. These tests focus on whether accommodative monetary policy, which increases the pool of loanable resources, can offset the crowd out problem as well as natural growth in the economy can. The book, employing the best scientific methods available to economists for this type of problem, concludes that accommodative monetary policy could have done so, but until the quantitative easing program, Federal Reserve efforts to accommodate fiscal stimulus programs were not large enough to offset more than 23% to 44% of any one year's crowd out problem. That provides the science part of the answer as to why accommodative monetary policy didn't accommodate: too little of it was tried. The book also tests whether other increases in loanable funds, occurring because of natural growth in the economy or changes in the savings rate, can also offset crowd out. It concludes they can, and that these changes tend to be several times as effective as accommodative monetary policy. This book's companion volume, Why Fiscal Stimulus Programs Fail, explores the policy implications of these results.
In recent years econometricians have examined the problems of diagnostic testing, specification testing, semiparametric estimation and model selection. Researchers have also considered whether to use model testing and model selection procedures to decide which models best fit a particular dataset. This book explores both issues with application to various regression models, including arbitrage pricing theory models. It is ideal as a reference for postgraduate students in the statistical sciences, academic researchers and policy makers seeking to understand the current status of model building and testing techniques.
You may like...
Operations And Supply Chain Management - David Collier, James Evans (Hardcover)
Statistics for Business & Economics… - James McClave, P Benson, … (Paperback, R2,601 / Discovery Miles 26 010)
Contemporary Project Management… - Timothy Kloppenborg, Kathryn Wells, … (Paperback)
Handbook of Research Methods and… - Nigar Hashimzade, Michael A. Thornton (Hardcover, R8,286 / Discovery Miles 82 860)
Tax Policy and Uncertainty - Modelling… - Christopher Ball, John Creedy, … (Hardcover, R2,791 / Discovery Miles 27 910)
Introductory Econometrics - A Modern… - Jeffrey Wooldridge (Hardcover)