Pioneered by American economist Paul Samuelson, revealed preference theory is based on the idea that the preferences of consumers are revealed in their purchasing behavior. Researchers in this field have developed complex and sophisticated mathematical models to capture the preferences that are 'revealed' through consumer choice behavior. This study of consumer demand and behavior is closely tied up with econometrics (especially nonparametric econometrics), where testing the validity of different theoretical models is an important aspect of research. The theory of revealed preference has a very long and distinguished tradition in economics, but there was no systematic presentation of the theory until now. This book deals with basic questions in economic theory, such as the relation between theory and data, and studies the situations in which empirical observations are consistent or inconsistent with some of the best known theories in economics.
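To make the flavor of such consistency tests concrete, here is a minimal Python sketch (not from the book; data and names are illustrative) of checking observed choice data against the Generalized Axiom of Revealed Preference (GARP), one of the best-known conditions of this kind:

```python
import numpy as np

def satisfies_garp(P, Q):
    """Check whether observed choices satisfy the Generalized Axiom of
    Revealed Preference (GARP).  P and Q are T x n arrays of prices and
    chosen quantity bundles over T observations of n goods."""
    T = P.shape[0]
    cost = P @ Q.T                 # cost[i, j] = cost of bundle j at prices i
    own = np.diag(cost)            # p_i . q_i, expenditure at each observation
    # Direct revealed preference: bundle j was affordable when i was chosen
    R = own[:, None] >= cost       # i R0 j  <=>  p_i.q_i >= p_i.q_j
    # Transitive closure via Warshall's algorithm
    for k in range(T):
        R = R | (R[:, [k]] & R[[k], :])
    # Violation: i revealed preferred to j, yet j strictly directly
    # revealed preferred to i (p_j.q_j > p_j.q_i)
    strict = own[:, None] > cost
    return not np.any(R & strict.T)

# Two observations that form a revealed-preference cycle
P = np.array([[1.0, 1.0], [1.0, 2.0]])
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
print(satisfies_garp(P, Q))   # False: these choices violate GARP
```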
"Bayesian Econometrics" illustrates the scope and diversity of modern applications, reviews some recent advances, and highlights many desirable aspects of inference and computations. It begins with an historical overview by Arnold Zellner who describes key contributions to development and makes predictions for future directions. In the second paper, Giordani and Kohn makes suggestions for improving Markov chain Monte Carlo computational strategies. The remainder of the book is categorized according to microeconometric and time-series modeling. Models considered include an endogenous selection ordered probit model, a censored treatment-response model, equilibrium job search models and various other types. These are used to study a variety of applications for example dental insurance and care, educational attainment, voter opinions and the marketing share of various brands and an aggregate cross-section production function. Models and topics considered include the potential problem of improper posterior densities in a variety of dynamic models, selection and averaging for forecasting with vector autoregressions, a consumption capital-asset pricing model and various others. Applications involve U.S. macroeconomic variables, exchange rates, an investigation of purchasing power parity, data from London Metals Exchange, international automobile production data, and data from the Asian stock market.
Microeconometrics Using Stata, Second Edition is an invaluable reference for researchers and students interested in applied microeconometric methods. Like previous editions, this text covers all the classic microeconometric techniques, ranging from linear models to instrumental-variables regression to panel-data estimation to nonlinear models such as probit, tobit, Poisson, and choice models. Each of these discussions has been updated to show the most modern implementation in Stata, and many include additional explanation of the underlying methods. In addition, the authors introduce readers to performing simulations in Stata and then use simulations to illustrate methods in other parts of the book. They even teach you how to code your own estimators in Stata. The second edition is greatly expanded; the new material is so extensive that the text now comprises two volumes. In addition to the classics, the book now teaches recently developed econometric methods and the methods newly added to Stata. Specifically, the book includes entirely new chapters on duration models; randomized control trials and exogenous treatment effects; endogenous treatment effects; models for endogeneity and heterogeneity, including finite mixture models, structural equation models, and nonlinear mixed-effects models; spatial autoregressive models; semiparametric regression; lasso for prediction and inference; and Bayesian analysis. Anyone interested in learning classic and modern econometric methods will find this the perfect companion. And those who apply these methods to their own data will return to this reference over and over as they need to implement the various techniques described in this book.
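The book's own code is in Stata; as a language-neutral illustration of the "code your own estimator" idea, here is a hedged Python sketch of probit estimation by maximum likelihood on simulated data (all names and values are illustrative):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

# Simulated data, in the spirit of the book's simulation-based illustrations
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.5, -1.0])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def neg_loglik(beta):
    """Negative probit log-likelihood."""
    xb = X @ beta
    return -np.sum(y * stats.norm.logcdf(xb) + (1 - y) * stats.norm.logcdf(-xb))

res = optimize.minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(res.x)   # should be close to beta_true
```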
This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Unlike other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to practically handle violations of the classical linear regression assumptions, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences.
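For readers who want to see the computation behind simple linear regression, here is a short Python sketch (the book itself uses Stata, R and SPSS; the data here are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: y = 2 + 3*x + noise
x = rng.uniform(0, 10, 200)
y = 2.0 + 3.0 * x + rng.normal(0, 2, 200)

X = np.column_stack([np.ones_like(x), x])          # add an intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # OLS coefficients
resid = y - X @ beta
sigma2 = resid @ resid / (len(y) - X.shape[1])     # residual variance
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print(beta, se)   # slope near 3, with its standard error
```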
This ambitious book looks 'behind the model' to reveal how economists use formal models to generate insights into the economy. Drawing on recent work in the philosophy of science and economic methodology, the book presents a novel framework for understanding the logic of economic modeling. It also reveals the ways in which economic models can mislead rather than illuminate. Importantly, the book goes beyond purely negative critique, proposing a concrete program of methodological reform to better equip economists to detect potential mismatches between their models and the targets of their inquiry. Ranging across economics, philosophy, and social science methods, and drawing on a variety of examples, including the recent financial crisis, Behind the Model will be of interest to anyone who has wondered how economics works - and why it sometimes fails so spectacularly.
The Handbook of U.S. Labor Statistics is recognized as an authoritative resource on the U.S. labor force. It continues and enhances the Bureau of Labor Statistics's (BLS) discontinued publication, Labor Statistics. It allows the user to understand recent developments as well as to compare today's economy with past history. The 24th edition includes the new employment projections from 2019 to 2029; new projections are released only every two years. The Handbook is a comprehensive reference providing an abundance of data on a variety of topics, including: employment and unemployment; earnings; prices; productivity; consumer expenditures; occupational safety and health; union membership; the working poor; recent trends in the labor force; and much more. In addition to over 215 tables that present practical data, the Handbook provides: introductory material for each chapter that contains highlights of salient data and figures that call attention to noteworthy trends in the data; notes and definitions, which contain concise descriptions of the data sources, concepts, definitions, and methodology from which the data are derived; and references to more comprehensive reports that provide additional data and more extensive descriptions of estimation methods, sampling, and reliability measures.
Nonlinear models have been used extensively in the areas of economics and finance. Recent literature on the topic has shown that a large number of series exhibit nonlinear dynamics, as opposed to linear dynamics. Incorporating these concepts involves deriving and estimating nonlinear time series models, which have typically taken the form of Threshold Autoregression (TAR) models, Exponential Smooth Transition (ESTAR) models, and Markov Switching (MS) models, among several others. This edited volume provides a timely overview of nonlinear estimation techniques, offering new methods and insights into nonlinear time series analysis. It features cutting-edge research from leading academics in economics, finance, and business management, focusing on such topics as Zero-Information-Limit-Conditions, using Markov Switching models to analyze economic series, and how best to distinguish between competing nonlinear models. Principles and techniques in this book will appeal to econometricians, finance professors teaching quantitative finance, researchers, and graduate students interested in learning how to apply advances in nonlinear time series modeling to solve complex problems in economics and finance.
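As a concrete taste of this class of models, the following Python sketch (illustrative, not taken from the volume) simulates a two-regime threshold autoregression and estimates the threshold by grid search over candidate values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a two-regime threshold AR(1): slope 0.8 below 0, -0.5 above
y = np.zeros(600)
for t in range(1, 600):
    phi = 0.8 if y[t - 1] <= 0.0 else -0.5
    y[t] = phi * y[t - 1] + rng.standard_normal()

ylag, ycur = y[:-1], y[1:]

def ssr_at(c):
    """Total sum of squared residuals from regime-wise OLS at threshold c."""
    total = 0.0
    for mask in (ylag <= c, ylag > c):
        X = np.column_stack([np.ones(mask.sum()), ylag[mask]])
        b, *_ = np.linalg.lstsq(X, ycur[mask], rcond=None)
        r = ycur[mask] - X @ b
        total += r @ r
    return total

# Grid search over interior sample quantiles of the threshold variable
grid = np.quantile(ylag, np.linspace(0.15, 0.85, 71))
c_hat = grid[np.argmin([ssr_at(c) for c in grid])]
print(c_hat)   # should land near the true threshold, 0.0
```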
The book provides a description of the process of health economic evaluation and modelling for cost-effectiveness analysis, particularly from the perspective of a Bayesian statistical approach. Some relevant theory and introductory concepts are presented using practical examples and two running case studies. The book also describes in detail how to perform health economic evaluations using the R package BCEA (Bayesian Cost-Effectiveness Analysis). BCEA can be used to post-process the results of a Bayesian cost-effectiveness model and perform advanced analyses producing standardised and highly customisable outputs. It presents all the features of the package, including its many functions and their practical application, as well as its user-friendly web interface. The book is a valuable resource for statisticians and practitioners working in the field of health economics wanting to simplify and standardise their workflow, for example in the preparation of dossiers in support of marketing authorisation, or academic and scientific publications.
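BCEA itself is an R package; as a rough Python illustration of the quantities such a package post-processes, the sketch below computes the incremental cost-effectiveness ratio (ICER), the expected incremental benefit and the cost-effectiveness acceptability curve from hypothetical posterior draws (all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical posterior draws of incremental effectiveness (QALYs) and
# incremental cost for a reference treatment versus a comparator; in
# practice these come from a Bayesian cost-effectiveness model
n = 5000
de = rng.normal(0.30, 0.10, n)        # incremental effectiveness, e1 - e0
dc = rng.normal(2000.0, 800.0, n)     # incremental cost, c1 - c0

k = np.linspace(0, 50000, 501)        # willingness-to-pay grid
inb = k[:, None] * de - dc            # incremental net benefit per draw

eib = inb.mean(axis=1)                # expected incremental benefit
ceac = (inb > 0).mean(axis=1)         # acceptability curve
icer = dc.mean() / de.mean()          # incremental cost-effectiveness ratio
print(icer, ceac[k == 20000][0])      # prob. cost-effective at k = 20000
```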
This book provides the tools and concepts necessary to study the behavior of econometric estimators and test statistics in large samples. An econometric estimator is a solution to an optimization problem; that is, a problem that requires a body of techniques to determine, within a defined set of possible alternatives, the specific solution that best satisfies a selected objective function or set of constraints. Thus, this highly mathematical book investigates large-sample situations in which the assumptions of the classical linear model fail. Economists, of course, face these situations often.
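A minimal Python sketch of this optimization-problem view of estimation (data and criterion are illustrative): the estimator below is defined as the minimizer of a sample objective function, exactly as described above:

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(5)

# Nonlinear regression y = exp(theta * x) + noise; the estimator is the
# minimiser of a sample criterion over the parameter set
x = rng.uniform(0, 2, 500)
y = np.exp(0.7 * x) + rng.normal(0, 0.1, 500)

def objective(theta):
    return np.mean((y - np.exp(theta * x)) ** 2)   # sample criterion Q_n

res = optimize.minimize_scalar(objective, bounds=(-2, 2), method="bounded")
print(res.x)   # consistent for the true value 0.7 as the sample grows
```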
This volume systematically details both the basic principles and new developments in Data Envelopment Analysis (DEA), offering a solid understanding of the methodology, its uses, and its potential. New material in this edition includes coverage of recent developments that have greatly extended the power and scope of DEA and have led to new directions for research and DEA uses. Each chapter accompanies its developments with simple numerical examples and discussions of actual applications. The first nine chapters cover the basic principles of DEA, while the final seven chapters provide a more advanced treatment.
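To show the mechanics, here is a hedged Python sketch of the basic input-oriented CCR efficiency score as a linear program (data are illustrative; real DEA software adds many refinements):

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR efficiency under constant returns to scale.
# X is (m inputs x n DMUs), Y is (s outputs x n DMUs); data are illustrative.
X = np.array([[2.0, 4.0, 6.0, 8.0],
              [3.0, 1.0, 2.0, 5.0]])
Y = np.array([[1.0, 2.0, 3.0, 3.0]])

def ccr_efficiency(X, Y, j0):
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1 .. lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j x_ij <= theta * x_i,j0  ->  -x_i,j0*theta + X lambda <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    # sum_j lambda_j y_rj >= y_r,j0         ->  -Y lambda <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun   # efficiency score theta* in (0, 1]

print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
```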
Many economic theories depend on the presence or absence of a unit root for their validity, and econometric and statistical theory undergo considerable changes when unit roots are present. Knowledge of unit roots has thus become so important as to necessitate an extensive, compact, and nontechnical book on the subject. Resting on this motivation, this book introduces the literature on unit roots in a comprehensive manner to both empirical and theoretical researchers in economics and other areas. Providing a clear, complete, and critical discussion of the unit root literature, In Choi covers a wide range of topics, including uniform confidence interval construction, unit root tests allowing structural breaks, mildly explosive processes, exuberance testing, fractionally integrated processes, seasonal unit roots, and panel unit root testing. Extensive, up to date, and readily accessible, this book is a comprehensive reference source on unit roots for both students and applied workers.
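As a minimal illustration of the subject, the following Python sketch computes the Dickey-Fuller regression statistic for a random walk and for a stationary series (the quoted 5% critical value, about -2.86 for the case with a constant, is the standard tabulated approximation):

```python
import numpy as np

rng = np.random.default_rng(6)

def df_stat(y):
    """t-statistic from the Dickey-Fuller regression
    dy_t = alpha + rho * y_{t-1} + e_t; under the unit-root null rho = 0
    and the statistic follows the non-standard DF distribution."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ b
    s2 = resid @ resid / (len(dy) - 2)
    se_rho = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return b[1] / se_rho

rw = np.cumsum(rng.standard_normal(500))    # unit-root process
ar = rng.standard_normal(500)
for t in range(1, 500):
    ar[t] += 0.5 * ar[t - 1]                # stationary AR(1)
# Reject the unit-root null when the statistic falls below roughly -2.86
print(df_stat(rw), df_stat(ar))
```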
Measurement in Economics: A Handbook aims to serve as a source, reference, and teaching supplement for quantitative empirical economics, inside and outside the laboratory. Covering an extensive range of fields in economics (econometrics, actuarial science, experimental economics, index theory, national accounts, and economic forecasting), it is the first book that takes measurement in economics as its central focus. It shows how different and sometimes distinct fields share the same kinds of measurement problems, and so how the treatment of these problems in one field can serve as guidance in other fields. This volume provides comprehensive and up-to-date surveys of recent developments in economic measurement, written at a level intended for professional use by economists, econometricians, statisticians and social scientists.
Putting Econometrics in its Place is an original and fascinating book, in which Peter Swann argues that econometrics has dominated applied economics for far too long and displaced other essential techniques. While Peter Swann is critical of the monopoly that econometrics currently holds in applied economics, the more important and positive contribution of the book is to propose a new direction and a new attitude to applied economics. The advance of econometrics from its early days has been a massive achievement, but it has also been problematic; practical results from the use of econometrics are often disappointing. The author argues that to get applied economics back on course economists must use a much wider variety of research techniques, and must once again learn to respect vernacular knowledge of the economy. This vernacular includes the knowledge gathered by ordinary people from their everyday interactions with markets. While vernacular knowledge is often unsystematic and informal, it offers insights that can never be found from formal analysis alone. Serious, original and sometimes contentious, the book will attract a varied and international readership. Scholars throughout the many fields of economics - both skilled and unskilled in econometrics - are likely to be intrigued by the serious alternative approaches outlined within the book. It will also appeal to communities of economists outside economics departments in government, industry and business as well as business and management schools. Research centres for applied economics, policy research and innovation research will also find it of interest due to its focus on getting reliable results rather than methodological orthodoxy for its own sake.
This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data, using Excel, Minitab, and SAS. Every chapter engages the reader with data on individual stocks, stock indices, options, and futures. One studies and uses statistics to learn how to study, analyze, and understand a data set of particular interest. Among the more popular statistical programs developed to analyze data sets are SAS, SPSS, and Minitab; of those, this textbook uses Minitab and SAS. One of the main reasons to use Minitab is that it is the easiest to use of the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel, as its benefits have become widely recognized in the academic world and its analytical capabilities extend to about 90 percent of the statistical analysis done in the business world. We demonstrate much of our statistical analysis using Excel and double-check the analysis and outcomes using Minitab and SAS, which are also helpful for some analytical methods not possible or practical in Excel.
Three leading experts have produced a landmark work based on a set of working papers published by the Center for Operations Research and Econometrics (CORE) at the Universite Catholique de Louvain in 1994 under the title 'Repeated Games', which holds almost mythic status among game theorists. Jean-Francois Mertens, Sylvain Sorin and Shmuel Zamir have significantly elevated the clarity and depth of presentation with many results presented at a level of generality that goes far beyond the original papers - many written by the authors themselves. Numerous results are new, and many classic results and examples are not to be found elsewhere. Most remain state of the art in the literature. This book is full of challenging and important problems that are set up as exercises, with detailed hints provided for their solutions. A new bibliography traces the development of the core concepts up to the present day.
This book discusses the problem of model choice when the statistical models are separate, also called nonnested. Chapter 1 provides an introduction, motivating examples and a general overview of the problem. Chapter 2 presents the classical or frequentist approach to the problem as well as several alternative procedures and their properties. Chapter 3 explores the Bayesian approach, the limitations of the classical Bayes factors and the proposed alternative Bayes factors to overcome these limitations. It also discusses a significance Bayesian procedure. Lastly, Chapter 4 examines the pure likelihood approach. Various real-data examples and computer simulations are provided throughout the text.
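For a concrete example of a classical procedure for separate models, here is a Python sketch of a Vuong-type test comparing nonnested normal and Laplace models fitted to the same data (simulated; the degrees-of-freedom correction is omitted for brevity):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.standard_normal(400)                     # data actually normal

# Fit two separate (nonnested) models by maximum likelihood
mu, sd = x.mean(), x.std()                       # normal MLE
loc = np.median(x)
b = np.abs(x - loc).mean()                       # Laplace MLE

l1 = stats.norm.logpdf(x, mu, sd)                # pointwise log-likelihoods
l2 = stats.laplace.logpdf(x, loc, b)
d = l1 - l2
z = np.sqrt(len(x)) * d.mean() / d.std()         # Vuong statistic ~ N(0,1)
print(z, 2 * stats.norm.sf(abs(z)))              # positive z favours the normal
```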
The majority of empirical research in economics ignores the potential benefits of nonparametric methods, while the majority of advances in nonparametric theory ignores the problems faced in applied econometrics. This book helps bridge this gap between applied economists and theoretical nonparametric econometricians. It discusses in depth, and in terms that someone with only one year of graduate econometrics can understand, basic to advanced nonparametric methods. The analysis starts with density estimation and motivates the procedures through methods that should be familiar to the reader. It then moves on to kernel regression, estimation with discrete data, and advanced methods such as estimation with panel data and instrumental variables models. The book pays close attention to the issues that arise with programming, computing speed, and application. In each chapter, the methods discussed are applied to actual data, paying attention to presentation of results and potential pitfalls.
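As a small taste of the methods covered, here is a Python sketch of Nadaraya-Watson kernel regression with a Gaussian kernel (data, bandwidth and names are illustrative; bandwidth selection is a topic in its own right):

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.uniform(-3, 3, 300)
y = np.sin(x) + rng.normal(0, 0.3, 300)

def nw_regression(x0, x, y, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel and
    bandwidth h, evaluated at the points in x0."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(-3, 3, 61)
print(nw_regression(grid, x, y, h=0.4)[::10])   # estimates of sin(x) on the grid
```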
Handbook of Field Experiments provides tactics on how to conduct experimental research, also presenting a comprehensive catalog of new results from research and areas that remain to be explored. This updated addition to the series includes entire chapters on field experiments, the politics and practice of social experiments, the methodology and practice of RCTs, and the econometrics of randomized experiments. These topics apply to a wide variety of fields, from politics to education to firm productivity, providing readers with a resource that sheds light on timely issues, such as robustness and external validity. Separating itself from the circumscribed debates of specialists, this volume surpasses in usefulness the many journal articles and narrowly defined books written by practitioners.
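To fix ideas on the econometrics of randomized experiments, here is a minimal Python sketch of the difference-in-means treatment-effect estimate with its Neyman standard error (simulated data; real field experiments raise many further issues discussed in the handbook):

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulated randomized experiment: treatment raises the outcome by 2.0
n = 1000
treat = rng.integers(0, 2, n)
y = 5.0 + 2.0 * treat + rng.normal(0, 3, n)

y1, y0 = y[treat == 1], y[treat == 0]
ate = y1.mean() - y0.mean()                      # difference in means
se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))  # Neyman SE
print(ate, (ate - 1.96 * se, ate + 1.96 * se))   # estimate and 95% CI
```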
This book presents a novel approach to time series econometrics, which studies the behavior of nonlinear stochastic processes. This approach allows for an arbitrary dependence structure in the increments and provides a generalization with respect to the standard linear independent increments assumption of classical time series models. The book offers a solution to the problem of a general semiparametric approach, which is given by a concept called C-convolution (convolution of dependent variables), and the corresponding theory of convolution-based copulas. Intended for econometrics and statistics scholars with a special interest in time series analysis and copula functions (or other nonparametric approaches), the book is also useful for doctoral students with a basic knowledge of copula functions wanting to learn about the latest research developments in the field.
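The C-convolution construction itself is specialized, but the general copula approach to serial dependence can be sketched briefly: the Python example below builds a stationary Markov chain whose one-step dependence is a Gaussian copula and whose marginal is exponential (all choices are illustrative and are not the book's construction):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Markov chain with Gaussian-copula serial dependence (correlation rho)
# and an arbitrary marginal obtained by the quantile transform
rho, n = 0.8, 1000
z = np.empty(n)
z[0] = rng.standard_normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
u = stats.norm.cdf(z)          # uniforms with Gaussian-copula dependence
y = stats.expon.ppf(u)         # exponential marginal; any F^{-1} would do
print(y.mean(), np.corrcoef(y[:-1], y[1:])[0, 1])
```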
This book treats the latest developments in the theory of order-restricted inference, with special attention to nonparametric methods and algorithmic aspects. Among the topics treated are current status and interval censoring models, competing risk models, and deconvolution. Methods of order restricted inference are used in computing maximum likelihood estimators and developing distribution theory for inverse problems of this type. The authors have been active in developing these tools and present the state of the art and the open problems in the field. The earlier chapters provide an introduction to the subject, while the later chapters are written with graduate students and researchers in mathematical statistics in mind. Each chapter ends with a set of exercises of varying difficulty. The theory is illustrated with the analysis of real-life data, which are mostly medical in nature.
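The workhorse algorithm of this field, pool-adjacent-violators (PAVA), fits in a few lines; here is a hedged Python sketch for the monotone least-squares problem:

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: the (weighted) least-squares fit to y under
    the restriction that the fitted values are non-decreasing -- the
    canonical computation of order-restricted inference."""
    w = np.ones(len(y)) if w is None else np.asarray(w, float)
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(float(yi)); wts.append(float(wi)); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:     # pool violators
            wn = wts[-2] + wts[-1]
            vn = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wn
            cn = cnts[-2] + cnts[-1]
            vals[-2:], wts[-2:], cnts[-2:] = [vn], [wn], [cn]
    return np.repeat(vals, cnts)

print(pava([1.0, 3.0, 2.0, 4.0, 3.5]))   # -> [1.0, 2.5, 2.5, 3.75, 3.75]
```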
This book provides advanced theoretical and applied tools for the implementation of modern micro-econometric techniques in evidence-based program evaluation for the social sciences. The author presents a comprehensive toolbox for designing rigorous and effective ex-post program evaluation using the statistical software package Stata. For each method, a statistical presentation is developed, followed by a practical estimation of the treatment effects. By using both real and simulated data, readers will become familiar with evaluation techniques, such as regression-adjustment, matching, difference-in-differences, instrumental-variables and regression-discontinuity-design, and will be given practical guidelines for selecting and applying suitable methods for specific policy contexts.
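As a minimal illustration of one of these methods, here is a Python sketch of the classic two-group, two-period difference-in-differences regression on simulated data (the book itself works in Stata):

```python
import numpy as np

rng = np.random.default_rng(11)

# Two groups x two periods; the treated group receives the program in the
# post period and the true effect is 1.5 (simulated for illustration)
n = 2000
group = rng.integers(0, 2, n)            # 1 = treated group
post = rng.integers(0, 2, n)             # 1 = after the program
y = 1.0 + 0.5 * group + 0.8 * post + 1.5 * group * post + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), group, post, group * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[3])   # difference-in-differences estimate of the effect, ~1.5
```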
The series is designed to bring together those mathematicians who are seriously interested in getting new challenging stimuli from economic theories with those economists who are seeking effective mathematical tools for their research. Many economic problems can be formulated as constrained optimization or equilibrium problems. Various mathematical theories have supplied economists with indispensable machinery for these problems arising in economic theory; conversely, mathematicians have been stimulated by various mathematical difficulties raised by economic theories.
This book discusses the integrated concepts of statistical quality engineering and management tools. It will help readers to understand and apply the concepts of quality through project management and technical analysis, using statistical methods. Prepared in a ready-to-use form, the text will equip practitioners to implement the Six Sigma principles in projects. The concepts discussed are all critically assessed and explained, allowing them to be practically applied in managerial decision-making, and in each chapter, the objectives and connections to the rest of the work are clearly illustrated. To aid in understanding, the book includes a wealth of tables, graphs, descriptions and checklists, as well as charts and plots, worked-out examples and exercises. Perhaps the most unique feature of the book is its approach, using statistical tools to explain the science behind Six Sigma project management, integrated with engineering concepts. The material on quality engineering and statistical management tools offers valuable support for undergraduate, postgraduate and research students. The book can also serve as a concise guide for Six Sigma professionals, Green Belt, Black Belt and Master Black Belt trainers.
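As one small example of the statistical tools involved, here is a Python sketch computing the process capability indices Cp and Cpk, a staple Six Sigma calculation (specification limits and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(12)

lsl, usl = 9.0, 11.0                     # lower / upper specification limits
x = rng.normal(10.1, 0.25, 500)          # measured process output

mu, sigma = x.mean(), x.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)                   # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)      # actual (centred) capability
print(round(cp, 2), round(cpk, 2))
```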
This book presents the latest advances in the theory and practice of Marshall-Olkin distributions. These distributions have been increasingly applied in statistical practice in recent years, as they make it possible to describe interesting features of stochastic models like non-exchangeability, tail dependencies and the presence of a singular component. The book presents cutting-edge contributions in this research area, with a particular emphasis on financial and economic applications. It is recommended for researchers working in applied probability and statistics, as well as for practitioners interested in the use of stochastic models in economics. This volume collects selected contributions from the conference “Marshall-Olkin Distributions: Advances in Theory and Applications,” held in Bologna on October 2-3, 2013.
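To illustrate the singular component mentioned above, here is a Python sketch simulating the classical Marshall-Olkin bivariate exponential via its common-shock representation (rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(13)

# Marshall-Olkin bivariate exponential: individual shocks at rates l1, l2
# and a common shock at rate l12 that hits both components at once
l1, l2, l12, n = 1.0, 1.5, 0.8, 100_000
e1 = rng.exponential(1 / l1, n)
e2 = rng.exponential(1 / l2, n)
e12 = rng.exponential(1 / l12, n)
x, y = np.minimum(e1, e12), np.minimum(e2, e12)

# The singular component: positive probability mass on the diagonal x == y
print((x == y).mean())          # close to l12/(l1+l2+l12), here ~0.242
```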
You may like...
Statistics for Management - Pearson New… by Richard Levin and David Rubin (Paperback, R2,586)
Statistics for Business and Economics… by Paul Newbold, William Carlson, … (R2,178)
Linear and Non-Linear Financial… by Mehmet Kenan Terzioglu and Gordana Djurovic (Hardcover)
Operations and Supply Chain Management by James Evans and David Collier (Hardcover)
Introductory Econometrics - A Modern… by Jeffrey Wooldridge (Hardcover)