This book contains an extensive up-to-date overview of nonlinear
time series models and their application to modelling economic
relationships. It considers nonlinear models in stationary and
nonstationary frameworks, and both parametric and nonparametric
models are discussed. The book contains examples of nonlinear
models in economic theory and presents the most common nonlinear
time series models. Importantly, it shows the reader how to apply
these models in practice. For this purpose, the three stages of
building a nonlinear model (specification, estimation and
evaluation) are discussed in detail and illustrated by several
examples involving both economic and
non-economic data. Since estimation of nonlinear time series models
is carried out using numerical algorithms, the book contains a
chapter on estimating parametric nonlinear models and another on
estimating nonparametric ones.
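Numerical estimation of the kind those chapters describe typically means minimising a residual sum of squares with an iterative algorithm. As a minimal sketch (the exponential model, the simulated data, and the use of SciPy are illustrative assumptions, not material from the book):

```python
# Minimal sketch of parametric nonlinear estimation by numerical least
# squares. The model y = a * exp(b * x) + noise and all parameter values
# are illustrative assumptions, not taken from the book.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(42)

a_true, b_true = 2.0, 0.5
x = np.linspace(0.0, 3.0, 200)
y = a_true * np.exp(b_true * x) + rng.normal(scale=0.1, size=x.size)

def residuals(theta):
    a, b = theta
    return a * np.exp(b * x) - y

# Iterative numerical minimisation of the sum of squared residuals
fit = least_squares(residuals, x0=[1.0, 0.1])
a_hat, b_hat = fit.x
```

The same pattern, a residual function handed to a numerical optimiser, carries over to the threshold and smooth-transition specifications that books in this area treat.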
This book offers a series of statistical tests to determine whether the "crowd out" problem, known to hinder the effectiveness of Keynesian economic stimulus programs, can be overcome by monetary programs. It concludes that there are programs that can do this, specifically "accommodative monetary policy." Such policies were not used to any great extent prior to the Quantitative Easing program in 2008, causing many fiscal stimulus programs to fail through no fault of their own. The book includes exhaustive statistical tests to prove this point. It also contains a policy analysis section, which examines how effectively the Federal Reserve's anti-crowd-out programs have actually worked, to the extent they were undertaken at all. It finds statistical evidence that using commercial and savings banks instead of investment banks when implementing accommodative monetary policy would have markedly improved its effectiveness. This volume, with its companion volume Why Fiscal Stimulus Programs Fail, Volume 2: Statistical Tests Comparing Monetary Policy to Growth, provides 1000 separate statistical tests on the US economy to prove these assertions.
We live in a time of economic virtualism, whereby our lives are
made to conform to the virtual reality of economic thought.
Globalization, transnational capitalism, structural adjustment
programmes and the decay of welfare are all signs of the growing
power of economics, one of the most potent forces of recent
decades. In the last thirty years, economics has ceased to be just
an academic discipline concerned with the study of economy, and has
come to be the only legitimate way to think about all aspects of
society and how we order our lives. Economic models are no longer
measured against the world they seek to describe, but instead the
world is measured against them, found wanting and made to conform.
Agricultural Statistics is published each year to meet the diverse need for a reliable reference book on agricultural production, supplies, consumption, facilities, costs, and returns. Its tables of annual data cover a wide variety of facts in forms suited to most common use. Statistics presented in many of the tables represent actual counts of the items covered. Most of the statistics relating to foreign trade and to government programs, such as numbers and amounts of loans made to farmers, and amounts of loans made by the Commodity Credit Corporation, etc., are data of this type. A large number of other tables, however, contain data that are estimates made by the Department of Agriculture. The estimates for crops, livestock, and poultry made by the U.S. Department of Agriculture are prepared mainly to give timely current state and national totals and averages. They are based on data obtained by sample surveys of farmers and of people who do business with farmers. The survey data are supplemented by information from the Censuses of Agriculture taken every five years and check data from various sources. Being estimates, they are subject to revision as more data become available from commercial or government sources. Unless otherwise indicated, the totals for the United States shown in the various tables on area, production, numbers, price, value, supplies, and disposition are based on official Department estimates. They exclude states for which no official estimates are compiled. 
Extensive table data include statistics on the following: grain and feed; cotton, tobacco, sugar crops, and honey; oilseeds, fats, and oils; vegetables and melons; hay, seeds, and minor field crops; cattle, hogs, and sheep; dairy and poultry; insurance, credit, and cooperatives; agricultural conservation and forestry; consumption and family living; fertilizers and pesticides; and miscellaneous agricultural statistics, such as foreign agricultural trade statistics covering exports, fisheries, and more. Professionals in fields such as farming, ranching, soil conservation, surveying, agricultural economics consulting, livestock manufacturing, feedlot operation, food distribution, animal science, food chemistry, food brokerage, and farm and land appraisal may have the greatest interest in this volume.
The State and Metropolitan Area Data Book is the continuation of the U.S. Census Bureau's discontinued publication. It is a convenient summary of statistics on the social and economic structure of the states, metropolitan areas, and micropolitan areas in the United States. It is designed to serve as a statistical reference and guide to other data publications and sources. This new edition features more than 1,500 data items from a variety of sources. It covers many key topical areas including population, birth and death rates, health coverage, school enrollment, crime rates, income and housing, employment, transportation, and government. The metropolitan area information is based on the latest set of definitions of metropolitan and micropolitan areas, and includes: a complete listing and data for all states, metropolitan areas (including micropolitan areas), and their component counties; 2010 census counts and more recent population estimates for all areas; results of the 2016 national and state elections; expanded vital statistics, communication, and criminal justice data; data on migration and commuting habits; American Community Survey 1- and 3-year estimates; data on health insurance and housing and finance matters; accurate and helpful citations to allow the user to directly consult the source; source notes and explanations; and a guide to state statistical abstracts and state information. Economic development officials, regional planners, urban researchers, college students, and data users can easily see the trends and changes affecting the nation today.
Drawing on the author's extensive and varied research, this book provides readers with a firm grounding in the concepts and issues across several disciplines, including economics, nutrition, psychology and public health, in the hope of improving the design of food policies in the developed and developing world. Using longitudinal (panel) data from India, Bangladesh, Kenya, the Philippines, Vietnam, and Pakistan, and extending the analytical framework used in economics and biomedical sciences to include multi-disciplinary analyses, Alok Bhargava shows how rigorous and thoughtful econometric and statistical analysis can improve our understanding of the relationships among socioeconomic, nutritional, and behavioural variables on issues such as cognitive development in children and labour productivity in the developing world. These unique insights, combined with a multi-disciplinary approach, forge the way for a more refined and effective approach to food policy formation going forward. A chapter on the growing obesity epidemic is also included, highlighting a new set of problems facing developed and developing countries alike. The book also includes a glossary of technical terms to assist readers coming from a variety of disciplines.
This book surveys the state-of-the-art in efficiency and productivity analysis, examining advances in the analytical foundations and empirical applications. The analytical techniques developed in this book for efficiency provide alternative ways of defining optimum outcome sets, typically as a (technical) production frontier or as an (economic) cost, revenue or profit frontier, and alternative ways of measuring efficiency relative to an appropriate frontier. Simultaneously, the analytical techniques developed for efficiency analysis extend directly to productivity analysis, thereby providing alternative methods for estimating productivity levels, and productivity change through time or productivity variation across producers. This book includes chapters using data envelopment analysis (DEA) or stochastic frontier analysis (SFA) as quantitative techniques capable of measuring efficiency and productivity. Across the book's 15 chapters, it broadly extends into popular application areas including agriculture, banking and finance, and municipal performance, and relatively new application areas including corporate social responsibility, the value of intangible assets, land consolidation, and the measurement of economic well-being. The chapters also cover topics such as permutation tests for production frontier shifts, new indices of total factor productivity, and also randomized controlled trials and production frontiers.
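For readers unfamiliar with DEA, the input-oriented CCR model reduces to one small linear program per producer: minimise the input contraction factor theta subject to a combination of peers dominating the evaluated unit. A minimal sketch, assuming SciPy and toy single-input, single-output data (not drawn from the book's chapters):

```python
# Input-oriented CCR DEA efficiency via one linear program per DMU.
# The three-producer data set below is an illustrative assumption.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 8.0]])  # inputs: one row per input, one column per DMU
Y = np.array([[2.0, 4.0, 4.0]])  # outputs: one row per output

def dea_ccr_input(X, Y, o):
    """Score for DMU o: min theta s.t. X @ lam <= theta * x_o, Y @ lam >= y_o, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                 # decision vars: [theta, lam_1..lam_n]
    A_in = np.hstack([-X[:, [o]], X])          # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # -(Y @ lam) <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.fun

scores = [dea_ccr_input(X, Y, o) for o in range(X.shape[1])]  # third DMU is inefficient
```

A score of 1 marks a producer on the frontier; a score below 1 is the factor by which its inputs could be contracted while still producing its outputs.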
Microbehavioral Econometric Methods and Environmental Studies uses microeconometric methods to model the behavior of individuals, then demonstrates the modelling approaches in addressing policy needs. It links theory and methods with applications, and it incorporates data to connect individual choices and global environmental issues. This extension of traditional environmental economics presents modeling strategies and methodological techniques, then applies them to hands-on examples. Throughout the book, readers can access chapter summaries, problem sets, household survey data on agricultural and natural resources in Sub-Saharan Africa, South America, and India, and empirical results and solutions from the SAS software.
Gary Madden was a renaissance man with respect to the nexus between information and communications technology (ICT) and economics. He contributed to a variety of fields in ICT: applied econometrics, forecasting, internet governance and policy. This series of essays, two of which were co-authored by Professor Madden prior to his untimely death, cover the range of his research interests. While the essays focus on a number of ICT issues, they are on the frontier of research in the sector. Gerard Faulhaber provides a broad overview of how we have reached the digital age and its implications. The applied econometric section brings the latest research in the area, for example Lester Taylor illustrates how own-price, cross-price and income elasticities can be calculated from survey data and translated into real income effects. The forecasting section ranges from forecasting online political participation to broadband's impact on economic growth. The final section covers aspects of governance and regulation of the ICT sector.
Statistical Programming in SAS, Second Edition provides a foundation for programming to implement statistical solutions using SAS, a system that has been used to solve data analytic problems for more than 40 years. The author includes motivating examples to inspire readers to generate programming solutions. Upper-level undergraduates, beginning graduate students, and professionals involved in generating programming solutions for data-analytic problems will benefit from this book. The ideal background for a reader is some experience with regression modeling and introductory experience with computer programming. The coverage of statistical programming in the second edition includes: getting data into the SAS system, engineering new features, and formatting variables; writing readable and well-documented code; structuring, implementing, and debugging programs; creating solutions to novel problems; combining data sources, extracting parts of data sets, and reshaping data sets as needed for other analyses; generating general solutions using macros; customizing output; producing insight-inspiring data visualizations; parsing, processing, and analyzing text; and programming solutions using matrices and connecting SAS with R. These topics are part of both the base and certification exams.
This book provides extensive literature review and applications of various tests to cover all aspects of research methodology, along with examination questions and strong pedagogy, including regular features such as Concept Checks, Text Overviews, Key Terms, Review Questions, Exercises and References. Though the book is primarily addressed to students, it will be equally useful to researchers and entrepreneurs. More than other research textbooks, this book addresses the students' need to comprehend all aspects of the research process, including the research process itself, clarification of the research problem, ethical issues, survey research, and research report preparation and presentation.
Time Series: A First Course with Bootstrap Starter provides an introductory course on time series analysis that satisfies the triptych of (i) mathematical completeness, (ii) computational illustration and implementation, and (iii) conciseness and accessibility to upper-level undergraduate and M.S. students. Basic theoretical results are presented in a mathematically convincing way, and the methods of data analysis are developed through examples and exercises parsed in R. A student with a basic course in mathematical statistics will learn both how to analyze time series and how to interpret the results. The book provides the foundation of time series methods, including linear filters and a geometric approach to prediction. The important paradigm of ARMA models is studied in-depth, as well as frequency domain methods. Entropy and other information theoretic notions are introduced, with applications to time series modeling. The second half of the book focuses on statistical inference, the fitting of time series models, as well as computational facets of forecasting. Many time series of interest are nonlinear in which case classical inference methods can fail, but bootstrap methods may come to the rescue. Distinctive features of the book are the emphasis on geometric notions and the frequency domain, the discussion of entropy maximization, and a thorough treatment of recent computer-intensive methods for time series such as subsampling and the bootstrap. There are more than 600 exercises, half of which involve R coding and/or data analysis. Supplements include a website with 12 key data sets and all R code for the book's examples, as well as the solutions to exercises.
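To give a flavour of the computer-intensive methods the book emphasises, here is a moving-block bootstrap in Python (the book's own examples are in R; the AR(1) data, block length, and replication count below are illustrative assumptions):

```python
# Moving-block bootstrap: resample overlapping blocks so that short-range
# serial dependence survives inside each block. Data and tuning choices
# here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def moving_block_bootstrap(x, block_len, rng):
    """Build one bootstrap pseudo-series from random overlapping blocks of x."""
    n = len(x)
    n_blocks = -(-n // block_len)  # ceiling division
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

# Simulated AR(1) series and a bootstrap estimate of the standard error of the mean
n = 500
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
boot_means = np.array([moving_block_bootstrap(x, 25, rng).mean() for _ in range(200)])
se_hat = boot_means.std()
```

Because whole blocks are resampled, the pseudo-series inherit the serial correlation that an i.i.d. bootstrap would destroy, which is exactly why block methods work for dependent data.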
Now in its third edition, Essential Econometric Techniques: A Guide to Concepts and Applications is a concise, student-friendly textbook which provides an introductory grounding in econometrics, with an emphasis on the proper application and interpretation of results. Drawing on the author's extensive teaching experience, this book offers intuitive explanations of concepts such as heteroskedasticity and serial correlation, and provides step-by-step overviews of each key topic. This new edition contains more applications, brings in new material including a dedicated chapter on panel data techniques, and moves the theoretical proofs to appendices. After Chapter 7, students will be able to design and conduct rudimentary econometric research. The next chapters cover multicollinearity, heteroskedasticity, and autocorrelation, followed by techniques for time-series analysis and panel data. Excel data sets for the end-of-chapter problems are available as a digital supplement. A solutions manual is also available for instructors, as well as PowerPoint slides for each chapter. Essential Econometric Techniques shows students how economic hypotheses can be questioned and tested using real-world data, and is the ideal supplementary text for all introductory econometrics courses.
In this book, different quantitative approaches to the study of electoral systems are developed: game-theoretic, decision-theoretic, statistical, probabilistic, combinatorial, geometric, and optimization ones. All the authors are prominent scholars from these disciplines. Quantitative approaches offer a powerful tool for detecting inconsistencies or poor performance in actual systems. Applications to concrete settings such as the EU, the American Congress, and regional and committee voting are discussed.
This second edition of Design of Observational Studies is both an introduction to statistical inference in observational studies and a detailed discussion of the principles that guide the design of observational studies. An observational study is an empiric investigation of effects caused by treatments when randomized experimentation is unethical or infeasible. Observational studies are common in most fields that study the effects of treatments on people, including medicine, economics, epidemiology, education, psychology, political science and sociology. The quality and strength of evidence provided by an observational study is determined largely by its design. Design of Observational Studies is organized into five parts. Chapters 2, 3, and 5 of Part I cover concisely many of the ideas discussed in Rosenbaum's Observational Studies (also published by Springer) but in a less technical fashion. Part II discusses the practical aspects of using propensity scores and other tools to create a matched comparison that balances many covariates, and includes an updated chapter on matching in R. In Part III, the concept of design sensitivity is used to appraise the relative ability of competing designs to distinguish treatment effects from biases due to unmeasured covariates. Part IV is new to this edition; it discusses evidence factors and the computerized construction of more than one comparison group. Part V discusses planning the analysis of an observational study, with particular reference to Sir Ronald Fisher's striking advice for observational studies: "make your theories elaborate." This new edition features updated exploration of causal inference, with four new chapters, a new R package DOS2 designed as a companion for the book, and discussion of several of the latest matching packages for R. In particular, DOS2 allows readers to reproduce many analyses from Design of Observational Studies.
The contents of this volume comprise the proceedings of the conference "Equilibrium Theory and Applications." Some of the recent developments in general equilibrium theory are presented in the perspective of actual and potential applications. The conference was organized in honor of Jacques Drèze on the occasion of his sixtieth birthday. Held at C.O.R.E., it was also the unanimous recognition, stressed by Gérard Debreu in his address, of his role as "the architect and builder" of the Center for Operations Research and Econometrics. An introductory address by Gérard Debreu comprises Part 1 of the volume. The rest of the volume is divided into four parts spanning the scope of the conference: Part 2 is on incomplete markets, increasing returns, and information; Part 3 on equilibrium and dynamics; Part 4 on employment, imperfect competition, and macroeconomics; and Part 5 on applied general equilibrium models.
This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Unlike other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to practically handle violations of the classical linear regression assumptions, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences.
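Behind the output those packages print for a simple linear regression sits ordinary least squares. A minimal Python sketch on simulated data (the coefficients and noise level are assumptions, not an example from the book):

```python
# Ordinary least squares for y = b0 + b1 * x + noise, the computation behind
# the regression tables that Stata, R, and SPSS print. Simulated data only.
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=300)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=300)

X = np.column_stack([np.ones_like(x), x])     # design matrix with intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # beta = [intercept, slope]
resid = y - X @ beta
r_squared = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

The estimated intercept and slope land close to the true values of 1.0 and 2.0, and the residuals feed directly into the significance tests and assumption checks the book walks through.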
This is the first of two volumes containing papers and commentaries presented at the Eleventh World Congress of the Econometric Society, held in Montreal, Canada in August 2015. These papers provide state-of-the-art guides to the most important recent research in economics. The book includes surveys and interpretations of key developments in economics and econometrics, and discussion of future directions for a wide variety of topics, covering both theory and application. These volumes provide a unique, accessible survey of progress in the discipline, written by leading specialists in their fields. The first volume includes theoretical and applied papers addressing topics such as dynamic mechanism design, agency problems, and networks.
This compendium contains and explains essential statistical formulas within an economic context. A broad range of aids and supportive examples will help readers to understand the formulas and their practical applications. This statistical formulary is presented in a practice-oriented, clear, and understandable manner, as it is needed for meaningful and relevant application in global business, as well as in the academic setting and economic practice. The topics presented include, but are not limited to: statistical signs and symbols, descriptive statistics, empirical distributions, ratios and index figures, correlation analysis, regression analysis, inferential statistics, probability calculation, probability distributions, theoretical distributions, statistical estimation methods, confidence intervals, statistical testing methods, the Peren-Clement index, and the usual statistical tables. Given its scope, the book offers an indispensable reference guide and is a must-read for undergraduate and graduate students, as well as managers, scholars, and lecturers in business, politics, and economics.
Applied Time Series Modelling and Forecasting provides a relatively non-technical introduction to applied time series econometrics and forecasting involving non-stationary data. The emphasis is very much on the why and how and, as much as possible, the authors confine technical material to boxes or point to the relevant sources for more detailed information. This book is based on an earlier title, Using Cointegration Analysis in Econometric Modelling by Richard Harris. As well as updating material covered in the earlier book, there are two major additions involving panel tests for unit roots and cointegration and forecasting of financial time series. Harris and Sollis have also incorporated as many of the latest techniques in the area as possible, including: testing for periodic integration and cointegration; GLS detrending when testing for unit roots; structural breaks and seasonal unit root testing; testing for cointegration with a structural break; asymmetric tests for cointegration; testing for super-exogeneity; seasonal cointegration in multivariate models; and approaches to structural macroeconomic modelling. In addition, the discussion of certain topics, such as testing for unique vectors, has been simplified. Applied Time Series Modelling and Forecasting has been written for students taking courses in financial economics and forecasting, applied time series, and econometrics at advanced undergraduate and postgraduate levels. It will also be useful for practitioners who wish to understand the application of time series modelling, for example, financial brokers. Data sets and econometric code for implementing some of the more recent procedures covered in the book can be found on the following web site: www.wiley.co.uk/harris
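The unit-root tests at the heart of this material build on the Dickey-Fuller regression, in which the first difference of the series is regressed on its lagged level; a slope coefficient near zero signals a unit root. A stripped-down sketch (no constant, no lag augmentation; the simulated series are illustrative assumptions and the book's procedures are considerably richer):

```python
# Bare-bones Dickey-Fuller t-statistic: regress diff(y) on the lagged level
# of y. A t-statistic near zero is consistent with a unit root; a strongly
# negative one points to stationarity. Simulated data only.
import numpy as np

rng = np.random.default_rng(7)

def df_tstat(y):
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)         # OLS slope, no constant
    resid = dy - rho * ylag
    s2 = (resid @ resid) / (len(dy) - 1)      # residual variance
    return rho / np.sqrt(s2 / (ylag @ ylag))  # t-statistic on rho

n = 500
rw = np.cumsum(rng.normal(size=n))            # random walk: has a unit root
ar = np.empty(n)
ar[0] = 0.0
for t in range(1, n):
    ar[t] = 0.5 * ar[t - 1] + rng.normal()    # stationary AR(1)

t_rw, t_ar = df_tstat(rw), df_tstat(ar)       # t_ar is far more negative
```

Note that under the unit-root null this t-statistic follows the non-standard Dickey-Fuller distribution, not the usual t distribution, which is why specialised critical values are needed in practice.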
This introductory overview explores the methods, models and interdisciplinary links of artificial economics, a new way of doing economics in which the interactions of artificial economic agents are computationally simulated to study their individual and group behavior patterns. Conceptually and intuitively, and with simple examples, Mercado addresses the differences between the basic assumptions and methods of artificial economics and those of mainstream economics. He goes on to explore various disciplines from which the concepts and methods of artificial economics originate, for example cognitive science, neuroscience, artificial intelligence, evolutionary science and complexity science. Introductory discussions on several controversial issues are offered, such as the application of the concepts of evolution and complexity in economics and the relationship between artificial intelligence and the philosophy of mind. This is one of the first books to fully address artificial economics, emphasizing its interdisciplinary links and presenting in a balanced way its occasionally controversial aspects.
This selection of Professor Dhrymes's major papers combines important contributions to econometric theory with a series of well-thought-out, skilfully-executed empirical studies. The theoretical papers focus on such issues as the general linear model, simultaneous equations models, distributed lags and ancillary topics. Most of these papers originated with problems encountered in empirical research. The applied studies deal with production function and productivity topics, demand for labour, arbitrage pricing theory, demand for housing and related issues. Featuring careful exposition of key techniques combined with relevant theory and illustrations of possible applications, this book will be welcomed by academic and professional economists concerned with the use of econometric techniques and their underlying theory.
Advanced and Multivariate Statistical Methods, Seventh Edition provides conceptual and practical information regarding multivariate statistical techniques to students who do not necessarily need technical and/or mathematical expertise in these methods. This text has three main purposes. The first purpose is to facilitate conceptual understanding of multivariate statistical methods by limiting the technical nature of the discussion of those concepts and focusing on their practical applications. The second purpose is to provide students with the skills necessary to interpret research articles that have employed multivariate statistical techniques. Finally, the third purpose of AMSM is to prepare graduate students to apply multivariate statistical methods to the analysis of their own quantitative data or that of their institutions. New to the Seventh Edition: All references to SPSS have been updated to Version 27.0 of the software. A brief discussion of practical significance has been added to Chapter 1. New data sets have now been incorporated into the book and are used extensively in the SPSS examples. All the SPSS data sets utilized in this edition are available for download via the companion website. Additional resources on this site include several video tutorials/walk-throughs of the SPSS procedures. These "how-to" videos run approximately 5-10 minutes in length. Advanced and Multivariate Statistical Methods was written for use by students taking a multivariate statistics course as part of a graduate degree program, for example in psychology, education, sociology, criminal justice, social work, mass communication, and nursing.
Contemporary economists, when analyzing the economic behavior of people, need to use a diversity of research methods and modern ways of discovering knowledge. The increasing popularity of economic experiments requires the use of IT tools and quantitative methods that facilitate the analysis of the research material obtained from the experiments and the formulation of correct conclusions. This proceedings volume presents problems in contemporary economics and provides innovative solutions using a range of quantitative and experimental tools. Featuring selected contributions presented at the 2018 Computational Methods in Experimental Economics Conference (CMEE 2018), this book provides a modern economic perspective on such important issues as: sustainable development, consumption, production, national wealth, the silver economy, behavioral finance, economic and non-economic factors determining the behavior of household members, consumer preferences, social campaigns, and neuromarketing. International case studies are also offered.