Doing Statistical Analysis looks at three kinds of statistical research questions - descriptive, associational, and inferential - and shows students how to conduct statistical analyses and interpret the results. Keeping equations to a minimum, it uses a conversational style and relatable examples, such as football, COVID-19, and tourism, to aid understanding. Each chapter contains practice exercises and a section showing students how to reproduce the statistical results in the book using Stata and SPSS. Digital supplements consist of data sets in Stata, SPSS, and Excel formats, and a test bank for instructors. Its accessible approach makes this the ideal textbook for undergraduate students across the social and behavioral sciences who need to build their confidence with statistical analysis.
This volume collects seven of Marc Nerlove's previously published, classic essays on panel data econometrics written over the past thirty-five years, together with a cogent essay on the history of the subject, which began with George Biddell Airy's monograph published in 1861. Since Professor Nerlove's 1966 Econometrica paper with Pietro Balestra, panel data and methods of econometric analysis appropriate to such data have become increasingly important in the discipline. The principal factors in the research environment affecting the future course of panel data econometrics are the phenomenal growth in the computational power available to the individual researcher at his or her desktop and the ready availability of data sets, both large and small, via the Internet. The best statistical models for inference are motivated and shaped by the substantive problems at hand and by an understanding of the processes generating the data. The essays illustrate both the role of the substantive context in shaping appropriate methods of inference and the increasing importance of computer-intensive methods.
This study examines the determinants of the current account, export market share and exchange rates. The author identifies key determinants using Bayesian Model Averaging, which allows evaluation of the probability that each variable is in fact a determinant of the analysed competitiveness measure. The main implication of the results presented in the study is that increasing international competitiveness is a gradual process that requires institutional and technological changes rather than short-term adjustments in relative prices.
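To make the idea concrete: below is a minimal sketch of BIC-based Bayesian Model Averaging on synthetic data, not the author's implementation. It enumerates all subsets of a few hypothetical candidate regressors, weights each model by an approximate marginal likelihood, and reports the posterior probability that each variable belongs in the model.

```python
# Toy illustration of Bayesian Model Averaging (BMA) for variable
# selection. NOT the author's method: all subsets of candidate
# regressors are fitted by OLS, each model's marginal likelihood is
# approximated via BIC, and model weights are summed to obtain
# posterior inclusion probabilities. Variable names are invented.
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
n, names = 200, ["rel_price", "productivity", "fdi", "noise"]
X = rng.normal(size=(n, len(names)))
# Synthetic "competitiveness" outcome: only the first two variables matter.
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

def bic(y, X):
    """BIC of an OLS fit with intercept (up to an additive constant)."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return len(y) * np.log(resid @ resid / len(y)) + Z.shape[1] * np.log(len(y))

models, bics = [], []
for r in range(len(names) + 1):
    for subset in combinations(range(len(names)), r):
        models.append(subset)
        bics.append(bic(y, X[:, list(subset)]))

# exp(-BIC/2) approximates the marginal likelihood under a uniform model prior.
w = np.exp(-(np.array(bics) - min(bics)) / 2)
w /= w.sum()
for j, name in enumerate(names):
    pip = sum(wi for wi, m in zip(w, models) if j in m)
    print(f"P({name} is a determinant) ~ {pip:.2f}")
```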
This book addresses the functioning of financial markets, in particular the financial market model and its modelling. More specifically, the book provides a model of adaptive preference in the financial market, rather than a model of the adaptive financial market, which is mostly based on Popper's objective propensity for the singular, i.e., unrepeatable, event. As a result, the concept of preference, following Simon's theory of satisficing, is developed in a logical way with the goal of supplying a foundation for a robust theory of adaptive preference in financial market behavior. The book offers new insights into financial market logic and psychology: 1) advocating for the priority of behavior over information, in opposition to traditional financial market theories; 2) constructing the processes of (co)evolution between adaptive preference and the financial market using the concept of fetal reaction norms; 3) presenting a new typology of information in the financial market, aimed at proving point (1) above, as well as building an explicative mechanism of the evolutionary nature and behavior of the (real) financial market; 4) presenting sufficient and necessary principles or assumptions for developing a theory of adaptive preference in the financial market; and 5) proposing a new interpretation of the genotype-phenotype pair in the financial market model. The book's distinguishing feature is its research method, which is mainly logical rather than historical or empirical. As a result, the book aims to generate debate about the best and most scientifically beneficial method of approaching, analyzing, and modelling financial markets.
This book has two components: stochastic dynamics and random combinatorial analysis. The first discusses evolving patterns of interactions of a large but finite number of agents of several types. Changes of agent types, or of their choices or decisions over time, are formulated as jump Markov processes with suitably specified transition rates; optimisation by agents makes these rates generally endogenous. Probabilistic equilibrium selection rules are also discussed, together with the distributions of the relative sizes of the basins of attraction. As the number of agents approaches infinity, the deterministic macroeconomic relations of more conventional economic models are recovered. The second component analyses how agents form clusters of various sizes. This has applications to the sizes or shares of markets held by various agents and involves combinatorial analysis patterned after the population genetics literature. These results are shown to be relevant to distributions of returns to assets, the volatility of returns, and power laws.
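For readers unfamiliar with the first component, the following toy simulation shows what a jump Markov process of agent types looks like; the two-type setting, rate function and parameters are illustrative assumptions, not the book's models.

```python
# Minimal Gillespie-style simulation of a jump Markov process of the
# kind the blurb describes: n agents of two types, where an agent's
# switching rate depends on the current fraction of each type
# (a simple herding effect). Rates are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, k, t, t_end = 100, 30, 0.0, 50.0   # k agents of type 1 initially

def rates(k):
    frac = k / n
    up = (n - k) * (0.1 + 0.9 * frac)       # type 0 -> 1, rises with frac
    down = k * (0.1 + 0.9 * (1 - frac))     # type 1 -> 0
    return up, down

while t < t_end:
    up, down = rates(k)
    total = up + down
    t += rng.exponential(1.0 / total)       # waiting time to next jump
    k += 1 if rng.random() < up / total else -1

print(f"fraction of type-1 agents at t={t_end}: {k / n:.2f}")
```

With these herding-style rates the process tends to settle near one of two crowded configurations, a small-scale echo of the basins-of-attraction discussion above.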
Econometrics is now widely applied in the empirical study of economics. As an empirical science, econometrics uses rigorous mathematical and statistical methods to address economic problems. Understanding the methodologies of both econometrics and statistics is a crucial point of departure for studying econometrics. The primary focus of this book is to provide an understanding of the statistical properties behind econometric methods. Following the introduction in Chapter 1, Chapter 2 reviews the methodologies of both econometrics and statistics across different periods since the 1930s. Chapters 3 and 4 explain the theoretical methodologies underlying estimated equations in the simple and multiple regression models, and discuss the debates about p-values in particular. This part of the book offers the reader a richer understanding of the statistical methods behind the methodology of econometrics. Chapters 5-9 focus on regression models using time series data, traditional causal econometric models, and the latest statistical techniques. By concentrating on dynamic structural linear models such as state-space models and the Bayesian approach, the book alludes to the fact that this methodological study is not only a science but also an art. The work serves as a handy reference for anyone interested in econometrics, and is particularly relevant to students and to academic and business researchers in all quantitative analysis fields.
The Analytic Network Process (ANP), developed by Thomas Saaty in his work on multicriteria decision making, applies network structures with dependence and feedback to complex decision making. This new edition of Decision Making with the Analytic Network Process is a selection of the latest applications of ANP to economic, social and political decisions, and also to technological design. The ANP is a methodological tool that helps to organize knowledge and thinking, elicit judgments registered both in memory and in feelings, quantify those judgments and derive priorities from them, and finally synthesize these diverse priorities into a single, mathematically and logically justifiable overall outcome. In the process of deriving this outcome, the ANP also allows for the representation and synthesis of diverse opinions in the midst of discussion and debate. The book focuses on the application of the ANP in three areas: economics, the social sciences, and the linking of measurement with human values. Economists can use the ANP as an alternative to the usual mathematical models on which economics bases its quantitative thinking. For psychologists, sociologists and political scientists, the ANP offers the methodology they have long sought to quantify and derive measurements for intangibles. Finally, the book applies the ANP to provide people in the physical and engineering sciences with a quantitative method for linking hard measurement to human values; in such a process, one is able to interpret the true meaning of measurements made on a uniform scale using a unit.
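The priority-derivation step can be made concrete with a small sketch. In Saaty's framework, a reciprocal pairwise-comparison matrix of judgments is reduced to its principal eigenvector; the judgments below are invented, and the full ANP supermatrix machinery is omitted.

```python
# How judgments become priorities in Saaty's framework: a reciprocal
# pairwise-comparison matrix is reduced to its principal eigenvector.
# This shows only that single step; the full ANP assembles such local
# priorities into a weighted "supermatrix" and takes its limit, which
# is omitted here. The judgments below are made up for illustration.
import numpy as np

# A[i, j] = how strongly criterion i is preferred to criterion j (1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, principal].real)
w /= w.sum()                        # normalized priority vector

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1).
n = A.shape[0]
ci = (eigvals[principal].real - n) / (n - 1)
print("priorities:", np.round(w, 3), " CI:", round(ci, 3))
```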
Explains modern statistical disclosure control (SDC) techniques for data stewards and develops tools to implement them. Explains the logic behind modern privacy protections for researchers and how they may use publicly released data to generate valid statistical inferences, as well as the limitations imposed by SDC techniques.
The conference 'Measurement Error: Econometrics and Practice' was hosted by Aston University and organised jointly by researchers from Aston University and Lund University to highlight the enormous problems caused by measurement error in economic and financial data, problems which often go largely unnoticed. Thanks to sponsorship from Eurostat, a number of distinguished researchers were invited to present keynote lectures. Professor Arnold Zellner from the University of Chicago shared his knowledge of measurement error in general; Professor William Barnett from the University of Kansas gave a lecture on the implications of measurement error for monetary policy; and Dennis Fixler shared his knowledge of how statistical agencies deal with measurement errors. This volume is the result of a selection of high-quality papers presented at the conference and is designed to draw attention to the enormous problem of measurement error in data provided by the world's leading statistical agencies, highlighting the consequences of data error and offering solutions for dealing with such problems. The volume should appeal to economists, financial analysts and practitioners interested in studying and solving economic problems and building econometric models in everyday operations.
This volume, edited by Jeffrey Racine, Liangjun Su, and Aman Ullah, contains the latest research on nonparametric and semiparametric econometrics and statistics. These data-driven models seek to replace the "classical" parametric models of the past, which were rigid and often linear. Chapters by leading international econometricians and statisticians highlight the interface between econometrics and statistical methods for nonparametric and semiparametric procedures. They provide a balanced view of new developments in the analysis and modeling of applied sciences with cross-section, time series, panel, and spatial data sets. The major topics of the volume include: the methodology of semiparametric models and special regressor methods; inverse, ill-posed, and well-posed problems; different methodologies related to additive models; sieve regression estimators, nonparametric and semiparametric regression models, and the true error of competing approximate models; support vector machines and their modeling of default probability; series estimation of stochastic processes and some of their applications in econometrics; identification, estimation, and specification problems in a class of semilinear time series models; nonparametric and semiparametric techniques applied to nonstationary or near-nonstationary variables; the estimation of a set of regression equations; and a new approach to the analysis of nonparametric models with exogenous treatment assignment.
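As a taste of the nonparametric regression methods such volumes survey, here is a minimal Nadaraya-Watson kernel smoother on synthetic data; it is a generic textbook estimator rather than a procedure from any particular chapter, and the fixed bandwidth is an illustrative shortcut.

```python
# One of the simplest nonparametric regression estimators the field
# builds on: the Nadaraya-Watson kernel smoother. A minimal sketch on
# synthetic data; the bandwidth h is fixed by hand rather than chosen
# by cross-validation, as a serious application would do.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 2 * np.pi, 300)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

def nw(x0, x, y, h=0.3):
    """Gaussian-kernel local average of y around x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return w @ y / w.sum()

grid = np.linspace(0, 2 * np.pi, 5)
print([round(nw(g, x, y), 2) for g in grid])   # roughly tracks sin(grid)
```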
The book provides an integrated approach to risk sharing, risk spreading and efficient regulation through principal-agent models. It emphasizes the role of information asymmetry and risk sharing in contracts as an alternative to transaction cost considerations. It examines how contracting, as an institutional mechanism for conducting transactions, spreads risks while attempting consolidation. It further highlights the shifting emphasis in contracts from Coasian transaction cost saving to risk sharing, shows how this shift creates difficulties associated with risk spreading, and emphasizes the need for efficient regulation of contracts at various levels. Each chapter is structured around a principal-agent model, and all chapters incorporate adverse selection (and exogenous randomness) arising from information asymmetry, as well as moral hazard (and endogenous randomness) due to self-interest-seeking behavior on the part of the participants.
Now in its third edition, Essential Econometric Techniques: A Guide to Concepts and Applications is a concise, student-friendly textbook which provides an introductory grounding in econometrics, with an emphasis on the proper application and interpretation of results. Drawing on the author's extensive teaching experience, this book offers intuitive explanations of concepts such as heteroskedasticity and serial correlation, and provides step-by-step overviews of each key topic. This new edition contains more applications, brings in new material including a dedicated chapter on panel data techniques, and moves the theoretical proofs to appendices. After Chapter 7, students will be able to design and conduct rudimentary econometric research. The next chapters cover multicollinearity, heteroskedasticity, and autocorrelation, followed by techniques for time-series analysis and panel data. Excel data sets for the end-of-chapter problems are available as a digital supplement. A solutions manual is also available for instructors, as well as PowerPoint slides for each chapter. Essential Econometric Techniques shows students how economic hypotheses can be questioned and tested using real-world data, and is the ideal supplementary text for all introductory econometrics courses.
A comprehensive account of economic size distributions around the world and throughout the years. In the course of the past 100 years, economists and applied statisticians have developed a remarkably diverse variety of income distribution models, yet no single resource convincingly accounts for all of these models, analyzing their strengths and weaknesses, similarities and differences. Statistical Size Distributions in Economics and Actuarial Sciences is the first collection to systematically investigate a wide variety of parametric models that deal with income, wealth, and related notions. Christian Kleiber and Samuel Kotz survey, complement, compare, and unify all of the disparate models of income distribution, highlighting at times a lack of coordination between them that can result in unnecessary duplication. Considering models from literature in eight languages and from all continents, the authors discuss the social and economic implications of each, as well as distributions of the size of loss in actuarial applications. A wide range of specific parametric models is covered in detail.
Three appendices provide brief biographies of some of the leading players along with the basic properties of each of the distributions. Actuaries, economists, market researchers, social scientists, and physicists interested in econophysics will find Statistical Size Distributions in Economics and Actuarial Sciences to be a truly one-of-a-kind addition to the professional literature.
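The kind of exercise the book systematizes can be sketched in a few lines: fit competing parametric size distributions to income-like data by maximum likelihood and compare their fit. The data below are synthetic, and the three candidate families are chosen for illustration only.

```python
# Fit competing parametric size distributions to income-like data and
# compare them, in the spirit of the book. A minimal sketch using
# scipy's MLE fitters on synthetic data; the book's treatment of
# candidate families and diagnostics is far richer.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
income = rng.lognormal(mean=10.0, sigma=0.8, size=5000)   # synthetic "incomes"

fits = {
    "lognormal": stats.lognorm(*stats.lognorm.fit(income, floc=0)),
    "pareto": stats.pareto(*stats.pareto.fit(income, floc=0)),
    "gamma": stats.gamma(*stats.gamma.fit(income, floc=0)),
}
for name, dist in fits.items():
    ll = dist.logpdf(income).sum()           # in-sample log-likelihood
    print(f"{name:10s} log-likelihood = {ll:,.0f}")
```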
This book reflects the state of the art in nonlinear economic dynamics, financial market modelling and quantitative finance. It contains eighteen papers, with topics ranging from disequilibrium macroeconomics, monetary dynamics, monopoly, and financial market and limit order market models with boundedly rational heterogeneous agents, to estimation, time series modelling and empirical analysis, and from risk management of interest-rate products, futures price volatility and American option pricing with stochastic volatility, to the evaluation of risk and derivatives in the electricity market. The book illustrates some of the most recent research tools in these areas and will be of interest to economists working in economic dynamics and financial market modelling, to mathematicians interested in applying complexity theory to economics and finance, and to market practitioners and researchers in quantitative finance interested in limit order, futures and electricity market modelling, derivative pricing and risk management.
This book is devoted to biased sampling problems (also called choice-based sampling in econometrics parlance) and over-identified parameter estimation problems. Biased sampling problems appear in many areas of research, including medicine, epidemiology and public health, the social sciences and economics. The book addresses a range of important topics, including case-control studies, causal inference, missing data problems, meta-analysis, renewal processes and length-biased sampling problems, capture-recapture problems, case-cohort studies, exponential tilting, genetic mixture models, and more. The goal of this book is to make it easier for Ph.D. students and new researchers to get started in this research area. It will be of interest to all those who work in the health, biological, social and physical sciences, as well as those interested in survey methodology and other areas of statistical science.
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models, and optimal estimation of its parameters through a suitable choice of design is of great importance, as such choices allow experimenters to extract maximum information about the unknown model parameters. The main emphasis of this monograph is to start from an assumed covariate model in combination with some standard ANOVA set-ups, such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, and treatment control designs, and to discuss the nature and availability of optimal covariate designs. In some situations, optimal estimation of both the ANOVA and the regression parameters is provided. Global optimality and D-optimality criteria are mainly used in selecting the designs. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for the construction of optimum designs using Hadamard matrices, the Kronecker product, the Rao-Khatri product and mixed orthogonal arrays, to name a few.
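One of the combinatorial tools mentioned above can be illustrated directly: the classical Sylvester construction builds Hadamard matrices by repeated Kronecker products. This is the standard textbook construction, not a specific design from the monograph.

```python
# The Kronecker-product step behind the classical Sylvester
# construction of Hadamard matrices, one of the combinatorial tools
# used for building optimal covariate designs. Standard construction,
# not a specific design from the book.
import numpy as np

H2 = np.array([[1, 1],
               [1, -1]])

def sylvester(k):
    """Return the 2^k x 2^k Sylvester Hadamard matrix."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.kron(H2, H)
    return H

H8 = sylvester(3)
# Defining Hadamard property: H H^T = n I.
assert np.array_equal(H8 @ H8.T, 8 * np.eye(8, dtype=int))
print(H8)
```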
With the rapidly advancing fields of data analytics and computational statistics, it is important to keep up with current trends, methodologies, and applications. This book investigates the role of data mining in computational statistics for machine learning. It offers applications that can be used in various domains and examines the role of transformation functions in optimizing problem statements. Data Analytics, Computational Statistics, and Operations Research for Engineers: Methodologies and Applications presents applications of computationally intensive methods, inference techniques, and survival analysis models. It discusses how data mining extracts information and how machine learning improves the computational model based on the new information. This reference work will interest students, professionals, and researchers working in the areas of data mining, computational statistics, operations research, and machine learning.
The main theme of this volume is credit risk and credit derivatives. Recent developments in financial markets show that appropriate modeling and quantification of credit risk is fundamental in the context of modern complex structured financial products. The reader will find several points of view on credit risk as seen from the perspectives of econometrics and financial mathematics. The volume consists of eleven contributions by both practitioners and theoreticians with expertise in financial markets in general, and in econometrics and mathematical finance in particular. The challenge of modeling defaults and their correlations is addressed, and new results on copula, reduced-form and structural models, and the top-down approach are presented. Coming after the so-called subprime crisis that hit global markets in the summer of 2007, the volume is very timely and will be useful to researchers in the area of credit risk.
This book is an ideal introduction for beginning students of econometrics, assuming only basic familiarity with matrix algebra and calculus. It features practical questions which can be answered using econometric methods and models. Focusing on a limited number of the most basic and widely used methods, the book reviews the basics of econometrics before concluding with a number of recent empirical case studies. The volume is an intuitive illustration of what econometricians do when faced with practical questions.
Originally published in 1984. This book examines two important dimensions of efficiency in the foreign exchange market using econometric techniques. It responds to the trend in macroeconomics towards re-examining theories of exchange rate determination following the erratic behaviour of exchange rates in the late 1970s. In particular, the text looks at the relation between spot and forward exchange rates and at the term structure of the forward premium, both of which require a joint test of market efficiency and the equilibrium model. The approaches used are the regression of spot rates on lagged forward rates and an explicit time series analysis of the spot and forward rates, using data from Canada, the United Kingdom, the Netherlands, Switzerland and Germany.
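The first of these approaches is easy to sketch. Under the joint null of market efficiency and the equilibrium model, regressing the (log) spot rate on the lagged (log) forward rate should give an intercept of zero and a slope of one; synthetic series stand in for the historical data here.

```python
# The classic efficiency regression: s_t = a + b * f_{t-1} + e_t, where
# s_t is the log spot rate and f_{t-1} the log one-period forward rate.
# Under the joint null of market efficiency and the equilibrium model,
# a = 0 and b = 1. Synthetic data replace the historical series used in
# the book.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 300
s = np.cumsum(rng.normal(scale=0.01, size=T))    # log spot rate (random walk)
f = s + rng.normal(scale=0.005, size=T)          # forward rate = noisy forecast

X = sm.add_constant(f[:-1])                      # lagged forward rate
model = sm.OLS(s[1:], X).fit()
print(model.params)                              # intercept ~ 0, slope ~ 1
print(model.t_test("x1 = 1"))                    # test the b = 1 restriction
```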
This monograph provides a unified and comprehensive treatment of an order-theoretic fixed point theory in partially ordered sets and of its various useful interactions with topological structures. The material progresses systematically, presenting preliminaries before moving on to more advanced topics. In the treatment of the applications, a wide range of mathematical theories and methods from nonlinear analysis and integration theory are applied; an outline of these is given in an appendix chapter to make the book self-contained. Graduate students and researchers in nonlinear analysis, pure and applied mathematics, game theory and mathematical economics will find this book useful.
This book explores Latin American inequality broadly, in terms of its impact on the region's development, and specifically, with two country studies from Peru on earnings inequality and on child labor as a consequence of inequality. The first chapter provides a substantial, recently updated analysis of the critical thesis of deindustrialization for Latin America. The second chapter provides an approach to measuring labor market discrimination that departs from the current treatment of unobservable influences in the literature. The third chapter examines the much-neglected topic of child labor using a panel data set collected specifically on children. The book is appropriate for courses on economic development and labor economics, and for anyone interested in inequality, development and applied econometrics.
This book addresses both theoretical developments in and practical applications of econometric techniques to finance-related problems. It includes selected, edited outcomes of the International Econometric Conference of Vietnam (ECONVN2018), held at Banking University, Ho Chi Minh City, Vietnam, on January 15-16, 2018. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. An extremely important part of economics is finance: a financial crisis can bring the whole economy to a standstill and, vice versa, a smart financial policy can dramatically boost economic development. It is therefore crucial to be able to apply the mathematical techniques of econometrics to financial problems. Such applications are a growing field, with many interesting results - and an even larger number of challenges and open problems.
This book and its companion volume present a collection of papers by Clive W.J. Granger. His contributions to economics and econometrics, many of them seminal, span more than four decades and touch on all aspects of time series analysis. The papers assembled in this volume explore topics in spectral analysis, seasonality, nonlinearity, methodology, and forecasting. Those in the companion volume investigate themes in causality, integration and cointegration, and long memory. The two volumes contain the original articles as well as an introduction written by the editors.
This book presents selected peer-reviewed contributions from the International Work-Conference on Time Series, ITISE 2017, held in Granada, Spain, on September 18-20, 2017. It discusses topics in time series analysis and forecasting, including advanced mathematical methodology, computational intelligence methods for time series, dimensionality reduction and similarity measures, econometric models, energy time series forecasting, forecasting in real problems, online learning in time series, and high-dimensional and complex/big data time series. The ITISE conference series provides a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing computer science, mathematics, statistics and econometrics.