This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Unlike other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to practically handle violations of the classical linear regression assumptions, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences.
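As an illustration of the simple linear regression such a book opens with (this sketch and its toy data are invented here, not taken from the book), an ordinary least-squares line can be fitted in a few lines of plain Python:

```python
# Ordinary least squares for simple linear regression: y = a + b*x.
# The slope is cov(x, y) / var(x); the intercept follows from the means.
from statistics import mean

def ols(x, y):
    """Return (intercept, slope) of the least-squares line through (x, y)."""
    mx, my = mean(x), mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

# Toy data lying exactly on y = 1 + 2x, so the fit recovers those coefficients.
x = [0, 1, 2, 3, 4]
y = [1, 3, 5, 7, 9]
a, b = ols(x, y)
print(a, b)  # 1.0 2.0
```

Statistics packages such as Stata, R or SPSS report exactly these coefficients (plus standard errors and diagnostics) for the two-variable case.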
This book presents the reader with new operators and matrices that arise in the area of matrix calculus. The properties of these mathematical concepts are investigated and linked with zero-one matrices such as the commutation matrix. Elimination and duplication matrices are revisited and partitioned into submatrices. Studying the properties of these submatrices facilitates achieving new results for the original matrices themselves. Different concepts of matrix derivatives are presented and transformation principles linking these concepts are obtained. One of these concepts is used to derive new matrix calculus results, some involving the new operators and others the derivatives of the operators themselves. The last chapter contains applications of matrix calculus, including optimization, differentiation of log-likelihood functions, iterative interpretations of maximum likelihood estimators, and a Lagrangian multiplier test for endogeneity.
This textbook provides a coherent introduction to the main concepts and methods of one-parameter statistical inference. Intended for students of Mathematics taking their first course in Statistics, the focus is on Statistics for Mathematicians rather than on Mathematical Statistics. The goal is not to focus on the mathematical/theoretical aspects of the subject, but rather to provide an introduction to the subject tailored to the mindset and tastes of Mathematics students, who are sometimes turned off by the informal nature of Statistics courses. This book can be used as the basis for an elementary semester-long first course on Statistics with a firm sense of direction that does not sacrifice rigor. The deeper goal of the text is to attract the attention of promising Mathematics students.
With a new author team contributing decades of practical experience, this fully updated and thoroughly classroom-tested second edition textbook prepares students and practitioners to create effective forecasting models and master the techniques of time series analysis. Taking a practical and example-driven approach, this textbook summarises the most critical decisions, techniques and steps involved in creating forecasting models for business and economics. Students are led through the process with an entirely new set of carefully developed theoretical and practical exercises. Chapters examine the key features of economic time series, univariate time series analysis, trends, seasonality, aberrant observations, conditional heteroskedasticity and ARCH models, non-linearity and multivariate time series, making this a complete practical guide. A companion website with downloadable datasets, exercises and lecture slides rounds out the full learning package.
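As a minimal illustration of the univariate time series analysis such a course builds on (the code and data below are invented for illustration, not examples from the book), an AR(1) model can be fitted by regressing a series on its own first lag and iterated forward for forecasts:

```python
# Fit an AR(1) model y_t = c + phi * y_{t-1} + e_t by OLS on the first lag;
# multi-step forecasts iterate the fitted recursion forward.
from statistics import mean

def fit_ar1(y):
    """Return (c, phi) from an OLS regression of y[1:] on y[:-1]."""
    x, z = y[:-1], y[1:]
    mx, mz = mean(x), mean(z)
    phi = sum((a - mx) * (b - mz) for a, b in zip(x, z)) / \
          sum((a - mx) ** 2 for a in x)
    return mz - phi * mx, phi

def forecast_ar1(y, steps=1):
    """Iterate the fitted recursion forward from the last observation."""
    c, phi = fit_ar1(y)
    out, last = [], y[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# A noiseless series generated by y_t = 1 + 0.5*y_{t-1}, so the fit is exact.
y = [0, 1, 1.5, 1.75, 1.875]
c, phi = fit_ar1(y)   # (1.0, 0.5)
```

Real forecasting models add the diagnostics the book covers (trends, seasonality, ARCH effects); this shows only the core lag regression.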
This book assesses how efficient primary and upper primary education is across different states of India considering both output oriented and input oriented measures of technical efficiency. It identifies the most important factors that could produce differential efficiency among the states, including the effects of central grants, school-specific infrastructures, social indicators and policy variables, as well as state-specific factors like per-capita net-state-domestic-product from the service sector, inequality in distribution of income (Gini coefficient), the percentage of people living below the poverty line and the density of population. The study covers the period 2005-06 to 2010-11 and all the states and union territories of India, which are categorized into two separate groups, namely: (i) General Category States (GCS); and (ii) Special Category States (SCS) and Union Territories (UT). It uses non-parametric Data Envelopment Analysis (DEA) and obtains the Technology Closeness Ratio (TCR), measuring whether the maximum output producible from an input bundle by a school within a given group is as high as what could be produced if the school could choose to join the other group. The major departure of this book is its approach to estimating technical efficiency (TE), which does not use a single frontier encompassing all the states and UT, as is done in the available literature. Rather, this method assumes that GCS, SCS and UT are not homogeneous and operate under different fiscal and economic conditions.
Focusing on what actuaries need in practice, this introductory account provides readers with essential tools for handling complex problems and explains how simulation models can be created, used and re-used (with modifications) in related situations. The book begins by outlining the basic tools of modelling and simulation, including a discussion of the Monte Carlo method and its use. Part II deals with general insurance and Part III with life insurance and financial risk. Algorithms that can be implemented on any programming platform are spread throughout and a program library written in R is included. Numerous figures and experiments with R-code illustrate the text. The author's non-technical approach is ideal for graduate students, the only prerequisites being introductory courses in calculus and linear algebra, probability and statistics. The book will also be of value to actuaries and other analysts in the industry looking to update their skills.
This book is a comprehensive introduction to simulation and modelling techniques and their application in the management of organisations. The book is rooted in a thorough understanding of systems theory as applied to organisations and focuses on how this theory applies to the econometric models used in managing them. The econometric models in this book employ linear and dynamic programming, graph theory, queuing theory, game theory and related tools, and are presented and analysed in various fields of application, such as investment management, stock management, strategic decision making, management of production costs and of the lifecycle costs of quality and non-quality products, and production quality management.
The case-control approach is a powerful method for investigating factors that may explain a particular event. It is extensively used in epidemiology to study disease incidence, one of the best-known examples being Bradford Hill and Doll's investigation of the possible connection between cigarette smoking and lung cancer. More recently, case-control studies have been increasingly used in other fields, including sociology and econometrics. With a particular focus on statistical analysis, this book is ideal for applied and theoretical statisticians wanting an up-to-date introduction to the field. It covers the fundamentals of case-control study design and analysis as well as more recent developments, including two-stage studies, case-only studies and methods for case-control sampling in time. The latter have important applications in large prospective cohorts which require case-control sampling designs to make efficient use of resources. More theoretical background is provided in an appendix for those new to the field.
This edited book contains several state-of-the-art papers devoted to the econometrics of risk. Some papers provide theoretical analysis of the corresponding mathematical, statistical, computational, and economic models. Other papers describe applications of novel risk-related econometric techniques to real-life economic situations. The book presents recently developed methods, in particular methods using non-Gaussian heavy-tailed distributions, methods using non-Gaussian copulas to properly take into account dependence between different quantities, methods taking into account imprecise ("fuzzy") expert knowledge, and many other innovative techniques. This versatile volume helps practitioners learn how to apply new techniques of the econometrics of risk, and researchers to further improve the existing models and to come up with new ideas on how best to take economic risks into account.
Economic Modeling Using Artificial Intelligence Methods examines the application of artificial intelligence methods to model economic data. Traditionally, economic data have been modeled in the linear domain, where the principles of superposition are valid. The application of artificial intelligence to economic modeling allows for flexible, multi-order, non-linear modeling. In addition, game theory has largely been applied in economic modeling; however, the inherent limitations of game theory when dealing with many-player games encourage the use of multi-agent systems for modeling economic phenomena. The artificial intelligence techniques used to model economic data include multi-layer perceptron neural networks, radial basis functions, support vector machines, rough sets, genetic algorithms, particle swarm optimization, simulated annealing, multi-agent systems, incremental learning and fuzzy networks. Signal processing techniques are explored to analyze economic data, namely time domain methods, time-frequency domain methods and fractal dimension approaches. Interesting economic problems such as causality versus correlation, simulating the stock market, modeling and controlling inflation, option pricing, modeling economic growth and portfolio optimization are examined. The relationship between economic dependency and interstate conflict is explored, and knowledge of how economics can be used to foster peace - and vice versa - is investigated. The book deals with the issue of causality in the non-linear domain and applies automatic relevance determination, the evidence framework, the Bayesian approach and Granger causality to understand causality and correlation. Economic Modeling Using Artificial Intelligence Methods makes an important contribution to the area of econometrics, and is a valuable source of reference for graduate students, researchers and financial practitioners.
Though globalisation of the world economy is currently a powerful force, people’s international mobility appears to still be very limited. The goal of this book is to improve our knowledge of the true effects of migration flows. It includes contributions by prominent academic researchers analysing the socio-economic impact of migration in a variety of contexts: interconnection of people and trade flows, causes and consequences of capital remittances, understanding the macroeconomic impact of migration and the labour market effects of people’s flows. The latest analytical methodologies are employed in all chapters, while interesting policy guidelines emerge from the investigations. The style of the volume makes it accessible for both non-experts and advanced readers interested in this hot topic of today’s world.
Mathematical Statistics for Economics and Business, Second Edition, provides a comprehensive introduction to the principles of mathematical statistics which underpin statistical analyses in the fields of economics, business, and econometrics. The selection of topics in this textbook is designed to provide students with a conceptual foundation that will facilitate a substantial understanding of statistical applications in these subjects. This new edition has been updated throughout and now also includes a downloadable Student Answer Manual containing detailed solutions to half of the over 300 end-of-chapter problems. After introducing the concepts of probability, random variables, and probability density functions, the author develops the key concepts of mathematical statistics, most notably: expectation, sampling, asymptotics, and the main families of distributions. The latter half of the book is then devoted to the theories of estimation and hypothesis testing with associated examples and problems that indicate their wide applicability in economics and business. Features of the new edition include: a reorganization of topic flow and presentation to facilitate reading and understanding; inclusion of additional topics of relevance to statistics and econometric applications; a more streamlined and simple-to-understand notation for multiple integration and multiple summation over general sets or vector arguments; updated examples; new end-of-chapter problems; a solution manual for students; a comprehensive answer manual for instructors; and a theorem and definition map. This book has evolved from numerous graduate courses in mathematical statistics and econometrics taught by the author, and will be ideal for students beginning graduate study as well as for advanced undergraduates.
This book investigates the relationship between environmental degradation and income, focusing on carbon dioxide (CO2) emissions from around the world, to explore the possibility of sustainable development under global warming. Although many researchers have tackled this problem by estimating the Environmental Kuznets Curve (EKC), unlike in the case of sulfur dioxide emissions there is little consensus about whether an EKC holds for CO2 emissions, making the EKC one of the most controversial issues in environmental economics. The book makes three rigorous contributions. First, an unbalanced panel dataset covering over 150 countries with the latest CO2 emission data between 1960 and 2010 is constructed. Second, based on this dataset, the CO2 emission-income relationship is analyzed using strict econometric methods such as the dynamic panel model. Third, since factors other than income are often found to affect CO2 emissions, several variables are added to the estimation model to examine the effects of changes in industrial structure, energy composition, and overseas trade on CO2 emissions.
Analyze key indicators more accurately to make smarter market moves. The Economic Indicator Handbook helps investors more easily evaluate economic trends, to better inform investment decision making and other key strategic financial planning. Written by a Bloomberg Senior Economist, this book presents a visual distillation of the indicators every investor should follow, with clear explanation of how they're measured, what they mean, and how that should inform investment thinking. The focus on graphics, professional application, Bloomberg terminal functionality, and practicality makes this guide a quick, actionable read that could immediately start improving investment outcomes. Coverage includes gross domestic product, employment data, industrial production, new residential construction, consumer confidence, retail and food service sales, and commodities, plus guidance on the secret indicators few economists know or care about. Past performance can predict future results if you know how to read the indicators. Modern investing requires a careful understanding of the macroeconomic forces that lift and topple markets on a regular basis, and how they shift to move entire economies. This book is a visual guide to recognizing these forces and tracking their behavior, helping investors identify entry and exit points that maximize profit and minimize loss.
* Quickly evaluate economic trends
* Make more informed investment decisions
* Understand the most essential indicators
* Translate predictions into profitable actions
Savvy market participants know how critical certain indicators are to the formulation of a profitable, effective market strategy. A daily indicator check can inform day-to-day investing, and long-term tracking can result in a stronger, more robust portfolio. For the investor who knows that better information leads to better outcomes, The Economic Indicator Handbook is an exceptionally useful resource.
The most authoritative and up-to-date core econometrics textbook available. Econometrics is the quantitative language of economic theory, analysis, and empirical work, and it has become a cornerstone of graduate economics programs. Econometrics provides graduate and PhD students with an essential introduction to this foundational subject in economics and serves as an invaluable reference for researchers and practitioners. This comprehensive textbook teaches fundamental concepts, emphasizes modern, real-world applications, and gives students an intuitive understanding of econometrics.
- Covers the full breadth of econometric theory and methods with mathematical rigor while emphasizing intuitive explanations that are accessible to students of all backgrounds
- Draws on integrated, research-level datasets, provided on an accompanying website
- Discusses linear econometrics, time series, panel data, nonparametric methods, nonlinear econometric models, and modern machine learning
- Features hundreds of exercises that enable students to learn by doing
- Includes in-depth appendices on matrix algebra and useful inequalities and a wealth of real-world examples
- Can serve as a core textbook for a first-year PhD course in econometrics and as a follow-up to Bruce E. Hansen's Probability and Statistics for Economists
The Analytic Hierarchy Process (AHP) has been one of the foremost mathematical methods for decision making with multiple criteria and has been widely studied in the operations research literature as well as applied to solve countless real-world problems. This book is meant to introduce and strengthen the readers' knowledge of the AHP, no matter how familiar they may be with the topic. It provides a concise, yet self-contained, introduction to the AHP that uses a novel and more pedagogical approach. It begins with an introduction to the principles of the AHP, covering the critical points of the method, as well as some of its applications. Next, the book explores further aspects of the method, including the derivation of the priority vector, the estimation of inconsistency, and the use of AHP for group decisions. Each of these is introduced by relaxing initial assumptions. Furthermore, the book covers extensions of the AHP, which are typically neglected in elementary expositions of the method. Such extensions concern different numerical representations of preferences and the interval and fuzzy representations of preferences to account for uncertainty. Throughout the exposition, an eye is kept on the most recent developments of the method.
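As an illustration of the priority-vector derivation mentioned above (a sketch of the common principal-eigenvector formulation, with an invented comparison matrix, not an example from the book), power iteration recovers the criterion weights from a pairwise comparison matrix:

```python
# AHP priority vector as the normalised principal eigenvector of a pairwise
# comparison matrix M (M[i][j] is how strongly criterion i beats criterion j),
# approximated here by plain power iteration.
def priority_vector(M, iters=100):
    n = len(M)
    w = [1.0 / n] * n  # start from equal weights
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [vi / s for vi in v]  # renormalise so the weights sum to 1
    return w

# A perfectly consistent 3x3 matrix built from underlying weights 0.6, 0.3, 0.1
# (each entry is the ratio of the two weights), so the weights are recovered.
M = [[1,   2,   6],
     [1/2, 1,   3],
     [1/6, 1/3, 1]]
w = priority_vector(M)
print([round(x, 3) for x in w])  # [0.6, 0.3, 0.1]
```

For an inconsistent matrix the same iteration converges to the principal eigenvector, and the gap between the dominant eigenvalue and n underlies the inconsistency estimation the book discusses.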
Written for a broad audience this book offers a comprehensive account of early warning systems for hydro meteorological disasters such as floods and storms, and for geological disasters such as earthquakes. One major theme is the increasingly important role in early warning systems played by the rapidly evolving fields of space and information technology. The authors, all experts in their respective fields, offer a comprehensive and in-depth insight into the current and future perspectives for early warning systems. The text is aimed at decision-makers in the political arena, scientists, engineers and those responsible for public communication and dissemination of warnings.
This book investigates the existence of stochastic and deterministic convergence of real output per worker and the sources of output (physical capital per worker, human capital per worker, total factor productivity -TFP- and average annual hours worked) in 21 OECD countries over the period 1970-2011. Towards this end, the authors apply a large battery of panel unit root and stationarity tests, all of which are robust to the presence of cross-sectional dependence. The evidence fails to provide clear-cut evidence of convergence dynamics either in real GDP per worker or in the series of the sources of output. Due to some limitations associated with second-generation panel unit root and stationarity tests, the authors further use the more flexible PANIC approach which provides evidence that real GDP per worker, real physical capital per worker, human capital and average annual hours exhibit some degree of deterministic convergence, whereas TFP series display a high degree of stochastic convergence.
This volume addresses advanced DEA methodology and techniques developed for modeling unique new performance evaluation issues. Many numerical examples, real management cases and verbal descriptions make it very valuable for researchers and practitioners.
The Who, What, and Where of America is designed to provide a sampling of key demographic information. It covers the United States, every state, each metropolitan statistical area, and all the counties and cities with a population of 20,000 or more.
Who: Age, Race and Ethnicity, and Household Structure
What: Education, Employment, and Income
Where: Migration, Housing, and Transportation
Each part is preceded by highlights and ranking tables that show how areas diverge from the national norm. These research aids are invaluable for understanding data from the ACS and for highlighting what it tells us about who we are, what we do, and where we live. Each topic is divided into four tables revealing the results of the data collected from different types of geographic areas in the United States, generally with populations greater than 20,000.
· Table A. States
· Table B. Counties
· Table C. Metropolitan Areas
· Table D. Cities
In this edition, you will find social and economic estimates on the ways American communities are changing with regard to the following:
· Age and race
· Health care coverage
· Marital history
· Educational attainment
· Income and occupation
· Commute time to work
· Employment status
· Home values and monthly costs
· Veteran status
· Size of home or rental unit
This title is the latest in the County and City Extra Series of publications from Bernan Press. Other titles include County and City Extra; County and City Extra: Special Decennial Census Edition; and Places, Towns, and Townships.
This book presents modern developments in time series econometrics that are applied to macroeconomic and financial time series, bridging the gap between methods and realistic applications. It presents the most important approaches to the analysis of time series, which may be stationary or nonstationary. Modelling and forecasting univariate time series is the starting point. For multiple stationary time series, Granger causality tests and vector autoregressive models are presented. As the modelling of nonstationary uni- or multivariate time series is most important for real applied work, unit root and cointegration analysis as well as vector error correction models are a central topic. Tools for analysing nonstationary data are then transferred to the panel framework. Modelling the (multivariate) volatility of financial time series with autoregressive conditional heteroskedastic models is also treated.
Students in both social and natural sciences often seek regression methods to explain the frequency of events, such as visits to a doctor, auto accidents, or new patents awarded. This book provides the most comprehensive and up-to-date account of models and methods to interpret such data. The authors have conducted research in the field for more than twenty-five years. In this book, they combine theory and practice to make sophisticated methods of analysis accessible to researchers and practitioners working with widely different types of data and software in areas such as applied statistics, econometrics, marketing, operations research, actuarial studies, demography, biostatistics, and quantitative social sciences. The book may be used as a reference work on count models or by students seeking an authoritative overview. Complementary material in the form of data sets, template programs, and bibliographic resources can be accessed on the Internet through the authors' homepages. This second edition is an expanded and updated version of the first, with new empirical examples and more than one hundred new references added. The new material includes new theoretical topics, an updated and expanded treatment of cross-section models, coverage of bootstrap-based and simulation-based inference, expanded treatment of time series, multivariate and panel data, expanded treatment of endogenous regressors, coverage of quantile count regression, and a new chapter on Bayesian methods.
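As a minimal illustration of the Poisson building block behind such count models (the toy counts below are invented, not an example from the book), the maximum-likelihood estimate of a constant Poisson rate is simply the sample mean, which can be checked directly against the log-likelihood:

```python
# For i.i.d. Poisson counts y_1..y_n, the log-likelihood in the rate `lam` is
#   sum_i (y_i * log(lam) - lam - log(y_i!)),
# and it is maximised at lam = mean(y); full count regression models make
# log(lam) a linear function of covariates instead of a constant.
from math import log, lgamma

def poisson_loglik(lam, counts):
    """Log-likelihood of a constant Poisson rate `lam` given observed counts."""
    return sum(y * log(lam) - lam - lgamma(y + 1) for y in counts)

counts = [0, 1, 1, 2, 3, 5]      # invented event counts
mle = sum(counts) / len(counts)  # 2.0, the sample mean

# The log-likelihood at the sample mean beats nearby candidate rates.
assert poisson_loglik(mle, counts) > poisson_loglik(mle - 0.1, counts)
assert poisson_loglik(mle, counts) > poisson_loglik(mle + 0.1, counts)
```

Overdispersion relative to this baseline (variance exceeding the mean) is what motivates the negative binomial and other extensions the book develops.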
One of the best known statisticians of the 20th century, Frederick Mosteller has inspired numerous statisticians and other scientists by his creative approach to statistics and its applications. This volume collects 40 of his most original and influential papers, capturing the variety and depth of his writings. It is hoped that sharing these writings with a new generation of researchers will inspire them to build upon his insights and efforts.
The volatility of financial returns changes over time and, for the last thirty years, Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models have provided the principal means of analyzing, modeling, and monitoring such changes. Taking into account that financial returns typically exhibit heavy tails (that is, extreme values can occur from time to time), Andrew Harvey's new book shows how a small but radical change in the way GARCH models are formulated leads to a resolution of many of the theoretical problems inherent in the statistical theory. The approach can also be applied to other aspects of volatility, such as those arising from data on the range of returns and the time between trades. Furthermore, the more general class of Dynamic Conditional Score models extends to robust modeling of outliers in the levels of time series and to the treatment of time-varying relationships. As such, there are applications not only to financial data but also to macroeconomic time series and to time series in other disciplines. The statistical theory draws on basic principles of maximum likelihood estimation and, by doing so, leads to an elegant and unified treatment of nonlinear time-series modeling. The practical value of the proposed models is illustrated by fitting them to real data sets.
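As an illustration of the GARCH(1,1) recursion at the heart of the models Harvey modifies (the parameter values below are invented, and this sketches only the standard recursion, not Harvey's score-driven variant), the conditional variance responds to past squared returns and then decays geometrically:

```python
# GARCH(1,1): the conditional variance follows
#   sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1},
# so volatility jumps after large returns and mean-reverts afterwards.
def garch11_variance(returns, omega, alpha, beta):
    """Return the conditional-variance path, started at the unconditional variance."""
    sigma2 = [omega / (1 - alpha - beta)]  # long-run (unconditional) variance
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r ** 2 + beta * sigma2[-1])
    return sigma2

# With no shocks (all-zero returns) the variance decays monotonically from the
# unconditional level toward omega / (1 - beta).
path = garch11_variance([0.0] * 5, omega=0.1, alpha=0.1, beta=0.8)
```

The heavy-tail problem the book addresses arises because the `r ** 2` term lets a single extreme return dominate this recursion; score-driven models replace it with a bounded function of the return.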