This book bridges the gap between economic theory and spatial econometric techniques. It is accessible to readers with only a basic statistical background and no prior knowledge of spatial econometric methods, and it offers a comprehensive treatment of the topic, motivating the reader with examples and analysis. The volume gives a rigorous treatment of the basic spatial linear model and discusses the violations of the classical regression assumptions that occur when dealing with spatial data.
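For orientation, the spatial lag (SAR) specification below is one standard formulation of a basic spatial linear model; the notation is illustrative, since the blurb does not say which variant the book adopts:

% Spatial lag (SAR) model: W is an n x n spatial weights matrix and
% \rho the spatial autoregressive parameter (illustrative notation).
\[
  y = \rho W y + X\beta + \varepsilon,
  \qquad \varepsilon \sim N(0, \sigma^2 I_n)
\]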
Praise for the first edition:

"[This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework." - Statistics in Medicine

"What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters." - MAA Reviews

Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce the examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models.

About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
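As a flavor of the recursive estimation the book describes, below is a minimal Kalman filter for a local-level (random-walk-plus-noise) state-space model, sketched in Python with NumPy. The book itself works in R with the TSSS package, so this function and its toy data are hypothetical stand-ins, not the book's code:

import numpy as np

def kalman_filter_local_level(y, sigma_w2, sigma_v2, x0=0.0, p0=1e6):
    """Kalman filter for the local-level model
    x_t = x_{t-1} + w_t,  w_t ~ N(0, sigma_w2)   (state equation)
    y_t = x_t + v_t,      v_t ~ N(0, sigma_v2)   (observation equation)
    Returns the filtered state means."""
    x, p = x0, p0
    filtered = []
    for obs in y:
        # Prediction: the state is a random walk, so the mean carries over
        p = p + sigma_w2
        # Update: blend the prediction with the new observation
        k = p / (p + sigma_v2)      # Kalman gain
        x = x + k * (obs - x)
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)

# Toy usage: recover a noisy random walk
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.5, size=200))
y = truth + rng.normal(0.0, 2.0, size=200)
estimate = kalman_filter_local_level(y, sigma_w2=0.25, sigma_v2=4.0)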
Occupational licensure, including regulation of the professions, dates back to the medieval period. While the guilds that performed this regulatory function have long since vanished, professional regulation continues to this day. For instance, in the United States, 22 per cent of workers must hold licenses simply to do their jobs. While long-established professions have more settled regulatory paradigms, the case studies in Paradoxes of Professional Regulation explore other professions, taking note of incompetent services and the serious risks they pose to the physical, mental, or emotional health, financial well-being, or legal status of uninformed consumers. Michael J. Trebilcock examines five case studies of the regulation of diverse professions: alternative medicine, mental health care provision, financial planning, immigration consulting, and legal services. Noting the widely divergent approaches to the regulation of the same professions across different jurisdictions - the paradoxes of professional regulation - the book attempts to develop a set of regulatory principles for the future. In its comparative approach, Paradoxes of Professional Regulation gets at the heart of the tensions influencing the regulatory landscape, and works toward practical lessons for bringing greater coherence to the way in which professions are regulated.
The role of franchising in industry evolution is explored in this book, both in terms of the emergence of franchising and its impact on industry structure. Examining the literature and statistical information, the first section provides an overview of franchising. The Role of Franchising on Industry Evolution then focuses on two core elements: the emergence of franchising and the contextual drivers prompting its adoption, and the impact of franchising on industry-level structural changes. Through two industry case studies, the author demonstrates how franchising has the ability to fundamentally transform an industry's structure from one of fragmentation to one of consolidation.
This book discusses developments in trade theories, including new-new trade models that account for firm-level trade flows, trade growth accounting using inverse gravity models (including distortions in gravity models), the impact of trade liberalization under the aegis of regional and multilateral liberalization efforts analysed using partial and general equilibrium methods, methodologies for constructing ad valorem equivalents of non-tariff barriers, and volatility spillover effects of financial and exchange rate markets. The main purpose of the book is to guide researchers working in the area of international trade, especially those focused on empirical analysis of trade policy issues, by updating their knowledge of trade theory, empirical methods, and their applications. The book will prove useful for policymakers, academicians, and researchers.
This book addresses one of the most important research activities in empirical macroeconomics. It provides a course of advanced but intuitive methods and tools enabling the spatial and temporal disaggregation of basic macroeconomic variables and the assessment of the statistical uncertainty of the outcomes of disaggregation. The empirical analysis focuses mainly on GDP and its growth in the context of Poland; however, all of the methods discussed can be easily applied to other countries. The approach used in the book views spatial and temporal disaggregation as a special case of the estimation of missing observations (a topic in missing-data analysis). The book presents an econometric treatment of models of Seemingly Unrelated Regression Equations (SURE). The main advantage of the SURE specification in tackling this research problem is that it allows for heterogeneity of the parameters describing relations between macroeconomic indicators. The book contains the model specification, descriptions of the stochastic assumptions, and the resulting procedures of estimation and testing, and the method also addresses uncertainty in the estimates produced. All of the necessary tests and assumptions are presented in detail. The results are designed to serve as a source of invaluable information, making regional analyses more convenient and, more importantly, comparable. They create a solid basis for conclusions and recommendations concerning regional economic policy in Poland, particularly regarding the assessment of the economic situation. This is essential reading for academics, researchers, and economists with regional analysis as their field of expertise, as well as central bankers and policymakers.
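For context, a seemingly unrelated regressions system is conventionally written as below, with equation-specific coefficients and errors correlated across equations; this is a generic textbook form, not necessarily the book's own notation:

% SURE system of m equations over T periods: each equation i has its
% own coefficient vector \beta_i; errors are correlated across equations.
\[
  y_i = X_i \beta_i + \varepsilon_i, \quad i = 1, \dots, m,
  \qquad \operatorname{E}[\varepsilon_i \varepsilon_j'] = \sigma_{ij} I_T
\]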
Mathematical Economics is an authoritative collection of the most influential contributions essential to an understanding of this important area of economic science. These seminal papers illustrate the development of the field from its inception in the 19th century up to the present, and exhibit the power of mathematics to lead to new thinking which can illuminate the scientific structures underlying economic arguments. Many of these papers started new fields of economics, influencing deeply the way economists think about their world. They illustrate the extensive range of topics to which mathematics has been applied productively, and show the areas of mathematics which have proved valuable, including functional analysis, linear algebra, algebraic and differential topology, stochastic processes and dynamical systems. They also show the extent to which today's policy analysis rests on yesterday's mathematical economics. Anyone with an interest in economics as a science will find this collection indispensable. The collection is an essential part of any course using mathematical economics.
With the rapidly advancing fields of Data Analytics and Computational Statistics, it's important to keep up with current trends, methodologies, and applications. This book investigates the role of data mining in computational statistics for machine learning. It offers applications that can be used in various domains and examines the role of transformation functions in optimizing problem statements. Data Analytics, Computational Statistics, and Operations Research for Engineers: Methodologies and Applications presents applications of computationally intensive methods, inference techniques, and survival analysis models. It discusses how data mining extracts information and how machine learning improves the computational model based on the new information. Those interested in this reference work will include students, professionals, and researchers working in the areas of data mining, computational statistics, operations research, and machine learning.
"A book perfect for this moment" -Katherine M. O'Regan, Former Assistant Secretary, US Department of Housing and Urban Development More than fifty years after the passage of the Fair Housing Act, American cities remain divided along the very same lines that this landmark legislation explicitly outlawed. Keeping Races in Their Places tells the story of these lines-who drew them, why they drew them, where they drew them, and how they continue to circumscribe residents' opportunities to this very day. Weaving together sophisticated statistical analyses of more than a century's worth of data with an engaging, accessible narrative that brings the numbers to life, Keeping Races in Their Places exposes the entrenched effects of redlining on American communities. This one-of-a-kind contribution to the real estate and urban economics literature applies the author's original geographic information systems analyses to historical maps to reveal redlining's causal role in shaping today's cities. Spanning the era from the Great Migration to the Great Recession, Keeping Races in Their Places uncovers the roots of the Black-white wealth gap, the subprime lending crisis, and today's lack of affordable housing in maps created by banks nearly a century ago. Most of all, it offers hope that with the latest scholarly tools we can pinpoint how things went wrong-and what we must do to make them right.
Now in its third edition, Essential Econometric Techniques: A Guide to Concepts and Applications is a concise, student-friendly textbook which provides an introductory grounding in econometrics, with an emphasis on the proper application and interpretation of results. Drawing on the author's extensive teaching experience, this book offers intuitive explanations of concepts such as heteroskedasticity and serial correlation, and provides step-by-step overviews of each key topic. This new edition contains more applications, brings in new material including a dedicated chapter on panel data techniques, and moves the theoretical proofs to appendices. After Chapter 7, students will be able to design and conduct rudimentary econometric research. The next chapters cover multicollinearity, heteroskedasticity, and autocorrelation, followed by techniques for time-series analysis and panel data. Excel data sets for the end-of-chapter problems are available as a digital supplement. A solutions manual is also available for instructors, as well as PowerPoint slides for each chapter. Essential Econometric Techniques shows students how economic hypotheses can be questioned and tested using real-world data, and is the ideal supplementary text for all introductory econometrics courses.
This book describes the activation functions frequently used in deep neural networks. To this end, 37 activation functions are explained both mathematically and visually, and each is given with its LaTeX implementation, reflecting their common use in scientific articles.
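For illustration, here are two of the most common activation functions, typeset as they usually appear in articles (the book itself covers 37 such functions):

% ReLU and the logistic sigmoid, two standard activation functions.
\[
  \operatorname{ReLU}(x) = \max(0, x),
  \qquad
  \sigma(x) = \frac{1}{1 + e^{-x}}
\]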
Based on economic knowledge and logical reasoning, this book proposes a solution to economic recessions and offers a route for societal change to end capitalism. The author starts with a brief review of the history of economics, and then questions and rejects the trend of recent decades that has seen econometrics replace economic theory. By reviewing the different schools of economic thought and by examining the limitations of existing theories to business cycles and economic growth, the author forms a new theory to explain cyclic economic growth. According to this theory, economic recessions result from innovation scarcity, which in turn results from the flawed design of the patent system. The author suggests a new design for the patent system and envisions that the new design would bring about large economic and societal changes. Under this new patent system, the synergy of the patent and capital markets would ensure that economic recessions could be avoided and that the economy would grow at the highest speed.
Contains information for using R software with the examples in the textbook Sampling: Design and Analysis, 3rd edition by Sharon L. Lohr.
This is the perfect (and essential) supplement for all econometrics classes - from a rigorous first undergraduate course, to a first master's, to a PhD course.
Applied data-centric social sciences aim to develop both methodology and practical applications of various fields of sciences and businesses with rich data. Specifically, in the social sciences, a vast amount of data on human activities may be useful for understanding collective human nature. In this book, the author introduces several mathematical techniques for handling a huge volume of data and analyzing collective human behavior. The book is constructed from data-oriented investigation, with mathematical methods and expressions used for dealing with data for several specific problems. The fundamental philosophy underlying the book is that both mathematical and physical concepts are determined by the purposes of data analysis. This philosophy is shown throughout exemplar studies of several fields in socio-economic systems. From a data-centric point of view, the author proposes a concept that may change people's minds and cause them to start thinking from the basis of data. Several goals underlie the chapters of the book. The first is to describe mathematical and statistical methods for data analysis, and toward that end the author delineates methods with actual data in each chapter. The second is to find a cyber-physical link between data and data-generating mechanisms, as data are always provided by some kind of data-generating process in the real world. The third goal is to provide an impetus for the concepts and methodology set forth in this book to be applied to socio-economic systems.
Using data from the World Values Survey, this book sheds light on the link between happiness and the social group to which one belongs. The work is based on a rigorous statistical analysis of differences in the probability of happiness and life satisfaction between the predominant social group and subordinate groups. The cases of India and South Africa receive deep attention in dedicated chapters on caste and race, with other chapters considering issues such as cultural bias, religion, patriarchy, and gender. An additional chapter offers a global perspective. On top of this, the longitudinal nature of the data facilitates an examination of how world happiness has evolved between 1994 and 2014. This book will be a valuable reference for advanced students, scholars and policymakers involved in development economics, well-being, development geography, and sociology.
This study examines the determinants of the current account, export market share and exchange rates. The author identifies key determinants using Bayesian Model Averaging, which allows evaluation of the probability that each variable is in fact a determinant of the analysed competitiveness measure. The main implication of the results presented in the study is that increasing international competitiveness is a gradual process that requires institutional and technological changes rather than short-term adjustments in relative prices.
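In Bayesian Model Averaging, the probability that a given variable is a determinant is conventionally summarized by its posterior inclusion probability, the sum of posterior model probabilities over all models containing that variable; the formula below is the standard definition, not a detail taken from this study:

% Posterior inclusion probability of regressor x_j: sum posterior
% model probabilities over every model M that includes x_j.
\[
  \Pr(x_j \mid y) = \sum_{M \,:\, x_j \in M} \Pr(M \mid y)
\]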
1. Material on single asset problems, market timing, unconditional and conditional portfolio problems, hedged portfolios.
2. Inference via both Frequentist and Bayesian paradigms.
3. A comprehensive treatment of overoptimism and overfitting of trading strategies.
4. Advice on backtesting strategies.
5. Dozens of examples and hundreds of exercises for self study.
* There isn't currently a book on the market which focuses on multiple hypotheses testing.
* Can be used on a range of courses in the social and behavioral sciences and the biological sciences, as well as by professional researchers.
* Includes various examples of the multiple hypotheses method in practice in a variety of fields, including sport and crime.
Thoroughly updated throughout, A First Course in Linear Model Theory, Second Edition is an intermediate-level statistics text that fills an important gap by presenting the theory of linear statistical models at a level appropriate for senior undergraduate or first-year graduate students. With an innovative approach, the authors introduce students to the mathematical and statistical concepts and tools that form a foundation for studying the theory and applications of both univariate and multivariate linear models. In addition to adding R functionality, this second edition features three new chapters and several sections on new topics that are extremely relevant to current research in statistical methodology. Revised or expanded topics include linear fixed, random and mixed effects models, generalized linear models, Bayesian and hierarchical linear models, model selection, multiple comparisons, and regularized and robust regression.
New to the Second Edition:
* Coverage of inference for linear models has been expanded into two chapters.
* Expanded coverage of multiple comparisons, random and mixed effects models, model selection, and missing data.
* A new chapter on generalized linear models (Chapter 12).
* A new section on multivariate linear models in Chapter 13, and expanded coverage of Bayesian linear models and longitudinal models.
* A new section on regularized regression in Chapter 14.
* Detailed data illustrations using R.
The authors' fresh approach, methodical presentation, wealth of examples, use of R, and introduction to topics beyond the classical theory set this book apart from other texts on linear models. It forms a refreshing and invaluable first step in students' study of advanced linear models, generalized linear models, nonlinear models, and dynamic models.
Master key spreadsheet and business analytics skills with SPREADSHEET MODELING AND DECISION ANALYSIS: A PRACTICAL INTRODUCTION TO BUSINESS ANALYTICS, 9E, written by respected business analytics innovator Cliff Ragsdale. This edition's clear presentation, realistic examples, fascinating topics and valuable software provide everything you need to become proficient in today's most widely used business analytics techniques using the latest version of Excel (R) in Microsoft (R) Office 365 or Office 2019. Become skilled in the newest Excel functions as well as Analytic Solver (R) and Data Mining add-ins. This edition helps you develop both algebraic and spreadsheet modeling skills. Step-by-step instructions and annotated, full-color screen images make examples easy to follow and show you how to apply what you learn about descriptive, predictive and prescriptive analytics to real business situations. WebAssign online tools and author-created videos further strengthen understanding.
Much of our thinking is flawed because it is based on faulty intuition. By using the framework and tools of probability and statistics, we can overcome this to provide solutions to many real-world problems and paradoxes. We show how to do this, and find answers that are frequently very contrary to what we might expect. Along the way, we venture into diverse realms and thought experiments which challenge the way that we see the world.
Features:
* An insightful and engaging discussion of some of the key ideas of probabilistic and statistical thinking
* Many classic and novel problems, paradoxes, and puzzles
* An exploration of some of the big questions involving the use of choice and reason in an uncertain world
* The application of probability, statistics, and Bayesian methods to a wide range of subjects, including economics, finance, law, and medicine
* Exercises, references, and links for those wishing to cross-reference or to probe further
* Solutions to exercises at the end of the book
This book should serve as an invaluable and fascinating resource for university, college, and high school students who wish to extend their reading, as well as for teachers and lecturers who want to liven up their courses while retaining academic rigour. It will also appeal to anyone who wishes to develop skills with numbers or has an interest in the many statistical and other paradoxes that permeate our lives. Indeed, anyone studying the sciences, social sciences, or humanities on a formal or informal basis will enjoy and benefit from this book.
* Furnishes a thorough introduction and detailed information about the linear regression model, including how to understand and interpret its results, test assumptions, and adapt the model when assumptions are not satisfied.
* Uses numerous graphs in R to illustrate the model's results, assumptions, and other features.
* Does not assume a background in calculus or linear algebra; rather, an introductory statistics course and familiarity with elementary algebra are sufficient.
* Provides many examples using real-world datasets relevant to various academic disciplines.
* Fully integrates the R software environment in its numerous examples.
The development of economics changed dramatically during the twentieth century with the emergence of econometrics, macroeconomics and a more scientific approach in general. One of the key individuals in the transformation of economics was Ragnar Frisch, professor at the University of Oslo and the first Nobel Laureate in economics in 1969. He was a co-founder of the Econometric Society in 1930 (after having coined the word econometrics in 1926) and edited the journal Econometrica for twenty-two years. The discovery of the manuscripts of a series of eight lectures given by Frisch at the Henri Poincaré Institute in March-April 1933 on The Problems and Methods of Econometrics will enable economists to more fully understand his overall vision of econometrics. This book is a rare exhibition of Frisch's overview of econometrics and is published here in English for the first time. Edited and with an introduction by Olav Bjerkholt and Ariane Dupont-Kieffer, Frisch's eight lectures provide an accessible and astute discussion of econometric issues, from philosophical foundations to practical procedures. Covering the development of economics in the twentieth century and the broader visions of economic science in general and econometrics in particular held by Ragnar Frisch, this book will appeal to anyone with an interest in the history of economics and econometrics.
The Analytic Network Process (ANP), developed by Thomas Saaty in his work on multicriteria decision making, applies network structures with dependence and feedback to complex decision making. This new edition of Decision Making with the Analytic Network Process is a selection of the latest applications of ANP to economic, social and political decisions, and also to technological design. The ANP is a methodological tool that helps to organize knowledge and thinking, elicit judgments registered both in memory and in feelings, quantify the judgments and derive priorities from them, and finally synthesize these diverse priorities into a single, mathematically and logically justifiable overall outcome. In the process of deriving this outcome, the ANP also allows for the representation and synthesis of diverse opinions in the midst of discussion and debate. The book focuses on the application of the ANP in three different areas: economics, the social sciences and the linking of measurement with human values. Economists can use the ANP as an alternative to the usual mathematical models on which economics bases its quantitative thinking when dealing with economic problems. For psychologists, sociologists and political scientists, the ANP offers the methodology they have sought for some time to quantify and derive measurements for intangibles. Finally, the book applies the ANP to provide people in the physical and engineering sciences with a quantitative method to link hard measurement to human values. In such a process, one is able to interpret the true meaning of measurements made on a uniform scale using a unit.
You may like...
* Operations And Supply Chain Management by David Collier, James Evans (Hardcover)
* The Oxford Handbook of the Economics of… by Yann Bramoulle, Andrea Galeotti, … (Hardcover) - R5,455 (Discovery Miles 54 550)
* Design and Analysis of Time Series… by Richard McCleary, David McDowall, … (Hardcover) - R3,286 (Discovery Miles 32 860)
* Introductory Econometrics - A Modern… by Jeffrey Wooldridge (Hardcover)
* Introduction to Computational Economics… by Hans Fehr, Fabian Kindermann (Hardcover) - R4,258 (Discovery Miles 42 580)
* Agent-Based Modeling and Network… by Akira Namatame, Shu-Heng Chen (Hardcover) - R2,970 (Discovery Miles 29 700)