This book is a collection of essays written in honor of Professor Peter C. B. Phillips of Yale University by some of his former students. The essays analyze a number of important issues in econometrics, all of which Professor Phillips has directly influenced through his seminal scholarly contributions as well as through his remarkable achievements as a teacher. The essays are organized to cover topics in higher-order asymptotics, deficient instruments, nonstationarity, LAD and quantile regression, and nonstationary panels. These topics span both theoretical and applied approaches and are intended for use by professionals and advanced graduate students.
Originally published in 2005, Weather Derivative Valuation covers all the meteorological, statistical, financial and mathematical issues that arise in the pricing and risk management of weather derivatives. There are chapters on meteorological data and data cleaning, the modelling and pricing of single weather derivatives, the modelling and valuation of portfolios, the use of weather and seasonal forecasts in the pricing of weather derivatives, arbitrage pricing for weather derivatives, risk management, and the modelling of temperature, wind and precipitation. Specific issues covered in detail include the analysis of uncertainty in weather derivative pricing, time-series modelling of daily temperatures, the creation and use of probabilistic meteorological forecasts and the derivation of the weather derivative version of the Black-Scholes equation of mathematical finance. Written by consultants who work within the weather derivative industry, this book is packed with practical information and theoretical insight into the world of weather derivative pricing.
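As a rough sketch of one standard approach to pricing a single weather derivative, "burn analysis" (a generic illustration under simplifying assumptions, not the authors' specific models), the following Python snippet estimates the fair price of a heating-degree-day (HDD) call option by averaging the payoffs the contract would have produced in past seasons. The historical index values and contract terms are hypothetical.

```python
# Burn analysis for a hypothetical HDD call option: average the payoffs the
# contract would have produced over past seasons (data cleaning and
# detrending, discussed in the book, are ignored here for brevity).
historical_hdd = [1850, 1920, 1780, 2010, 1890, 1950, 1820, 1980, 1900, 1870]

strike = 1900      # HDD strike level (hypothetical)
tick = 5000.0      # payout per HDD above the strike (hypothetical)
cap = 1_000_000.0  # maximum payout (hypothetical)

payoffs = [min(max(hdd - strike, 0) * tick, cap) for hdd in historical_hdd]
fair_price = sum(payoffs) / len(payoffs)  # burn estimate of the expected payoff

print(f"Burn-analysis fair price: {fair_price:,.0f}")
```

In practice a risk loading and an analysis of the uncertainty around this estimate would be added, which is part of what the book covers.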
This book is designed as an introductory textbook. Its primary audience is students in business and economics programmes and practitioners in industry who want to familiarise themselves with the fundamental working areas and methods of quantitative data analysis. Statistics is often not easily accessible to either group. This is where the present textbook comes in, aiming to ease the learning process through a number of targeted measures. Selection of material: application areas typical for economists, and statistical methods that are used and proven in practice. Chapter structure: systematic, with introduction, a typical practical example, solution approaches, method description, interpretation of results, appraisal, summary of formulas, and exercises. Method description: systematic, with the concept (in words), operationalisation and formalisation (in symbols), and a worked introductory example (in numbers). Formulas: developed from the method descriptions, with mathematical derivations only where indispensable. Illustrations: tables, diagrams and figures for visualisation. Exercises: chapter by chapter, with questions, problems and model solutions. From the contents: 1. Descriptive statistics, covering fundamentals, preparation, presentation and evaluation (summary measures) of univariate cross-sectional data, concentration analysis, longitudinal data analysis with measures and index numbers, and multidimensional analysis. 2. Analytical statistics, covering regression, correlation, contingency, time series analysis and time-series-based forecasting. 3. Probability analysis, covering fundamentals, random variables and probability distributions, and important discrete and continuous distribution models. 4. Inferential statistics, covering sampling statistics, and estimation and testing for univariate distributions and for relationships.
This 2005 volume contains the papers presented in honor of the lifelong achievements of Thomas J. Rothenberg on the occasion of his retirement. The authors of the chapters include many of the leading econometricians of our day, and the chapters address topics of current research significance in econometric theory. The chapters cover four themes: identification and efficient estimation in econometrics, asymptotic approximations to the distributions of econometric estimators and tests, inference involving potentially nonstationary time series, such as processes that might have a unit autoregressive root, and nonparametric and semiparametric inference. Several of the chapters provide overviews and treatments of basic conceptual issues, while others advance our understanding of the properties of existing econometric procedures and/or propose new ones. Specific topics include identification in nonlinear models, inference with weak instruments, tests for nonstationarity in time series and panel data, generalized empirical likelihood estimation, and the bootstrap.
The idea that simplicity matters in science is as old as science itself, with the much cited example of Ockham's Razor, 'entia non sunt multiplicanda praeter necessitatem': entities are not to be multiplied beyond necessity. A problem with Ockham's razor is that nearly everybody seems to accept it, but few are able to define its exact meaning and to make it operational in a non-arbitrary way. Using a multidisciplinary perspective including philosophers, mathematicians, econometricians and economists, this 2002 monograph examines simplicity by asking six questions: what is meant by simplicity? How is simplicity measured? Is there an optimum trade-off between simplicity and goodness-of-fit? What is the relation between simplicity and empirical modelling? What is the relation between simplicity and prediction? What is the connection between simplicity and convenience? The book concludes with reflections on simplicity by Nobel Laureates in Economics.
A complete set of statistical tools for beginning financial analysts from a leading authority. Written by one of the leading experts on the topic, An Introduction to Analysis of Financial Data with R explores basic concepts of visualization of financial data. Through a fundamental balance between theory and applications, the book supplies readers with an accessible approach to financial econometric models and their applications to real-world empirical research. The author supplies a hands-on introduction to the analysis of financial data using the freely available R software package and case studies to illustrate actual implementations of the discussed methods. The book begins with the basics of financial data, discussing their summary statistics and related visualization methods. Subsequent chapters explore basic time series analysis and simple econometric models for business, finance, and economics as well as related topics including:
* Linear time series analysis, with coverage of exponential smoothing for forecasting and methods for model comparison
* Different approaches to calculating asset volatility and various volatility models
* High-frequency financial data and simple models for price changes, trading intensity, and realized volatility
* Quantitative methods for risk management, including value at risk and conditional value at risk
* Econometric and statistical methods for risk assessment based on extreme value theory and quantile regression
Throughout the book, the visual nature of the topic is showcased through graphical representations in R, and two detailed case studies demonstrate the relevance of statistics in finance. A related website features additional data sets and R scripts so readers can create their own simulations and test their comprehension of the presented techniques. An Introduction to Analysis of Financial Data with R is an excellent book for introductory courses on time series and business statistics at the upper-undergraduate and graduate level. The book is also an excellent resource for researchers and practitioners in the fields of business, finance, and economics who would like to enhance their understanding of financial data and today's financial markets.
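As a minimal, generic illustration of the risk measures listed above (historical simulation on simulated returns, not code from the book, which works in R), the following Python sketch computes value at risk and conditional value at risk (expected shortfall) at the 95% level. The return series and seed are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical daily returns; in practice these would come from market data.
returns = rng.standard_t(df=5, size=2500) * 0.01

alpha = 0.95
# Historical-simulation VaR: the loss exceeded on roughly (1 - alpha) of days.
var = -np.quantile(returns, 1 - alpha)
# Conditional VaR (expected shortfall): average loss on days worse than the VaR.
cvar = -returns[returns <= -var].mean()

print(f"95% VaR:  {var:.4f}")
print(f"95% CVaR: {cvar:.4f}")
```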
Most academic and policy commentary represents adverse selection as a severe problem in insurance, which should always be deprecated, avoided or minimised. This book gives a contrary view. It details the exaggeration of adverse selection in insurers' rhetoric and insurance economics, and presents evidence that in many insurance markets, adverse selection is weaker than most commentators suggest. A novel arithmetical argument shows that from a public policy perspective, 'weak' adverse selection can be a good thing. This is because a degree of adverse selection is needed to maximise 'loss coverage', the expected fraction of the population's losses which is compensated by insurance. This book will be valuable for those interested in public policy arguments about insurance and discrimination: academics (in economics, law and social policy), policymakers, actuaries, underwriters, disability activists, geneticists and other medical professionals.
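As a rough numerical illustration of the loss coverage idea described above (a hypothetical toy population, not the book's own calculation), the following Python sketch computes the expected fraction of population losses compensated by insurance when a higher-risk group buys insurance at a higher rate than a lower-risk group.

```python
# Toy illustration of 'loss coverage': the expected fraction of the
# population's losses that is compensated by insurance.
# Hypothetical population with two risk groups.
population = [
    # (population share, probability of loss, fraction who buy insurance)
    (0.8, 0.01, 0.50),  # low-risk group
    (0.2, 0.04, 0.75),  # high-risk group (higher take-up: some adverse selection)
]

insured_losses = sum(share * p_loss * take_up for share, p_loss, take_up in population)
total_losses = sum(share * p_loss for share, p_loss, _ in population)

print(f"Loss coverage: {insured_losses / total_losses:.2%}")  # 62.50% here
```

In this toy example the coverage exceeds the 50% that uniform take-up across both groups would give, which loosely mirrors the book's argument that a degree of adverse selection can raise loss coverage.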
This book describes the classical axiomatic theories of decision under uncertainty, as well as critiques thereof and alternative theories. It focuses on the meaning of probability, discussing some definitions and surveying their scope of applicability. The behavioral definition of subjective probability serves as a way to present the classical theories, culminating in Savage's theorem. The limitations of this result as a definition of probability lead to two directions - first, similar behavioral definitions of more general theories, such as non-additive probabilities and multiple priors, and second, cognitive derivations based on case-based techniques.
Price and quantity indices are important, much-used measuring instruments, and it is therefore necessary to have a good understanding of their properties. When it was published, this book was the first comprehensive text on index number theory since Irving Fisher's 1922 The Making of Index Numbers. The book covers intertemporal and interspatial comparisons; ratio- and difference-type measures; discrete and continuous time environments; and upper- and lower-level indices. Guided by economic insights, this book develops the instrumental or axiomatic approach. There is no role for behavioural assumptions. In addition to subject matter chapters, two entire chapters are devoted to the rich history of the subject.
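As a brief illustration of the kind of measuring instruments discussed (a generic example using the standard Laspeyres, Paasche and Fisher formulas, not material drawn from the book), the following Python sketch computes these three bilateral price indices for hypothetical prices and quantities in two periods.

```python
# Standard bilateral price indices for hypothetical two-period data.
# p0, q0: base-period prices and quantities; p1, q1: comparison period.
p0 = [10.0, 4.0, 2.5]
q0 = [100,  50,  200]
p1 = [11.0, 4.4, 2.4]
q1 = [ 95,  60,  210]

def inner(x, y):
    """Sum of elementwise products (value of a basket)."""
    return sum(a * b for a, b in zip(x, y))

laspeyres = inner(p1, q0) / inner(p0, q0)  # base-period quantity weights
paasche = inner(p1, q1) / inner(p0, q1)    # comparison-period quantity weights
fisher = (laspeyres * paasche) ** 0.5      # geometric mean of the two

print(f"Laspeyres: {laspeyres:.4f}  Paasche: {paasche:.4f}  Fisher: {fisher:.4f}")
```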
Do economics and statistics succeed in explaining human social behaviour? To answer this question, Leland Gerson Neuberg studies some pioneering controlled social experiments. Starting in the late 1960s, economists and statisticians sought to improve social policy formation with random assignment experiments such as those that provided income guarantees in the form of a negative income tax. This book explores anomalies in the conceptual basis of such experiments and in the foundations of statistics and economics more generally. Scientific inquiry always faces certain philosophical problems. Controlled experiments of human social behaviour, however, cannot avoid some methodological difficulties not evident in physical science experiments. Drawing upon several examples, the author argues that methodological anomalies prevent microeconomics and statistics from explaining human social behaviour as coherently as the physical sciences explain nature. He concludes that controlled social experiments are a frequently overrated tool for social policy improvement.
Regional Trends is a comprehensive source of official statistics for the regions and countries of the UK. As an official publication of the Office for National Statistics (ONS), it provides the most authoritative collection of statistics available. It is updated annually, and the type and format of the information constantly evolve to take account of new or revised material and to reflect current priorities and initiatives. This edition includes a wide range of demographic, social, industrial and economic statistics which provide insight into aspects of life within all UK regions. The data are presented clearly in a combination of tables, maps and charts, providing an ideal tool for researching the UK regions.
Random set theory is a fascinating branch of mathematics that amalgamates techniques from topology, convex geometry, and probability theory. Social scientists routinely conduct empirical work with data and modelling assumptions that reveal a set to which the parameter of interest belongs, but not its exact value. Random set theory provides a coherent mathematical framework to conduct identification analysis and statistical inference in this setting and has become a fundamental tool in econometrics and finance. This is the first book dedicated to the use of the theory in econometrics, written to be accessible for readers without a background in pure mathematics. Molchanov and Molinari define the basics of the theory and illustrate the mathematical concepts by their application in the analysis of econometric models. The book includes sets of exercises to accompany each chapter as well as examples to help readers apply the theory effectively.
This must-have manual provides detailed solutions to all 300 exercises in Dickson, Hardy and Waters' Actuarial Mathematics for Life Contingent Risks, 3rd edition. This groundbreaking text on the modern mathematics of life insurance is required reading for the Society of Actuaries' (SOA) LTAM Exam. The new edition treats a wide range of newer insurance contracts such as critical illness and long-term care insurance; pension valuation material has been expanded; and two new chapters have been added on developing models from mortality data and on changing mortality. Beyond professional examinations, the textbook and solutions manual offer readers the opportunity to develop insight and understanding through guided hands-on work, and also offer practical advice for solving problems using straightforward, intuitive numerical methods. Companion Excel spreadsheets illustrating these techniques are available for free download.
Mathematical models in the social sciences have become increasingly sophisticated and widespread in the last decade. This period has also seen many critiques, most lamenting the sacrifices incurred in pursuit of mathematical rigor. If, as critics argue, our ability to understand the world has not improved during the mathematization of the social sciences, we might want to adopt a different paradigm. This book examines the three main fields of mathematical modeling - game theory, statistics, and computational methods - and proposes a new framework for modeling. Unlike previous treatments which view each field separately, the treatment provides a framework that spans and incorporates the different methodological approaches. The goal is to arrive at a new vision of modeling that allows researchers to solve more complex problems in the social sciences. Additionally, a special emphasis is placed upon the role of computational modeling in the social sciences.
In many disciplines of science it is vital to know the effect of a 'treatment' on a response variable of interest; the effect being known as the 'treatment effect'. Here, the treatment can be a drug, an education program or an economic policy, and the response variable can be an illness, academic achievement or GDP. Once the effect is found, it is possible to intervene to adjust the treatment and attain a desired level of the response variable. A basic way to measure the treatment effect is to compare two groups, one of which received the treatment and the other did not. If the two groups are homogenous in all aspects other than their treatment status, then the difference between their response outcomes is the desired treatment effect. But if they differ in some aspects in addition to the treatment status, the difference in the response outcomes may be due to the combined influence of more than one factor. In non-experimental data where the treatment is not randomly assigned but self-selected, the subjects tend to differ in observed or unobserved characteristics. It is therefore imperative that the comparison be carried out with subjects similar in their characteristics. This book explains how this problem can be overcome so the attributable effect of the treatment can be found. It brings to the fore recent advances in econometrics for treatment effects. The purpose of this book is to put together various economic treatment effect models in a coherent fashion, make clear which parameters can be of interest, and show how they can be identified and estimated under weak assumptions. The emphasis throughout the book is on semi- and non-parametric estimation methods, but traditional parametric approaches are also discussed. This book is ideally suited to researchers and graduate students with a basic knowledge of econometrics.
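As a minimal illustration of the group comparison described above (a simulated, hypothetical randomized experiment, not an example from the book), the following Python sketch computes the naive difference-in-means estimate of a treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical randomized experiment: treatment shifts the outcome by +2.0.
n = 1000
treated = rng.integers(0, 2, size=n).astype(bool)    # random assignment
outcome = rng.normal(size=n) + 2.0 * treated          # response variable

# With random assignment the two groups are comparable, so the simple
# difference in mean outcomes estimates the average treatment effect.
ate_hat = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated average treatment effect: {ate_hat:.2f}")  # close to 2.0
```

With self-selected (non-random) treatment this simple comparison is generally biased, which is exactly the problem the book's semi- and non-parametric methods are designed to address.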
Most textbooks on regression focus on theory and the simplest of examples. Real statistical problems, however, are complex and subtle. This is not a book about the theory of regression. It is about using regression to solve real problems of comparison, estimation, prediction, and causal inference. Unlike other books, it focuses on practical issues such as sample size and missing data and a wide range of goals and techniques. It jumps right into methods and computer code you can use immediately. Real examples and real stories from the authors' experience demonstrate what regression can do and its limitations, with practical advice for understanding assumptions and implementing methods for experiments and observational studies. The authors make a smooth transition to logistic regression and GLM. The emphasis is on computation in R and Stan rather than derivations, with code available online. Graphics and presentation aid understanding of the models and model fitting.
This book is intended for use in a rigorous introductory PhD level course in econometrics, or in a field course in econometric theory. It covers the measure-theoretical foundation of probability theory, the multivariate normal distribution with its application to classical linear regression analysis, various laws of large numbers, central limit theorems and related results for independent random variables as well as for stationary time series, with applications to asymptotic inference of M-estimators, and maximum likelihood theory. Some chapters have their own appendices containing the more advanced topics and/or difficult proofs. Moreover, there are three appendices with material that readers are assumed to know. Appendix I contains a comprehensive review of linear algebra, including all the proofs. Appendix II reviews a variety of mathematical topics and concepts that are used throughout the main text, and Appendix III reviews complex analysis. Therefore, this book is uniquely self-contained.
Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds your knowledge of and confidence in making inferences from data. Reflecting the need for scripting in today's model-based statistics, the book pushes you to perform step-by-step calculations that are usually automated. This unique computational approach ensures that you understand enough of the details to make reasonable choices and interpretations in your own modeling work. The text presents causal inference and generalized linear multilevel models from a simple Bayesian perspective that builds on information theory and maximum entropy. The core material ranges from the basics of regression to advanced multilevel models. It also presents measurement error, missing data, and Gaussian process models for spatial and phylogenetic confounding. The second edition emphasizes the directed acyclic graph (DAG) approach to causal inference, integrating DAGs into many examples. The new edition also contains new material on the design of prior distributions, splines, ordered categorical predictors, social relations models, cross-validation, importance sampling, instrumental variables, and Hamiltonian Monte Carlo. It ends with an entirely new chapter that goes beyond generalized linear modeling, showing how domain-specific scientific models can be built into statistical analyses. Features:
* Integrates working code into the main text
* Illustrates concepts through worked data analysis examples
* Emphasizes understanding assumptions and how assumptions are reflected in code
* Offers more detailed explanations of the mathematics in optional sections
* Presents examples of using the dagitty R package to analyze causal graphs
* Provides the rethinking R package on the author's website and on GitHub
The idea that simplicity matters in science is as old as science itself, with the much cited example of Ockham's Razor. A problem with Ockham's Razor is that nearly everybody seems to accept it, but few are able to define its exact meaning and to make it operational in a non-arbitrary way. Using a multidisciplinary perspective including philosophers, mathematicians, econometricians and economists, this monograph examines simplicity by asking six questions: What is meant by simplicity? How is simplicity measured? Is there an optimum trade-off between simplicity and goodness-of-fit? What is the relation between simplicity and empirical modelling? What is the relation between simplicity and prediction? What is the connection between simplicity and convenience?
Economic and financial time series feature important seasonal fluctuations. Despite their regular and predictable patterns over the year, month or week, they pose many challenges to economists and econometricians. This book provides a thorough review of the recent developments in the econometric analysis of seasonal time series. It is designed for an audience of specialists in economic time series analysis and advanced graduate students. It is the most comprehensive and balanced treatment of the subject since the mid-1980s.
Maintaining the innovation capabilities of firms, employees and institutions is a key component for the generation of sustainable growth, employment, and high income in industrial societies. Gaining insights into the German innovation system and the institutional framework is as important to policy making as is data on the endowment of the German economy with factors fostering innovation and their recent development. Germany's Federal Ministry of Education and Research has repeatedly commissioned reports on the competitive strength of the German innovation system since the mid-eighties. The considerable attention that the public and the political, administrative and economic actors have paid to these reports in the past few years proves the strong interest in the assessment of and indicators for the dynamics behind innovation activities. The present study closely follows the pattern of those carried out before. It has been extended, however, to include an extensive discussion on indicators for technological performance and an outline of the key features of the German innovation system.
This book analyzes the institutional underpinnings of East Asia's dynamic growth by exploring the interplay between governance and flexibility. As the challenges of promoting and sustaining economic growth become ever more complex, firms in both advanced and industrializing countries face constant pressures for change from markets and technology. Globalization, heightened competition, and shorter product cycles mean that markets are increasingly volatile and fragmented. To contend with demands for higher quality, quicker delivery, and cost efficiencies, firms must enhance their capability to innovate and diversify. Achieving this flexibility, in turn, often requires new forms of governance arrangements that facilitate the exchange of resources among diverse yet interdependent economic actors. Moving beyond the literature's emphasis on developed economies, this volume emphasizes the relevance of the links between governance and flexibility for understanding East Asia's explosive economic growth over the past quarter century. In case studies that encompass a variety of key industrial sectors and countries, the contributors emphasize the importance of network patterns of governance for facilitating flexibility in firms throughout the region. Their analyses illuminate both the strengths and limitations of recent growth strategies and offer insights into prospects for continued expansion in the wake of the East Asian economic crisis of the late 1990s. Contributions by: Richard P. Appelbaum, Lu-lin Cheng, Stephen W. K. Chiu, Frederic C. Deyo, Richard F. Doner, Dieter Ernst, Eric Hershberg, Tai Lok Lui, Rajah Rasiah, David A. Smith, and Poh-Kam Wong.
Since the first edition of this book was published in 1993, David Hendry's work on econometric methodology has become increasingly influential. In this edition he presents a brand new paper which compellingly explains the logic of his general approach to econometric modelling and describes recent major advances in computer-automated modelling, which establish the success of the proposed strategy. Empirical studies of consumers' expenditure and money demands illustrate the methods in action. The breakthrough presented here will make econometric testing much easier.