Originally published in 1939, this book forms the first part of a two-volume series on the mathematics required for the examinations of the Institute of Actuaries, focusing on elementary differential and integral calculus. Miscellaneous examples are included at the end of the text. This book will be of value to anyone with an interest in actuarial science and mathematics.
Random set theory is a fascinating branch of mathematics that amalgamates techniques from topology, convex geometry, and probability theory. Social scientists routinely conduct empirical work with data and modelling assumptions that reveal a set to which the parameter of interest belongs, but not its exact value. Random set theory provides a coherent mathematical framework to conduct identification analysis and statistical inference in this setting and has become a fundamental tool in econometrics and finance. This is the first book dedicated to the use of the theory in econometrics, written to be accessible for readers without a background in pure mathematics. Molchanov and Molinari define the basics of the theory and illustrate the mathematical concepts by their application in the analysis of econometric models. The book includes sets of exercises to accompany each chapter as well as examples to help readers apply the theory effectively.
This volume presents original and up-to-date studies in unobserved components (UC) time series models from both theoretical and methodological perspectives. It also presents empirical studies where the UC time series methodology is adopted. Drawing on the intellectual influence of Andrew Harvey, the work covers three main topics: the theory and methodology for unobserved components time series models; applications of unobserved components time series models; and time series econometrics and estimation and testing. These types of time series models have seen wide application in economics, statistics, finance, climate change, engineering, biostatistics, and sports statistics. The volume provides a key review of relevant research directions for UC time series econometrics and will be of interest to econometricians, time series statisticians, and practitioners (government, central banks, business) in time series analysis and forecasting, as well as to researchers and graduate students in statistics, econometrics, and engineering.
This book treats the latest developments in the theory of order-restricted inference, with special attention to nonparametric methods and algorithmic aspects. Among the topics treated are current status and interval censoring models, competing risk models, and deconvolution. Methods of order-restricted inference are used in computing maximum likelihood estimators and developing distribution theory for inverse problems of this type. The authors have been active in developing these tools and present the state of the art and the open problems in the field. The earlier chapters provide an introduction to the subject, while the later chapters are written with graduate students and researchers in mathematical statistics in mind. Each chapter ends with a set of exercises of varying difficulty. The theory is illustrated with the analysis of real-life data, which are mostly medical in nature.
The majority of empirical research in economics ignores the potential benefits of nonparametric methods, while the majority of advances in nonparametric theory ignores the problems faced in applied econometrics. This book helps bridge this gap between applied economists and theoretical nonparametric econometricians. It discusses in depth, and in terms that someone with only one year of graduate econometrics can understand, basic to advanced nonparametric methods. The analysis starts with density estimation and motivates the procedures through methods that should be familiar to the reader. It then moves on to kernel regression, estimation with discrete data, and advanced methods such as estimation with panel data and instrumental variables models. The book pays close attention to the issues that arise with programming, computing speed, and application. In each chapter, the methods discussed are applied to actual data, paying attention to presentation of results and potential pitfalls.
This book provides an introduction to index numbers for statisticians, economists and numerate members of the public. It covers the essential basics, mixing theoretical aspects with practical techniques to give a balanced and accessible introduction to the subject. The concepts are illustrated by exploring the construction and use of the Consumer Prices Index, which is arguably the most important of all official statistics in the UK. The book also considers current issues and developments in the field, including the use of large-scale price transaction data. A Practical Introduction to Index Numbers will be the ideal accompaniment for students taking the index number components of the Royal Statistical Society Ordinary and Higher Certificate exams; it provides suggested routes through the book for students, and sets of exercises with solutions.
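To give a concrete sense of what "constructing" an index number involves, here is a minimal illustrative sketch of the Laspeyres price index, the fixed-basket formula underlying indices such as the CPI. The goods, prices and quantities are invented for the example and are not taken from the book.

```python
# Laspeyres price index: the cost of a fixed base-period basket at
# current prices, relative to its cost at base-period prices (x100).
# All figures below are made up purely for illustration.
base_prices     = {"bread": 1.20, "milk": 0.90, "fuel": 1.50}  # period 0 prices
current_prices  = {"bread": 1.32, "milk": 0.95, "fuel": 1.80}  # period t prices
base_quantities = {"bread": 10, "milk": 8, "fuel": 20}         # fixed basket

cost_now  = sum(current_prices[g] * base_quantities[g] for g in base_quantities)
cost_base = sum(base_prices[g] * base_quantities[g] for g in base_quantities)
laspeyres = 100.0 * cost_now / cost_base
print(f"Laspeyres index: {laspeyres:.1f}")  # 115.4 -> this basket costs ~15% more
```

Holding the basket fixed at base-period quantities is what makes the comparison a pure price comparison; choosing a different weighting period gives different formulas (e.g. Paasche), which is part of what index number theory studies.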
'The Number Bias combines vivid storytelling with authoritative analysis to deliver a warning about the way numbers can lead us astray - if we let them.' TIM HARFORD Even if you don't consider yourself a numbers person, you are a numbers person. The time has come to put numbers in their place. Not high up on a pedestal, or out on the curb, but right where they belong: beside words. It is not an overstatement to say that numbers dictate the way we live our lives. They tell us how we're doing at school, how much we weigh, who might win an election and whether the economy is booming. But numbers aren't as objective as they may seem; behind every number is a story. Yet politicians, businesses and the media often forget this - or use it for their own gain. Sanne Blauw travels the world to unpick our relationship with numbers and demystify our misguided allegiance, from Florence Nightingale using statistics to petition for better conditions during the Crimean War to the manipulation of numbers by the American tobacco industry and the ambiguous figures peddled during the EU referendum. Taking us from the everyday numbers that govern our health and wellbeing to the statistics used to wield enormous power and influence, The Number Bias counsels us to think more wisely. 'A beautifully accessible exploration of how numbers shape our lives, and the importance of accurately interpreting the statistics we are fed.' ANGELA SAINI, author of Superior
This edited collection concerns nonlinear economic relations that involve time. It is divided into four broad themes that all reflect the work and methodology of Professor Timo Terasvirta, one of the leading scholars in the field of nonlinear time series econometrics. The themes are: testing for linearity and functional form; specification testing and estimation of nonlinear time series models in the form of smooth transition models; model selection and econometric methodology; and applications within the area of financial econometrics. All these research fields include contributions that represent the state of the art in econometrics, such as testing for neglected nonlinearity in neural network models, time-varying GARCH and smooth transition models, STAR models and common factors in volatility modeling, semi-automatic general-to-specific model selection for nonlinear dynamic models, high-dimensional data analysis for parametric and semi-parametric regression models with dependent data, commodity price modeling, financial analysts' earnings forecasts based on asymmetric loss functions, local Gaussian correlation and dependence for asymmetric return dependence, and the use of bootstrap aggregation to improve forecast accuracy. Each chapter represents original scholarly work and reflects the intellectual impact that Timo Terasvirta has had, and will continue to have, on the profession.
This book allows those with a basic knowledge of econometrics to learn the main nonparametric and semiparametric techniques used in econometric modelling, and how to apply them correctly. It looks at kernel density estimation, kernel regression, splines, wavelets, and mixture models, and provides useful empirical examples throughout. Through these empirical applications it addresses several economic topics, including income distribution, wage equations, economic convergence, the Phillips curve, interest rate dynamics, returns volatility, and housing prices. A helpful appendix also explains how to implement the methods using R. This useful book will appeal to practitioners and researchers who need an accessible introduction to nonparametric and semiparametric econometrics. The practical approach provides an overview of the main techniques without too much focus on mathematical formulas. It also serves as an accompanying textbook for a basic course, typically at undergraduate or graduate level.
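For a flavour of the first technique the book covers, the sketch below is a bare-bones Gaussian kernel density estimator. It is a generic Python illustration with a made-up sample (the book's own appendix works in R); it is not code from the book.

```python
import numpy as np

def gaussian_kde(grid, data, h):
    """Evaluate a kernel density estimate on `grid`: average a Gaussian
    bump of bandwidth h centred at each observation."""
    u = (grid[:, None] - data[None, :]) / h
    bumps = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return bumps.sum(axis=1) / (len(data) * h)

rng = np.random.default_rng(0)
sample = rng.normal(size=500)                 # toy data: standard normal draws
grid = np.linspace(-4.0, 4.0, 9)
print(np.round(gaussian_kde(grid, sample, h=0.4), 3))  # roughly bell-shaped
```

The bandwidth h controls the smoothness of the estimate; choosing it well is one of the practical issues texts like this one discuss at length.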
Human Development Indices and Indicators: 2018 Statistical Update is being released to ensure consistency in reporting on key human development indices and statistics. It provides a brief overview of the state of human development - snapshots of current conditions as well as long-term trends in human development indicators. It includes a full statistical annex of human development composite indices and indicators across their various dimensions. This update includes the 2017 values and ranking for the HDI and other composite indices as well as current statistics in key areas of human development for use by policymakers, researchers and others in their analytical, planning and policy work. In addition to the standard HDR tables, statistical dashboards are included to draw attention to the relationship between human well-being and five topics: quality of human development, life-course gender gaps, women's empowerment, environmental sustainability and socioeconomic sustainability. Accompanying the statistical annex is an overview of trends in human development, highlighting the considerable progress, but also the persistent deprivations and disparities.
Over the last thirty years there has been extensive use of continuous time econometric methods in macroeconomic modelling. This monograph presents a continuous time macroeconometric model of the United Kingdom incorporating stochastic trends. Its development represents a major step forward in continuous time macroeconomic modelling. The book describes the model in detail and, like earlier models, it is designed in such a way as to permit a rigorous mathematical analysis of its steady-state and stability properties, thus providing a valuable check on the capacity of the model to generate plausible long-run behaviour. The model is estimated using newly developed exact Gaussian estimation methods for continuous time econometric models incorporating unobservable stochastic trends. The book also includes discussion of the application of the model to dynamic analysis and forecasting.
This 2004 volume offers a broad overview of developments in the theory and applications of state space modeling. With fourteen chapters from twenty-three contributors, it offers a unique synthesis of state space methods and unobserved component models that are important in a wide range of subjects, including economics, finance, environmental science, medicine and engineering. The book is divided into four sections: introductory papers, testing, Bayesian inference and the bootstrap, and applications. It will give those unfamiliar with state space models a flavour of the work being carried out as well as providing experts with valuable state of the art summaries of different topics. Offering a useful reference for all, this accessible volume makes a significant contribution to the literature of this discipline.
Price and quantity indices are important, much-used measuring instruments, and it is therefore necessary to have a good understanding of their properties. When it was published, this book was the first comprehensive text on index number theory since Irving Fisher's 1922 The Making of Index Numbers. The book covers intertemporal and interspatial comparisons; ratio- and difference-type measures; discrete and continuous time environments; and upper- and lower-level indices. Guided by economic insights, this book develops the instrumental or axiomatic approach. There is no role for behavioural assumptions. In addition to subject matter chapters, two entire chapters are devoted to the rich history of the subject.
This book is intended to provide the reader with a firm conceptual and empirical understanding of basic information-theoretic econometric models and methods. Because most data are observational, practitioners work with indirect noisy observations and ill-posed econometric models in the form of stochastic inverse problems. Consequently, traditional econometric methods in many cases are not applicable for answering many of the quantitative questions that analysts wish to ask. After initial chapters deal with parametric and semiparametric linear probability models, the focus turns to solving nonparametric stochastic inverse problems. In succeeding chapters, a family of power divergence measure likelihood functions is introduced for a range of traditional and nontraditional econometric-model problems. Finally, within either an empirical maximum likelihood or loss context, Ron C. Mittelhammer and George G. Judge suggest a basis for choosing a member of the divergence family.
The fastest, easiest, most comprehensive way to learn Adobe XD CC. Classroom in a Book®, the best-selling series of hands-on software training workbooks, offers what no other book or training program does: an official training series from Adobe, developed with the support of Adobe product experts. Adobe XD CC Classroom in a Book (2018 release) contains 10 lessons that cover the basics and beyond, providing countless tips and techniques to help you become more productive with the program. You can follow the book from start to finish or choose only those lessons that interest you. Purchasing this book includes valuable online extras. Follow the instructions in the book's "Getting Started" section to unlock access to downloadable lesson files you need to work through the projects in the book, and a Web Edition containing the complete text of the book, interactive quizzes, videos that walk you through the lessons step by step, and updated material covering new feature releases from Adobe. To use this book you need Adobe XD CC (2018 release) software, for either Windows or macOS (software not included). Note: Classroom in a Book does not replace the documentation, support, updates, or any other benefits of being a registered owner of Adobe XD CC software.
Published in 1932, this is the third edition of an original 1922 volume. The 1922 volume was, in turn, created as the replacement for the Institute of Actuaries Textbook, Part Three, which was the foremost source of knowledge on the subject of life contingencies for over 35 years. Assuming a high level of mathematical knowledge on the part of the reader, it was aimed chiefly at actuarial students and those with a professional interest in the relationship between statistics and mortality. Highly organised and containing numerous mathematical formulae, this book will remain of value to anyone with an interest in risk calculation and the development of the insurance industry.
Logistic models are widely used in economics and other disciplines and are easily available as part of many statistical software packages. This text for graduates, practitioners and researchers in economics, medicine and statistics, which was originally published in 2003, explains the theory underlying logit analysis and gives a thorough explanation of the technique of estimation. The author has provided many empirical applications as illustrations and worked examples. A large data set - drawn from Dutch car ownership statistics - is provided online for readers to practise the techniques they have learned. Several varieties of logit model have been developed independently in various branches of biology, medicine and other disciplines. This book takes its inspiration from logit analysis as it is practised in economics, but it also pays due attention to developments in these other fields.
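As an illustration of the estimation technique such texts explain, the following sketch fits a logit model by maximum likelihood on simulated data. It is a generic NumPy/SciPy setup with invented coefficients; it is not the book's Dutch car-ownership example.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, X, y):
    """Negative logit log-likelihood, with p_i = 1 / (1 + exp(-x_i' beta))."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)   # guard the logs at extreme values
    return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
true_beta = np.array([0.5, 1.0])                       # invented "true" values
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

fit = minimize(neg_loglik, x0=np.zeros(2), args=(X, y))
print(np.round(fit.x, 2))  # estimates should land near (0.5, 1.0)
```

Maximising this likelihood numerically is exactly what packaged logit routines do behind the scenes; the book's contribution is the theory and interpretation around that machinery.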
This book is a collection of essays written in honor of Professor Peter C. B. Phillips of Yale University by some of his former students. The essays analyze a number of important issues in econometrics, all of which Professor Phillips has directly influenced through his seminal scholarly contribution as well as through his remarkable achievements as a teacher. The essays are organized to cover topics in higher-order asymptotics, deficient instruments, nonstationarity, LAD and quantile regression, and nonstationary panels. These topics span both theoretical and applied approaches and are intended for use by professionals and advanced graduate students.
What do we mean by inequality comparisons? If the rich just get richer and the poor get poorer, the answer might seem easy. But what if the income distribution changes in a complicated way? Can we use mathematical or statistical techniques to simplify the comparison problem in a way that has economic meaning? What does it mean to measure inequality? Is it similar to National Income? Or a price index? Is it enough just to work out the Gini coefficient?
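Since the blurb asks what it means to "work out the Gini coefficient", here is a minimal illustrative computation using the standard sorted-rank identity, on two invented toy income vectors (not data from the book).

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the sorted-rank identity:
    G = 2 * sum_i i * x_(i) / (n * sum_i x_i) - (n + 1) / n."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

print(gini([10, 10, 10, 10]))  # 0.0  -> perfect equality
print(gini([0, 0, 0, 40]))     # 0.75 -> one person holds everything
```

The book's point, of course, is that collapsing a whole distribution into one scalar like this hides distributional detail; the sketch only shows the arithmetic.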
This 2005 volume contains the papers presented in honor of the lifelong achievements of Thomas J. Rothenberg on the occasion of his retirement. The authors of the chapters include many of the leading econometricians of our day, and the chapters address topics of current research significance in econometric theory. The chapters cover four themes: identification and efficient estimation in econometrics, asymptotic approximations to the distributions of econometric estimators and tests, inference involving potentially nonstationary time series, such as processes that might have a unit autoregressive root, and nonparametric and semiparametric inference. Several of the chapters provide overviews and treatments of basic conceptual issues, while others advance our understanding of the properties of existing econometric procedures and/or propose others. Specific topics include identification in nonlinear models, inference with weak instruments, tests for nonstationarity in time series and panel data, generalized empirical likelihood estimation, and the bootstrap.
Originally published in 2005, Weather Derivative Valuation covers all the meteorological, statistical, financial and mathematical issues that arise in the pricing and risk management of weather derivatives. There are chapters on meteorological data and data cleaning, the modelling and pricing of single weather derivatives, the modelling and valuation of portfolios, the use of weather and seasonal forecasts in the pricing of weather derivatives, arbitrage pricing for weather derivatives, risk management, and the modelling of temperature, wind and precipitation. Specific issues covered in detail include the analysis of uncertainty in weather derivative pricing, time-series modelling of daily temperatures, the creation and use of probabilistic meteorological forecasts and the derivation of the weather derivative version of the Black-Scholes equation of mathematical finance. Written by consultants who work within the weather derivative industry, this book is packed with practical information and theoretical insight into the world of weather derivative pricing.
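For orientation only: the book derives a weather-derivative analogue of the Black-Scholes equation, which differs from the classical equity version. The sketch below prices a vanilla European call with the classical formula, using invented inputs, purely as a reference point for readers who have not seen it.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """Classical Black-Scholes European call price (the equity version;
    the weather-derivative variant the book derives differs from this)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Invented inputs: at-the-money call, one year to expiry, 2% rate, 20% vol.
print(round(bs_call(S=100, K=100, T=1.0, r=0.02, sigma=0.2), 2))  # ~8.92
```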
For one- or two-semester courses in business statistics. Give students the statistical foundation to hone their analysis skills for real-world decisions. Basic Business Statistics helps students see the essential role that statistics will play in their future careers by using examples drawn from all functional areas of real-world business. Guided by principles set forth by the ASA's Guidelines for Assessment and Instruction in Statistics Education (GAISE) reports and the authors' diverse teaching experiences, the text continues to innovate and improve the way this course is taught to students. The 14th Edition includes new and updated resources and tools to enhance students' understanding, and provides the best framework for learning statistical concepts.
The idea that simplicity matters in science is as old as science itself, with the much-cited example of Ockham's razor, 'entia non sunt multiplicanda praeter necessitatem': entities are not to be multiplied beyond necessity. A problem with Ockham's razor is that nearly everybody seems to accept it, but few are able to define its exact meaning and to make it operational in a non-arbitrary way. Adopting a multidisciplinary perspective that brings together philosophers, mathematicians, econometricians and economists, this 2002 monograph examines simplicity by asking six questions: what is meant by simplicity? How is simplicity measured? Is there an optimum trade-off between simplicity and goodness-of-fit? What is the relation between simplicity and empirical modelling? What is the relation between simplicity and prediction? What is the connection between simplicity and convenience? The book concludes with reflections on simplicity by Nobel Laureates in Economics.
This book describes the classical axiomatic theories of decision under uncertainty, as well as critiques thereof and alternative theories. It focuses on the meaning of probability, discussing some definitions and surveying their scope of applicability. The behavioral definition of subjective probability serves as a way to present the classical theories, culminating in Savage's theorem. The limitations of this result as a definition of probability lead to two directions - first, similar behavioral definitions of more general theories, such as non-additive probabilities and multiple priors, and second, cognitive derivations based on case-based techniques.
Regional Trends is a comprehensive source of official statistics for the regions and countries of the UK. As an official publication of the Office for National Statistics (ONS), it provides the most authoritative collection of such statistics available. It is updated annually, and the type and format of the information constantly evolve to take account of new or revised material and to reflect current priorities and initiatives. This edition contains a wide range of demographic, social, industrial and economic statistics, providing insight into aspects of life within all UK regions. The data are presented clearly in a combination of tables, maps and charts, making the volume an ideal tool for researching the UK's regions.
You may like...
- Post-Conflict Participatory Arts… by Faith Mkwananzi, F. Melis Cin (Paperback, R1,257 / Discovery Miles 12 570)
- The Science and Best Practices of… by Timothy D. Ludwig, Matthew M. Laske (Paperback, R1,073 / Discovery Miles 10 730)
- Intimate Economies - Bodies, Emotions… by Susanne Hofmann, Adi Moreno (Hardcover, R2,508 / Discovery Miles 25 080)
- Bodies, Love, and Faith in the First… by Nancy Christie, Michael Gauvreau (Hardcover, R3,167 / Discovery Miles 31 670)