This book reviews the three most popular methods (and their extensions) in applied economics and other social sciences: matching, regression discontinuity, and difference in differences. The book introduces the underlying econometric/statistical ideas, shows what is identified and how the identified parameters are estimated, and then illustrates how they are applied with real empirical examples. The book emphasizes how to implement the three methods with data: numerous datasets and programs are provided in the online appendix. All readers - theoretical econometricians/statisticians, applied economists/social scientists and researchers/students - will find something useful in the book from different perspectives.
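As a minimal sketch of the last method named above, the classic 2x2 difference-in-differences estimate compares the treated group's before/after change with the control group's. The group means below are invented for illustration, not data from the book's online appendix.

```python
# 2x2 difference-in-differences sketch (illustrative numbers only).
# DiD estimate = (treated_after - treated_before) - (control_after - control_before):
# the treated group's change, net of the trend the control group experienced.

def did_estimate(treated_before, treated_after, control_before, control_after):
    """Return the 2x2 difference-in-differences estimate from four group means."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical group-mean outcomes:
effect = did_estimate(treated_before=10.0, treated_after=15.0,
                      control_before=9.0, control_after=11.0)
print(effect)  # 3.0: the treated group gained 5, of which 2 is the common trend
```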
Originally published in 1931, this book was written to provide actuarial students with a guide to mathematics, with information on elementary trigonometry, finite differences, summation, differential and integral calculus, and probability. Examples are included throughout. This book will be of value to anyone with an interest in actuarial practice and its relationship with aspects of mathematics.
Originally published in 1930, this book was formed from the content of three lectures delivered at London University during March of that year. The text provides a concise discussion of the relationship between theoretical statistics and actuarial science. This book will be of value to anyone with an interest in the actuarial profession, statistics and the history of finance.
Originally published in 1939, this book forms the first part of a two-volume series on the mathematics required for the examinations of the Institute of Actuaries, focusing on elementary differential and integral calculus. Miscellaneous examples are included at the end of the text. This book will be of value to anyone with an interest in actuarial science and mathematics.
Denmark is set to achieve 100 per cent renewable energy by 2030. Iceland has topped the gender equality rankings for a decade and counting. South Korea’s average life expectancy will soon reach ninety. How have these places achieved such remarkable outcomes? And how can we apply those lessons to our own communities? The future we want is already here - it's just not evenly distributed. By bringing together for the first time tried and tested solutions to society's most pressing problems, from violence to inequality, Andrew Wear shows that the world we want to live in is already within reach. Solved is a much-needed dose of optimism in an atmosphere of doom and gloom. Informative, accessible and revelatory, it is a celebration of the power of human ingenuity to make the future brighter for everyone.
Originally published in 1932, as part of the Institute of Actuaries Students' Society's Consolidation of Reading Series, this book was written to provide actuarial students with a guide 'to bridging the gap between the strict mathematics of life contingencies and the severely practical problems of Life Office Valuations'. This book will be of value to anyone with an interest in the actuarial profession and the history of finance.
This volume presents original and up-to-date studies in unobserved components (UC) time series models from both theoretical and methodological perspectives. It also presents empirical studies where the UC time series methodology is adopted. Drawing on the intellectual influence of Andrew Harvey, the work covers three main topics: the theory and methodology for unobserved components time series models; applications of unobserved components time series models; and time series econometrics and estimation and testing. These types of time series models have seen wide application in economics, statistics, finance, climate change, engineering, biostatistics, and sports statistics. The volume effectively provides a key review of relevant research directions for UC time series econometrics and will be of interest to econometricians, time series statisticians, and practitioners (government, central banks, business) in time series analysis and forecasting, as well as to researchers and graduate students in statistics, econometrics, and engineering.
Every loan carries a risk for the lender, since it is uncertain whether the borrower will meet his or her payment obligations. This credit risk is measured using statistical methods, and against the background of Basel II, credit risk measurement has gained in importance. This book closes the gap between introductory statistics texts and mathematically demanding works: it offers an entry point into credit risk measurement and the statistics it requires. Starting from the key credit risk concepts, their statistical analogues are described. The book covers the relevant statistical distributions and provides an introduction to stochastic processes, portfolio models, and score and rating models. Numerous practical examples make it an ideal starting point for practitioners and those entering the field from other backgrounds.
This book treats the latest developments in the theory of order-restricted inference, with special attention to nonparametric methods and algorithmic aspects. Among the topics treated are current status and interval censoring models, competing risk models, and deconvolution. Methods of order restricted inference are used in computing maximum likelihood estimators and developing distribution theory for inverse problems of this type. The authors have been active in developing these tools and present the state of the art and the open problems in the field. The earlier chapters provide an introduction to the subject, while the later chapters are written with graduate students and researchers in mathematical statistics in mind. Each chapter ends with a set of exercises of varying difficulty. The theory is illustrated with the analysis of real-life data, which are mostly medical in nature.
This Study Guide accompanies Statistics for Business and Financial Economics, 3rd Ed. (Springer, 2013), which is the most definitive Business Statistics book to use Finance, Economics, and Accounting data throughout the entire book. The Study Guide contains unique chapter reviews for each chapter in the textbook, formulas, examples and additional exercises to enhance topics and their application. Solutions are included so students can evaluate their own understanding of the material. With more real-life data sets than the other books on the market, this study guide and the textbook that it accompanies give readers all the tools they need to learn material in class and on their own. It is immediately applicable to facing uncertainty and the science of good decision making in financial analysis, econometrics, auditing, production and operations, and marketing research. The data analyzed may be collected by companies in the course of their business or by governmental agencies. Students in business degree programs will find this material particularly useful in their other courses and future work.
The case-control approach is a powerful method for investigating factors that may explain a particular event. It is extensively used in epidemiology to study disease incidence, one of the best-known examples being Bradford Hill and Doll's investigation of the possible connection between cigarette smoking and lung cancer. More recently, case-control studies have been increasingly used in other fields, including sociology and econometrics. With a particular focus on statistical analysis, this book is ideal for applied and theoretical statisticians wanting an up-to-date introduction to the field. It covers the fundamentals of case-control study design and analysis as well as more recent developments, including two-stage studies, case-only studies and methods for case-control sampling in time. The latter have important applications in large prospective cohorts which require case-control sampling designs to make efficient use of resources. More theoretical background is provided in an appendix for those new to the field.
This edited collection concerns nonlinear economic relations that involve time. It is divided into four broad themes that all reflect the work and methodology of Professor Timo Teräsvirta, one of the leading scholars in the field of nonlinear time series econometrics. The themes are: testing for linearity and functional form; specification testing and estimation of nonlinear time series models in the form of smooth transition models; model selection and econometric methodology; and applications within the area of financial econometrics. All these research fields include contributions that represent the state of the art in econometrics, such as testing for neglected nonlinearity in neural network models, time-varying GARCH and smooth transition models, STAR models and common factors in volatility modeling, semi-automatic general-to-specific model selection for nonlinear dynamic models, high-dimensional data analysis for parametric and semi-parametric regression models with dependent data, commodity price modeling, financial analysts' earnings forecasts based on asymmetric loss functions, local Gaussian correlation and dependence for asymmetric return dependence, and the use of bootstrap aggregation to improve forecast accuracy. Each chapter represents original scholarly work and reflects the intellectual impact that Timo Teräsvirta has had, and will continue to have, on the profession.
Human Development Indices and Indicators: 2018 Statistical Update is being released to ensure consistency in reporting on key human development indices and statistics. It provides a brief overview of the state of human development - snapshots of current conditions as well as long-term trends in human development indicators. It includes a full statistical annex of human development composite indices and indicators across their various dimensions. This update includes the 2017 values and ranking for the HDI and other composite indices as well as current statistics in key areas of human development for use by policymakers, researchers and others in their analytical, planning and policy work. In addition to the standard HDR tables, statistical dashboards are included to draw attention to the relationship between human well-being and five topics: quality of human development, life-course gender gaps, women's empowerment, environmental sustainability and socioeconomic sustainability. Accompanying the statistical annex is an overview of trends in human development, highlighting the considerable progress, but also the persistent deprivations and disparities.
This 2004 volume offers a broad overview of developments in the theory and applications of state space modeling. With fourteen chapters from twenty-three contributors, it offers a unique synthesis of state space methods and unobserved component models that are important in a wide range of subjects, including economics, finance, environmental science, medicine and engineering. The book is divided into four sections: introductory papers, testing, Bayesian inference and the bootstrap, and applications. It will give those unfamiliar with state space models a flavour of the work being carried out as well as providing experts with valuable state of the art summaries of different topics. Offering a useful reference for all, this accessible volume makes a significant contribution to the literature of this discipline.
Probability and Bayesian Modeling is an introduction to probability and Bayesian thinking for undergraduate students with a calculus background. The first part of the book provides a broad view of probability including foundations, conditional probability, discrete and continuous distributions, and joint distributions. Statistical inference is presented completely from a Bayesian perspective. The text introduces inference and prediction for a single proportion and a single mean from Normal sampling. After fundamentals of Markov Chain Monte Carlo algorithms are introduced, Bayesian inference is described for hierarchical and regression models including logistic regression. The book presents several case studies motivated by some historical Bayesian studies and the authors' research. This text reflects modern Bayesian statistical practice. Simulation is introduced in all the probability chapters and extensively used in the Bayesian material to simulate from the posterior and predictive distributions. One chapter describes the basic tenets of Metropolis and Gibbs sampling algorithms; however, several chapters introduce the fundamentals of Bayesian inference for conjugate priors to deepen understanding. Strategies for constructing prior distributions are described in situations when one has substantial prior information and for cases where one has weak prior knowledge. One chapter introduces hierarchical Bayesian modeling as a practical way of combining data from different groups. There is an extensive discussion of Bayesian regression models including the construction of informative priors, inference about functions of the parameters of interest, prediction, and model selection. The text uses JAGS (Just Another Gibbs Sampler) as a general-purpose computational method for simulating from posterior distributions for a variety of Bayesian models.
An R package ProbBayes is available containing all of the book's datasets and special functions for illustrating concepts from the book. A complete solutions manual is available in the Additional Resources section for instructors who adopt the book.
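As a small illustration of the conjugate-prior fundamentals the blurb describes, here is a Beta-Binomial update followed by posterior simulation. The counts are invented, and plain Python stands in for the book's JAGS/R tooling.

```python
import random

# Conjugate Beta-Binomial update: with a Beta(a, b) prior on a proportion p
# and y successes observed in n trials, the posterior is Beta(a + y, b + n - y).
# The counts below are illustrative, not a dataset from the book.

def beta_binomial_posterior(a, b, y, n):
    """Return the parameters of the Beta posterior."""
    return a + y, b + n - y

a_post, b_post = beta_binomial_posterior(a=1, b=1, y=12, n=20)
post_mean = a_post / (a_post + b_post)  # exact posterior mean: 13/22

# Simulate from the posterior, mirroring the simulation-based workflow:
random.seed(0)
draws = [random.betavariate(a_post, b_post) for _ in range(5000)]
print(round(post_mean, 3), round(sum(draws) / len(draws), 3))
```

The simulated mean approaches the exact posterior mean as the number of draws grows, which is the same logic that justifies summarizing MCMC output by sample averages.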
Over the last thirty years there has been extensive use of continuous time econometric methods in macroeconomic modelling. This monograph presents a continuous time macroeconometric model of the United Kingdom incorporating stochastic trends. Its development represents a major step forward in continuous time macroeconomic modelling. The book describes the model in detail and, like earlier models, it is designed in such a way as to permit a rigorous mathematical analysis of its steady-state and stability properties, thus providing a valuable check on the capacity of the model to generate plausible long-run behaviour. The model is estimated using newly developed exact Gaussian estimation methods for continuous time econometric models incorporating unobservable stochastic trends. The book also includes discussion of the application of the model to dynamic analysis and forecasting.
Price and quantity indices are important, much-used measuring instruments, and it is therefore necessary to have a good understanding of their properties. When it was published, this book was the first comprehensive text on index number theory since Irving Fisher's 1922 The Making of Index Numbers. The book covers intertemporal and interspatial comparisons; ratio- and difference-type measures; discrete and continuous time environments; and upper- and lower-level indices. Guided by economic insights, this book develops the instrumental or axiomatic approach. There is no role for behavioural assumptions. In addition to subject matter chapters, two entire chapters are devoted to the rich history of the subject.
Since 2005, capital-market-oriented companies in the EU have been required to prepare their consolidated financial statements under the International Financial Reporting Standards (IFRS). The aim is to provide the users of financial reports with high-quality information about a company's economic position. It is questionable, however, whether the introduction of IFRS alone can achieve this, since financial reporting practice is shaped not only by standards but also by institutional factors. Against this background, the author uses selected properties of earnings figures to examine the extent to which the switch to IFRS reveals a qualitative change in financial reporting practice in selected EU countries. Building on this, he investigates the extent to which particular companies have special incentives to produce high-quality IFRS reporting.
This book is intended to provide the reader with a firm conceptual and empirical understanding of basic information-theoretic econometric models and methods. Because most data are observational, practitioners work with indirect noisy observations and ill-posed econometric models in the form of stochastic inverse problems. Consequently, traditional econometric methods in many cases are not applicable for answering many of the quantitative questions that analysts wish to ask. After initial chapters deal with parametric and semiparametric linear probability models, the focus turns to solving nonparametric stochastic inverse problems. In succeeding chapters, a family of power divergence measure likelihood functions is introduced for a range of traditional and nontraditional econometric-model problems. Finally, within either an empirical maximum likelihood or loss context, Ron C. Mittelhammer and George G. Judge suggest a basis for choosing a member of the divergence family.
Published in 1932, this is the third edition of an original 1922 volume. The 1922 volume was, in turn, created as the replacement for the Institute of Actuaries Textbook, Part Three, which was the foremost source of knowledge on the subject of life contingencies for over 35 years. Assuming a high level of mathematical knowledge on the part of the reader, it was aimed chiefly at actuarial students and those with a professional interest in the relationship between statistics and mortality. Highly organised and containing numerous mathematical formulae, this book will remain of value to anyone with an interest in risk calculation and the development of the insurance industry.
This edition features a new chapter on univariate volatility models; a revised chapter on linear time series models; a new section on multivariate volatility models; a new section on regime switching models; and many new worked examples, with R code integrated into the text.
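As a hedged illustration of the univariate volatility models listed above, here is a minimal GARCH(1,1) simulation. The parameter values are invented for the example, and Python is used here although the book itself works in R.

```python
import random

# Simulate a GARCH(1,1) process:
#   sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},  r_t = sigma_t * z_t,
# with z_t standard normal. Parameters are illustrative; alpha + beta < 1 ensures
# a finite unconditional variance omega / (1 - alpha - beta).

def simulate_garch(n, omega=0.1, alpha=0.1, beta=0.8, seed=42):
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)  # start at the unconditional variance (here 1.0)
    returns = []
    for _ in range(n):
        r = (sigma2 ** 0.5) * rng.gauss(0, 1)
        returns.append(r)
        sigma2 = omega + alpha * r * r + beta * sigma2  # variance responds to shocks
    return returns

rets = simulate_garch(1000)
sample_var = sum(r * r for r in rets) / len(rets)
print(round(sample_var, 2))  # should be roughly near the unconditional variance 1.0
```

The persistence of sigma2 produces the volatility clustering that makes GARCH-type models the standard baseline for financial return series.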
Logistic models are widely used in economics and other disciplines and are easily available as part of many statistical software packages. This text for graduates, practitioners and researchers in economics, medicine and statistics, which was originally published in 2003, explains the theory underlying logit analysis and gives a thorough explanation of the technique of estimation. The author has provided many empirical applications as illustrations and worked examples. A large data set - drawn from Dutch car ownership statistics - is provided online for readers to practise the techniques they have learned. Several varieties of logit model have been developed independently in various branches of biology, medicine and other disciplines. This book takes its inspiration from logit analysis as it is practised in economics, but it also pays due attention to developments in these other fields.
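A minimal sketch of the estimation technique the blurb refers to: fitting a logit model by Newton-Raphson on the log-likelihood. The six data points are made up for illustration, not the Dutch car-ownership dataset the book provides online.

```python
import math

# Fit P(y=1|x) = 1 / (1 + exp(-(b0 + b1*x))) by Newton-Raphson (equivalently,
# Fisher scoring, since the logit link is canonical). Toy data, not from the book.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0,   0,   1,   0,   1,   1]

b0, b1 = 0.0, 0.0
for _ in range(25):
    # Accumulate the gradient (g0, g1) and 2x2 Hessian of the log-likelihood.
    g0 = g1 = h00 = h01 = h11 = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
        g0 += yi - p
        g1 += (yi - p) * xi
        w = p * (1.0 - p)
        h00 += w
        h01 += w * xi
        h11 += w * xi * xi
    det = h00 * h11 - h01 * h01
    # Newton step: beta <- beta + H^{-1} g (H is the negative Hessian here).
    b0 += (h11 * g0 - h01 * g1) / det
    b1 += (h00 * g1 - h01 * g0) / det

p_at = lambda xv: 1.0 / (1.0 + math.exp(-(b0 + b1 * xv)))
print(round(p_at(0), 2), round(p_at(5), 2))  # low fitted probability at x=0, high at x=5
```

In practice one would use a statistics package rather than hand-coded Newton steps, but the loop above is exactly the iteration such packages perform for this model.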
The fastest, easiest, most comprehensive way to learn Adobe XD CC. Classroom in a Book (R), the best-selling series of hands-on software training workbooks, offers what no other book or training program does - an official training series from Adobe, developed with the support of Adobe product experts. Adobe XD CC Classroom in a Book (2018 release) contains 10 lessons that cover the basics and beyond, providing countless tips and techniques to help you become more productive with the program. You can follow the book from start to finish or choose only those lessons that interest you. Purchasing this book includes valuable online extras. Follow the instructions in the book's "Getting Started" section to unlock access to: downloadable lesson files you need to work through the projects in the book, and a Web Edition containing the complete text of the book, interactive quizzes, videos that walk you through the lessons step by step, and updated material covering new feature releases from Adobe. What you need to use this book: Adobe XD CC (2018 release) software, for either Windows or macOS (software not included). Note: Classroom in a Book does not replace the documentation, support, updates, or any other benefits of being a registered owner of Adobe XD CC software.
What do we mean by inequality comparisons? If the rich just get richer and the poor get poorer, the answer might seem easy. But what if the income distribution changes in a complicated way? Can we use mathematical or statistical techniques to simplify the comparison problem in a way that has economic meaning? What does it mean to measure inequality? Is it similar to National Income? Or a price index? Is it enough just to work out the Gini coefficient?
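The Gini coefficient mentioned above can be computed directly as the mean absolute difference between all pairs of incomes, divided by twice the mean income. The income vectors below are illustrative.

```python
# Gini coefficient of an income list: 0 means perfect equality; values approach 1
# as a single person holds everything. Computed from the pairwise mean absolute
# difference, one of several equivalent definitions.

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mean)

print(gini([1, 1, 1, 1]))            # 0.0: everyone has the same income
print(round(gini([0, 0, 0, 4]), 2))  # 0.75: one person holds all the income
```

Even this small example hints at the blurb's question: a single summary number orders distributions, but it cannot say how a complicated distributional change came about.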
This book is a collection of essays written in honor of Professor Peter C. B. Phillips of Yale University by some of his former students. The essays analyze a number of important issues in econometrics, all of which Professor Phillips has directly influenced through his seminal scholarly contributions as well as through his remarkable achievements as a teacher. The essays are organized to cover topics in higher-order asymptotics, deficient instruments, nonstationarity, LAD and quantile regression, and nonstationary panels. These topics span both theoretical and applied approaches and are intended for use by professionals and advanced graduate students.