This book investigates why economics makes less visible progress over time than scientific fields with a strong practical component, where interactions with physical technologies play a key role. The thesis of the book is that the main impediment to progress in economics is "false feedback", which it defines as the false result of an empirical study, such as empirical evidence produced by a statistical model that violates some of its assumptions. In contrast to scientific fields that work with physical technologies, false feedback is hard to recognize in economics. Economists thus have difficulties knowing where they stand in their inquiries, and false feedback will regularly lead them in the wrong directions. The book searches for the reasons behind the emergence of false feedback. It thereby contributes to a wider discussion in the field of metascience about the practices of researchers when pursuing their daily business. The book thus offers a case study of metascience for the field of empirical economics. The main strength of the book is the numerous smaller insights it provides throughout. The book delves into deep discussions of various theoretical issues, which it illustrates with many applied examples and a wide array of references, especially to the philosophy of science. The book puts flesh on complicated and often abstract subjects, particularly when it comes to controversial topics such as p-hacking. The reader gains an understanding of the main challenges present in empirical economic research and also of the possible solutions. The main audience of the book is applied researchers working with data and, in particular, those who have found certain aspects of their research practice problematic.
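As a hedged illustration of the "false feedback" idea described above (an example of my own, not taken from the book): regressing one random walk on an unrelated random walk violates the usual OLS error assumptions and routinely produces a "significant" coefficient, i.e. a false empirical result.

```python
# Hypothetical illustration of "false feedback": a spurious regression.
# Two independent random walks violate the OLS assumption of stationary,
# uncorrelated errors, so the t-statistic is wildly overstated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.normal(size=n))   # random walk 1
y = np.cumsum(rng.normal(size=n))   # independent random walk 2

res = sm.OLS(y, sm.add_constant(x)).fit()
# Often a tiny p-value despite there being no true relationship between x and y.
print(res.params[1], res.pvalues[1])
```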
Much of our thinking is flawed because it is based on faulty intuition. By using the framework and tools of probability and statistics, we can overcome this to provide solutions to many real-world problems and paradoxes. We show how to do this, and find answers that are frequently very contrary to what we might expect. Along the way, we venture into diverse realms and thought experiments which challenge the way that we see the world. Features:
- An insightful and engaging discussion of some of the key ideas of probabilistic and statistical thinking
- Many classic and novel problems, paradoxes, and puzzles
- An exploration of some of the big questions involving the use of choice and reason in an uncertain world
- The application of probability, statistics, and Bayesian methods to a wide range of subjects, including economics, finance, law, and medicine
- Exercises, references, and links for those wishing to cross-reference or to probe further
- Solutions to exercises at the end of the book
This book should serve as an invaluable and fascinating resource for university, college, and high school students who wish to extend their reading, as well as for teachers and lecturers who want to liven up their courses while retaining academic rigour. It will also appeal to anyone who wishes to develop skills with numbers or has an interest in the many statistical and other paradoxes that permeate our lives. Indeed, anyone studying the sciences, social sciences, or humanities on a formal or informal basis will enjoy and benefit from this book.
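To give a flavour of the counterintuitive results such a book deals with, here is a short Monte Carlo check of the classic Monty Hall problem (an assumed example, not necessarily one from this book): switching doors wins about two thirds of the time.

```python
# Monte Carlo check of the Monty Hall problem: switching doors wins ~2/3 of the time.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that holds neither the car nor the contestant's pick.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # ~1/3
print("switch:", play(switch=True))    # ~2/3
```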
The book provides an integrated approach to risk sharing, risk spreading and efficient regulation through principal agent models. It emphasizes the role of information asymmetry and risk sharing in contracts as an alternative to transaction cost considerations. It examines how contracting, as an institutional mechanism to conduct transactions, spreads risks while attempting consolidation. It further highlights the shifting emphasis in contracts from Coasian transaction cost saving to risk sharing and shows how it creates difficulties associated with risk spreading, and emphasizes the need for efficient regulation of contracts at various levels. Each of the chapters is structured using a principal agent model, and all chapters incorporate adverse selection (and exogenous randomness) as a result of information asymmetry, as well as moral hazard (and endogenous randomness) due to the self-interest-seeking behavior on the part of the participants.
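A toy numerical illustration of the risk-sharing versus incentive trade-off at the heart of principal-agent models (the numbers and functional forms are invented, not the book's models): a fixed wage fully insures a risk-averse agent but gives no incentive for effort, while output-contingent pay restores incentives at the cost of exposing the agent to risk.

```python
# Toy principal-agent illustration (hypothetical numbers): output is 200 ("good")
# or 40 ("bad"); high effort raises the probability of the good outcome but costs
# the risk-averse agent (square-root utility) a disutility of 2.
import math

p_good = {"low_effort": 0.3, "high_effort": 0.8}
output = {"good": 200.0, "bad": 40.0}
effort_cost = {"low_effort": 0.0, "high_effort": 2.0}

def agent_utility(wage_good, wage_bad, effort):
    p = p_good[effort]
    return p * math.sqrt(wage_good) + (1 - p) * math.sqrt(wage_bad) - effort_cost[effort]

def principal_profit(wage_good, wage_bad, effort):
    p = p_good[effort]
    return p * (output["good"] - wage_good) + (1 - p) * (output["bad"] - wage_bad)

# Fixed wage of 36: full insurance for the agent, but high effort is not incentive
# compatible (IC fails).
print("fixed wage, IC holds? ",
      agent_utility(36, 36, "high_effort") >= agent_utility(36, 36, "low_effort"))
# Output-contingent pay (81 if good, 16 if bad): the agent bears risk, but high
# effort becomes the agent's best response.
print("bonus contract, IC holds?",
      agent_utility(81, 16, "high_effort") >= agent_utility(81, 16, "low_effort"))
print("principal's expected profit:", principal_profit(81, 16, "high_effort"))
```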
A comprehensive account of economic size distributions around the world and throughout the years. In the course of the past 100 years, economists and applied statisticians have developed a remarkably diverse variety of income distribution models, yet no single resource convincingly accounts for all of these models, analyzing their strengths and weaknesses, similarities and differences. Statistical Size Distributions in Economics and Actuarial Sciences is the first collection to systematically investigate a wide variety of parametric models that deal with income, wealth, and related notions. Christian Kleiber and Samuel Kotz survey, complement, compare, and unify all of the disparate models of income distribution, highlighting at times a lack of coordination between them that can result in unnecessary duplication. Considering models from eight languages and all continents, the authors discuss the social and economic implications of each, as well as distributions of size of loss in actuarial applications. Specific models covered include:
Three appendices provide brief biographies of some of the leading players along with the basic properties of each of the distributions. Actuaries, economists, market researchers, social scientists, and physicists interested in econophysics will find Statistical Size Distributions in Economics and Actuarial Sciences to be a truly one-of-a-kind addition to the professional literature.
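The specific list of models is not reproduced here. Purely as a rough illustration of the kind of parametric size distribution such a survey covers, the snippet below fits a lognormal model and a Pareto upper tail to simulated "income" data with scipy.stats; the data and parameters are invented.

```python
# Illustrative fit of two classic size distributions to simulated income data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
incomes = rng.lognormal(mean=10.0, sigma=0.6, size=5_000)   # fake income sample

# Lognormal fit (location pinned at 0, as is usual for income data).
shape, loc, scale = stats.lognorm.fit(incomes, floc=0)
print("lognormal sigma and median:", shape, scale)

# Pareto fit to the upper tail (top 10%), a common model for high incomes.
tail = incomes[incomes >= np.quantile(incomes, 0.9)]
b, loc, scale = stats.pareto.fit(tail, floc=0)
print("estimated Pareto tail index:", b)
```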
There isn't currently a book on the market which focuses on multiple hypothesis testing. - Can be used on a range of courses across the social & behavioral sciences and the biological sciences, as well as by professional researchers. - Includes various examples of multiple hypothesis testing in practice in a variety of fields, including sport and crime.
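A brief sketch of the basic idea (not taken from the book): when many hypotheses are tested at once, raw p-values must be adjusted, here with Bonferroni and Benjamini-Hochberg corrections from statsmodels; the p-values are hypothetical.

```python
# Multiple hypothesis testing: adjust raw p-values for the number of tests.
from statsmodels.stats.multitest import multipletests

raw_p = [0.001, 0.008, 0.020, 0.041, 0.300, 0.650]  # hypothetical p-values

for method in ("bonferroni", "fdr_bh"):
    reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in adj_p], list(reject))
```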
Time Series: A First Course with Bootstrap Starter provides an introductory course on time series analysis that satisfies the triptych of (i) mathematical completeness, (ii) computational illustration and implementation, and (iii) conciseness and accessibility to upper-level undergraduate and M.S. students. Basic theoretical results are presented in a mathematically convincing way, and the methods of data analysis are developed through examples and exercises parsed in R. A student with a basic course in mathematical statistics will learn both how to analyze time series and how to interpret the results. The book provides the foundation of time series methods, including linear filters and a geometric approach to prediction. The important paradigm of ARMA models is studied in-depth, as well as frequency domain methods. Entropy and other information theoretic notions are introduced, with applications to time series modeling. The second half of the book focuses on statistical inference, the fitting of time series models, as well as computational facets of forecasting. Many time series of interest are nonlinear, in which case classical inference methods can fail, but bootstrap methods may come to the rescue. Distinctive features of the book are the emphasis on geometric notions and the frequency domain, the discussion of entropy maximization, and a thorough treatment of recent computer-intensive methods for time series such as subsampling and the bootstrap. There are more than 600 exercises, half of which involve R coding and/or data analysis. Supplements include a website with 12 key data sets and all R code for the book's examples, as well as the solutions to exercises.
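A small sketch, in Python rather than the book's R, of the flavour of a model-based bootstrap for time series: fit an AR(1) model, resample its residuals, rebuild the series, and re-estimate the coefficient to approximate its sampling distribution. The data and model order are invented, and this is not the book's own procedure.

```python
# Residual (model-based) bootstrap of an AR(1) coefficient.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
n = 300
x = np.zeros(n)
for t in range(1, n):                      # simulate an AR(1) with phi = 0.6
    x[t] = 0.6 * x[t - 1] + rng.normal()

fit = AutoReg(x, lags=1).fit()
phi_hat = fit.params[1]                    # estimated AR coefficient
resid = fit.resid

boot = []
for _ in range(500):
    e = rng.choice(resid, size=n, replace=True)   # resample residuals
    xb = np.zeros(n)
    for t in range(1, n):
        xb[t] = phi_hat * xb[t - 1] + e[t]        # rebuild a pseudo-series
    boot.append(AutoReg(xb, lags=1).fit().params[1])

print("phi_hat:", round(phi_hat, 3),
      "bootstrap 95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```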
Introduction to Statistical Decision Theory: Utility Theory and Causal Analysis provides the theoretical background to approach decision theory from a statistical perspective. It covers both traditional approaches, in terms of value theory and expected utility theory, and recent developments, in terms of causal inference. The book is specifically designed to appeal to students and researchers that intend to acquire a knowledge of statistical science based on decision theory. Features:
- Covers approaches for making decisions under certainty, risk, and uncertainty
- Illustrates expected utility theory and its extensions
- Describes approaches to elicit the utility function
- Reviews classical and Bayesian approaches to statistical inference based on decision theory
- Discusses the role of causal analysis in statistical decision theory
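A minimal example (numbers invented, not from the book) of the expected-utility comparison that underlies decision making under risk: a risk-averse decision maker with log utility compares a sure payoff with a riskier lottery of higher expected value.

```python
# Expected utility under risk: a log-utility decision maker compares a sure payoff
# with a 50/50 lottery that has a higher expected value but more risk.
import math

def expected_utility(outcomes_probs, u=math.log):
    return sum(p * u(x) for x, p in outcomes_probs)

sure_thing = [(90.0, 1.0)]
lottery = [(200.0, 0.5), (20.0, 0.5)]          # expected value 110 > 90

eu_sure = expected_utility(sure_thing)
eu_lottery = expected_utility(lottery)
certainty_equivalent = math.exp(eu_lottery)     # inverse of log utility

print("EU sure:", round(eu_sure, 3), "EU lottery:", round(eu_lottery, 3))
print("certainty equivalent of lottery:", round(certainty_equivalent, 2))
# The risk-averse agent prefers the sure 90 despite the lottery's higher mean.
```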
1. This book is applicable to a wide range of quantitative methods courses across the social and behavioral sciences. 2. The book is based solely on Stata for EFA - one of the top statistics software packages used in the behavioral and social sciences. 3. Clear step-by-step guidance combined with screenshots showing how to apply EFA to real data.
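The book itself works in Stata. Purely as a rough analogue (my own sketch, not the book's code or workflow), the same kind of exploratory factor analysis can be illustrated in Python with scikit-learn on simulated indicator data.

```python
# Rough EFA-style sketch: recover two latent factors from six correlated indicators.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1_000
f = rng.normal(size=(n, 2))                         # two latent factors
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
X = f @ loadings.T + 0.5 * rng.normal(size=(n, 6))  # six observed indicators

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))                # estimated factor loadings
```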
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for the construction of optimum designs using Hadamard matrices, the Kronecker product, Rao-Khatri product, mixed orthogonal arrays to name a few.
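As a hedged illustration of two of the combinatorial building blocks mentioned above (and not one of the monograph's own constructions), the snippet below forms a larger Hadamard matrix from smaller ones via the Kronecker product, the kind of step used when assembling design matrices.

```python
# Building a larger Hadamard matrix via the Kronecker product of smaller ones.
import numpy as np
from scipy.linalg import hadamard

H2 = hadamard(2)            # 2 x 2 Hadamard matrix
H4 = hadamard(4)            # 4 x 4 Hadamard matrix
H8 = np.kron(H2, H4)        # Kronecker product yields an 8 x 8 Hadamard matrix

# Check the defining property H H^T = n I.
print(np.array_equal(H8 @ H8.T, 8 * np.eye(8, dtype=int)))
```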
From the Foreword: "Big Data Management and Processing is [a] state-of-the-art book that deals with a wide range of topical themes in the field of Big Data. The book, which probes many issues related to this exciting and rapidly growing field, covers processing, management, analytics, and applications... [It] is a very valuable addition to the literature. It will serve as a source of up-to-date research in this continuously developing area. The book also provides an opportunity for researchers to explore the use of advanced computing technologies and their impact on enhancing our capabilities to conduct more sophisticated studies." --Sartaj Sahni, University of Florida, USA. "Big Data Management and Processing covers the latest Big Data research results in processing, analytics, management and applications. Both fundamental insights and representative applications are provided. This book is a timely and valuable resource for students, researchers and seasoned practitioners in Big Data fields." --Hai Jin, Huazhong University of Science and Technology, China. Big Data Management and Processing explores a range of big data related issues and their impact on the design of new computing systems. The twenty-one chapters were carefully selected and feature contributions from several outstanding researchers. The book endeavors to strike a balance between theoretical and practical coverage of innovative problem-solving techniques for a range of platforms. It serves as a repository of paradigms, technologies, and applications that target different facets of big data computing systems. The first part of the book explores energy and resource management issues, as well as legal compliance and quality management for Big Data. It covers In-Memory computing and In-Memory data grids, as well as co-scheduling for high performance computing applications. The second part of the book includes comprehensive coverage of Hadoop and Spark, along with security, privacy, and trust challenges and solutions. The latter part of the book covers mining and clustering in Big Data, and includes applications in genomics, hospital big data processing, and vehicular cloud computing. The book also analyzes funding for Big Data projects.
Risk Measures and Insurance Solvency Benchmarks: Fixed-Probability Levels in Renewal Risk Models is written for academics and practitioners who are concerned about potential weaknesses of the Solvency II regulatory system. It is also intended for readers who are interested in pure and applied probability, have a taste for classical and asymptotic analysis, and are motivated to delve into rather intensive calculations. The formal prerequisite for this book is a good background in analysis. The desired prerequisite is some degree of probability training, but someone with knowledge of the classical real-variable theory, including asymptotic methods, will also find this book interesting. For those who find the proofs too complicated, it may be reassuring that most results in this book are formulated in rather elementary terms. This book can also be used as reading material for basic courses in risk measures, insurance mathematics, and applied probability. The material of this book was partly used by the author for his courses at several universities in Moscow, at Copenhagen University, and at the University of Montreal. Features:
- Requires only minimal mathematical prerequisites in analysis and probability
- Suitable for researchers and postgraduate students in related fields
- Could be used as a supplement to courses in risk measures, insurance mathematics and applied probability.
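A hedged Monte Carlo sketch, with invented parameters and a deliberate simplification of the renewal models the book treats: a finite-horizon ruin probability in the classical compound-Poisson risk model, where premiums arrive continuously and claims arrive as a Poisson process with exponential sizes.

```python
# Monte Carlo estimate of a finite-horizon ruin probability in the classical
# compound-Poisson (Cramer-Lundberg) risk model -- a simplified illustration.
import numpy as np

rng = np.random.default_rng(7)
u0, premium_rate = 10.0, 1.2        # initial capital, premium income per unit time
claim_rate, mean_claim = 1.0, 1.0   # Poisson claim intensity, exponential claim mean
horizon, n_paths = 100.0, 5_000

ruined = 0
for _ in range(n_paths):
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / claim_rate)       # time until the next claim
        if t > horizon:
            break
        claims += rng.exponential(mean_claim)        # size of that claim
        if u0 + premium_rate * t - claims < 0:       # surplus drops below zero
            ruined += 1
            break

print("estimated ruin probability within the horizon:", ruined / n_paths)
```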
This monograph provides a unified and comprehensive treatment of an order-theoretic fixed point theory in partially ordered sets and its various useful interactions with topological structures. The material progresses systematically, presenting the preliminaries before moving to more advanced topics. In the treatment of the applications, a wide range of mathematical theories and methods from nonlinear analysis and integration theory are applied; an outline of these is given in an appendix chapter to make the book self-contained. Graduate students and researchers in nonlinear analysis, pure and applied mathematics, game theory and mathematical economics will find this book useful.
This book addresses both theoretical developments in and practical applications of econometric techniques to finance-related problems. It includes selected edited outcomes of the International Econometric Conference of Vietnam (ECONVN2018), held at Banking University, Ho Chi Minh City, Vietnam, on January 15-16, 2018. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. An extremely important part of economics is finance: a financial crisis can bring the whole economy to a standstill and, vice versa, a smart financial policy can dramatically boost economic development. It is therefore crucial to be able to apply the mathematical techniques of econometrics to financial problems. Such applications are a growing field, with many interesting results - and an even larger number of challenges and open problems.
This volume comprises the classic articles on methods of identification and estimation of simultaneous equations econometric models. It includes path-breaking contributions by Trygve Haavelmo and Tjalling Koopmans, who founded the subject and received Nobel prizes for their work. It presents original articles that developed and analysed the leading methods for estimating the parameters of simultaneous equations systems: instrumental variables, indirect least squares, generalized least squares, two-stage and three-stage least squares, and maximum likelihood. Many of the articles are not readily accessible to readers in any other form.
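As a hedged illustration of one of the methods collected in such a volume (simulated data, not drawn from any of the articles), two-stage least squares can be sketched by hand: regress the endogenous regressor on the instrument, then use its fitted values in the structural equation. Note that the second-stage standard errors printed this way are not corrected; a dedicated IV routine would handle that.

```python
# Hand-rolled two-stage least squares on simulated data with one endogenous
# regressor x, one instrument z, and a confounder that makes plain OLS biased.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5_000
z = rng.normal(size=n)                                # instrument
confounder = rng.normal(size=n)
x = 0.8 * z + confounder + rng.normal(size=n)         # endogenous regressor
y = 2.0 * x + 3.0 * confounder + rng.normal(size=n)   # true coefficient on x is 2

ols = sm.OLS(y, sm.add_constant(x)).fit()

# Stage 1: project x on the instrument; Stage 2: regress y on the fitted values.
x_hat = sm.OLS(x, sm.add_constant(z)).fit().fittedvalues
tsls = sm.OLS(y, sm.add_constant(x_hat)).fit()

print("OLS estimate (biased):", round(ols.params[1], 3))
print("2SLS estimate:        ", round(tsls.params[1], 3))
```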
Advances in Econometrics is a research annual whose editorial policy is to publish original research articles that contain enough details so that economists and econometricians who are not experts in the topics will find them accessible and useful in their research. Volume 37 exemplifies this focus by highlighting key research from new developments in econometrics.
This book presents selected peer-reviewed contributions from the International Work-Conference on Time Series, ITISE 2017, held in Granada, Spain, September 18-20, 2017. It discusses topics in time series analysis and forecasting, including advanced mathematical methodology, computational intelligence methods for time series, dimensionality reduction and similarity measures, econometric models, energy time series forecasting, forecasting in real problems, online learning in time series as well as high-dimensional and complex/big data time series. The series of ITISE conferences provides a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing computer science, mathematics, statistics and econometrics.
This book explores Latin American inequality broadly in terms of its impact on the region's development, and specifically through two country studies from Peru, one on earnings inequality and one on child labor as a consequence of inequality. The first chapter provides a substantial, up-to-date analysis of the critical thesis of deindustrialization for Latin America. The second chapter provides an approach to measuring labor market discrimination that departs from the current treatment of unobservable influences in the literature. The third chapter examines the much-neglected topic of child labor using a panel data set specifically on children. The book is appropriate for courses on economic development and labor economics and for anyone interested in inequality, development and applied econometrics.
* Includes many mathematical examples and problems for students to work directly with both standard and nonstandard models of behaviour, developing problem-solving and critical-thinking skills that are more valuable to students than memorizing content that will quickly be forgotten. * The applications explored in the text emphasise issues of inequality, social mobility, culture and poverty to demonstrate the impact of behavioral economics in the areas students are most passionate about. * The text has a standardized structure (6 parts, 3 chapters in each) which provides a clear and consistent roadmap for students taking the course.
The second edition of this widely acclaimed text presents a thoroughly up-to-date intuitive account of recent developments in econometrics. It continues to present the frontiers of research in an accessible form for non-specialist econometricians, advanced undergraduates and graduate students wishing to carry out applied econometric research. This new edition contains substantially revised chapters on cointegration and vector autoregressive (VAR) modelling, reflecting the developments that have been made in these important areas since the first edition. Special attention is given to the Dickey-Pantula approach and the testing for the order of integration of a variable in the presence of a structural break. For VAR models, impulse response analysis is explained and illustrated. There is also a detailed but intuitive explanation of the Johansen method, an increasingly popular technique. The text contains specially constructed and original tables of critical values for a wide range of tests for stationarity and cointegration. These tables are for Dickey-Fuller tests, Dickey-Hasza-Fuller and HEGY seasonal integration tests and the Perron 'additive outlier' integration test.
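A small sketch (simulated series, not the book's tables, critical values or data) of two of the tests discussed above: an augmented Dickey-Fuller unit-root test and an Engle-Granger style cointegration test, both via statsmodels.

```python
# Unit-root and cointegration testing on simulated series.
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(5)
n = 400
x = np.cumsum(rng.normal(size=n))          # a random walk: integrated of order 1
y = 1.5 * x + rng.normal(size=n)           # cointegrated with x

adf_stat, adf_p, *_ = adfuller(x)
print("ADF p-value for x (should not reject a unit root):", round(adf_p, 3))

coint_stat, coint_p, _ = coint(y, x)
print("Engle-Granger cointegration p-value (should be small):", round(coint_p, 3))
```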
The book has been tested and refined through years of classroom teaching experience. With an abundance of examples, problems, and fully worked out solutions, the text introduces the financial theory and relevant mathematical methods in a mathematically rigorous yet engaging way. This textbook provides complete coverage of discrete-time financial models that form the cornerstones of financial derivative pricing theory. Unlike similar texts in the field, this one presents multiple problem-solving approaches, linking related comprehensive techniques for pricing different types of financial derivatives. Key features: In-depth coverage of discrete-time theory and methodology. Numerous, fully worked out examples and exercises in every chapter. Mathematically rigorous and consistent yet bridging various basic and more advanced concepts. Judicious balance of financial theory, mathematical, and computational methods. Guide to Material. This revision contains: Almost 200 pages worth of new material in all chapters. A new chapter on elementary probability theory. An expanded set of solved problems and additional exercises. Answers to all exercises. This book is a comprehensive, self-contained, and unified treatment of the main theory and application of mathematical methods behind modern-day financial mathematics.
Table of Contents: List of Figures and Tables; Preface; I Introduction to Pricing and Management of Financial Securities; 1 Mathematics of Compounding; 2 Primer on Pricing Risky Securities; 3 Portfolio Management; 4 Primer on Derivative Securities; II Discrete-Time Modelling; 5 Single-Period Arrow-Debreu Models; 6 Introduction to Discrete-Time Stochastic Calculus; 7 Replication and Pricing in the Binomial Tree Model; 8 General Multi-Asset Multi-Period Model; Appendices: A Elementary Probability Theory; B Glossary of Symbols and Abbreviations; C Answers and Hints to Exercises; References; Index.
Biographies: Giuseppe Campolieti is Professor of Mathematics at Wilfrid Laurier University in Waterloo, Canada. He has been a Natural Sciences and Engineering Research Council postdoctoral research fellow and university research fellow at the University of Toronto. In 1998, he joined the Masters in Mathematical Finance program as an instructor and later served as an adjunct professor in financial mathematics until 2002. Dr. Campolieti also founded a financial software and consulting company in 1998. He joined Laurier in 2002 as Associate Professor of Mathematics and as SHARCNET Chair in Financial Mathematics. Roman N. Makarov is Associate Professor and Chair of Mathematics at Wilfrid Laurier University. Prior to joining Laurier in 2003, he was an Assistant Professor of Mathematics at Siberian State University of Telecommunications and Informatics and a senior research fellow at the Laboratory of Monte Carlo Methods at the Institute of Computational Mathematics and Mathematical Geophysics in Novosibirsk, Russia.
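A hedged sketch of the kind of discrete-time pricing such a text covers: a Cox-Ross-Rubinstein binomial tree for a European call, priced by backward induction under the risk-neutral measure. The parameters are illustrative, and this is not the book's own code.

```python
# European call priced on a Cox-Ross-Rubinstein binomial tree by backward induction.
import math

def crr_call(S0, K, r, sigma, T, steps):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))          # up factor
    d = 1.0 / u                                  # down factor
    q = (math.exp(r * dt) - d) / (u - d)         # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Terminal payoffs at each node of the final step.
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]

    # Backward induction: discounted risk-neutral expectation at each earlier node.
    for _ in range(steps):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

print(round(crr_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=500), 4))
# Converges toward the Black-Scholes value of roughly 10.45 as the step count grows.
```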
Modelling Spatial and Spatial-Temporal Data: A Bayesian Approach is aimed at statisticians and quantitative social, economic and public health students and researchers who work with small-area spatial and spatial-temporal data. It assumes a grounding in statistical theory up to the standard linear regression model. The book compares both hierarchical and spatial econometric modelling, providing both a reference and a teaching text with exercises in each chapter. The book provides a fully Bayesian, self-contained, treatment of the underlying statistical theory, with chapters dedicated to substantive applications. The book includes WinBUGS code and R code and all datasets are available online. Part I covers fundamental issues arising when modelling spatial and spatial-temporal data. Part II focuses on modelling cross-sectional spatial data and begins by describing exploratory methods that help guide the modelling process. There are then two theoretical chapters on Bayesian models and a chapter of applications. Two chapters follow on spatial econometric modelling, one describing different models, the other substantive applications. Part III discusses modelling spatial-temporal data, first introducing models for time series data. Exploratory methods for detecting different types of space-time interaction are presented, followed by two chapters on the theory of space-time separable (without space-time interaction) and inseparable (with space-time interaction) models. An applications chapter includes: the evaluation of a policy intervention; analysing the temporal dynamics of crime hotspots; chronic disease surveillance; and testing for evidence of spatial spillovers in the spread of an infectious disease. A final chapter suggests some future directions and challenges. Robert Haining is Emeritus Professor in Human Geography, University of Cambridge, England. He is the author of Spatial Data Analysis in the Social and Environmental Sciences (1990) and Spatial Data Analysis: Theory and Practice (2003). He is a Fellow of the RGS-IBG and of the Academy of Social Sciences. Guangquan Li is Senior Lecturer in Statistics in the Department of Mathematics, Physics and Electrical Engineering, Northumbria University, Newcastle, England. His research includes the development and application of Bayesian methods in the social and health sciences. He is a Fellow of the Royal Statistical Society.
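One exploratory spatial statistic commonly used at the stage the book describes (not necessarily the book's own choice, and certainly not its WinBUGS or R code) is Moran's I; a sketch computing it by hand on a toy lattice follows.

```python
# Moran's I for a small regular lattice with rook (edge-sharing) neighbours.
import numpy as np

def morans_i(values, weights):
    x = values - values.mean()
    num = (weights * np.outer(x, x)).sum()
    return (len(values) / weights.sum()) * num / (x @ x)

# Toy 4 x 4 lattice with a smooth spatial gradient (positive autocorrelation).
values = np.arange(16, dtype=float)

# Rook-contiguity weight matrix: 1 if two cells share an edge.
W = np.zeros((16, 16))
for i in range(4):
    for j in range(4):
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ni, nj = i + di, j + dj
            if 0 <= ni < 4 and 0 <= nj < 4:
                W[i * 4 + j, ni * 4 + nj] = 1.0

print("Moran's I:", round(morans_i(values, W), 3))   # well above the ~-0.07 null mean
```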
Volume 36 of Advances in Econometrics recognizes Aman Ullah's significant contributions in many areas of econometrics and celebrates his long, productive career. The volume features original papers on the theory and practice of econometrics related to the work of Aman Ullah. Topics include nonparametric/semiparametric econometrics; finite sample econometrics; shrinkage methods; information/entropy econometrics; model specification testing; robust inference; and panel/spatial models. Advances in Econometrics is a research annual whose editorial policy is to publish original research articles that contain enough details so that economists and econometricians who are not experts in the topics will find them accessible and useful in their research.
The composition of portfolios is one of the most fundamental and important methods in financial engineering, used to control the risk of investments. This book provides a comprehensive overview of statistical inference for portfolios and their various applications. A variety of asset processes are introduced, including non-Gaussian stationary processes, nonlinear processes, and non-stationary processes, and the book provides a framework for statistical inference using local asymptotic normality (LAN). The approach is generalized for portfolio estimation, so that many important problems can be covered. This book can primarily be used as a reference by researchers from statistics, mathematics, finance, econometrics, and genomics. It can also be used as a textbook by senior undergraduate and graduate students in these fields.
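A hedged sketch (toy covariance matrix, not the book's LAN-based inferential framework) of the most basic portfolio-composition step: the global minimum-variance weights w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).

```python
# Global minimum-variance portfolio weights from a toy covariance matrix.
import numpy as np

cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.090, 0.012],
                [0.010, 0.012, 0.160]])   # hypothetical asset return covariances

ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()                               # w = cov^{-1} 1 / (1' cov^{-1} 1)

print("weights:", np.round(w, 3))
print("portfolio variance:", round(float(w @ cov @ w), 5))
```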
The advent of "Big Data" has brought with it a rapid diversification of data sources, requiring analysis that accounts for the fact that these data have often been generated and recorded for different reasons. Data integration involves combining data residing in different sources to enable statistical inference, or to generate new statistical data for purposes that cannot be served by each source on its own. This can yield significant gains for scientific as well as commercial investigations. However, valid analysis of such data should allow for the additional uncertainty due to entity ambiguity, whenever it is not possible to state with certainty that the integrated source is the target population of interest. Analysis of Integrated Data aims to provide a solid theoretical basis for this statistical analysis in three generic settings of entity ambiguity: statistical analysis of linked datasets that may contain linkage errors; datasets created by a data fusion process, where joint statistical information is simulated using the information in marginal data from non-overlapping sources; and estimation of target population size when target units are either partially or erroneously covered in each source. Covers a range of topics under an overarching perspective of data integration. Focuses on statistical uncertainty and inference issues arising from entity ambiguity. Features state of the art methods for analysis of integrated data. Identifies the important themes that will define future research and teaching in the statistical analysis of integrated data. Analysis of Integrated Data is aimed primarily at researchers and methodologists interested in statistical methods for data from multiple sources, with a focus on data analysts in the social sciences, and in the public and private sectors. |