Principles of Copula Theory explores the state of the art on copulas and provides you with the foundation to use copulas in a variety of applications. Throughout the book, historical remarks and further readings highlight active research in the field, including new results, streamlined presentations, and new proofs of old results. After covering the essentials of copula theory, the book addresses the issue of modeling dependence among components of a random vector using copulas. It then presents copulas from the point of view of measure theory, compares methods for the approximation of copulas, and discusses the Markov product for 2-copulas. The authors also examine selected families of copulas that possess appealing features from both theoretical and applied viewpoints. The book concludes with in-depth discussions on two generalizations of copulas: quasi- and semi-copulas. Although copulas are not the solution to all stochastic problems, they are an indispensable tool for understanding several problems about stochastic dependence. This book gives you the solid and formal mathematical background to apply copulas to a range of mathematical areas, such as probability, real analysis, measure theory, and algebraic structures.
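The book itself is measure-theoretic and contains no code, but the central idea it builds on (Sklar's theorem: a joint distribution separates into its margins plus a copula that carries all the dependence) can be illustrated with a small sketch. The snippet below is only a hedged illustration, not material from the book: it samples from a bivariate Gaussian copula with an arbitrarily chosen correlation of 0.7 and attaches exponential and lognormal margins, using NumPy and SciPy.

```python
# A minimal sketch (not from the book): sampling from a bivariate Gaussian
# copula and attaching arbitrary margins via Sklar's theorem.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7                                  # assumed dependence parameter
cov = np.array([[1.0, rho], [rho, 1.0]])

z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = stats.norm.cdf(z)                      # uniform margins: the copula sample

# Attach any margins we like; the dependence is carried entirely by the copula.
x = stats.expon(scale=2.0).ppf(u[:, 0])    # exponential margin
y = stats.lognorm(s=0.5).ppf(u[:, 1])      # lognormal margin

print(np.corrcoef(x, y)[0, 1])             # induced dependence between x and y
```

Because the dependence lives entirely in the copula, the same margins could be recombined with a different copula family (Clayton, Gumbel, and so on) to produce markedly different joint and tail behaviour.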
This volume, edited by Jeffrey Racine, Liangjun Su, and Aman Ullah, contains the latest research on nonparametric and semiparametric econometrics and statistics. These data-driven models seek to replace the "classical" parametric models of the past, which were rigid and often linear. Chapters by leading international econometricians and statisticians highlight the interface between econometrics and statistical methods for nonparametric and semiparametric procedures. They provide a balanced view of new developments in the analysis and modeling of applied sciences with cross-section, time series, panel, and spatial data sets. The major topics of the volume include: the methodology of semiparametric models and special regressor methods; inverse, ill-posed, and well-posed problems; different methodologies related to additive models; sieve regression estimators, nonparametric and semiparametric regression models, and the true error of competing approximate models; support vector machines and their modeling of default probability; series estimation of stochastic processes and some of their applications in econometrics; identification, estimation, and specification problems in a class of semilinear time series models; nonparametric and semiparametric techniques applied to nonstationary or near-nonstationary variables; the estimation of a set of regression equations; and a new approach to the analysis of nonparametric models with exogenous treatment assignment.
There isn't currently a book on the market that focuses on multiple hypotheses testing. Suitable for a range of courses across the social and behavioral sciences and the biological sciences, as well as for professional researchers. Includes various examples of multiple hypotheses testing in practice in a variety of fields, including sport and crime.
The behaviour of commodity prices never ceases to amaze economists, financial analysts, industry experts, and policymakers. Unexpected swings in commodity prices used to occur infrequently but have now become a permanent feature of global commodity markets. This book is about modelling commodity price shocks. It is intended to provide insights into the theoretical, conceptual, and empirical modelling of the underlying causes of global commodity price shocks. Three main objectives motivated the writing of this book. First, to provide a variety of modelling frameworks for documenting the frequency and intensity of commodity price shocks. Second, to evaluate existing approaches used for forecasting large movements in future commodity prices. Third, to cover a wide range of global commodities, including currencies, rare-hard-lustrous transition metals, agricultural commodities, energy, and health pandemics. Some attempts have already been made towards modelling commodity price shocks. However, most tend to focus narrowly on a subset of commodity markets, i.e., the agricultural commodities market and/or the energy market. In this book, the author moves the needle forward by operationalizing different models, which allow researchers to identify the underlying causes and effects of commodity price shocks. Readers also learn about different commodity price forecasting models. The author presents the topics assuming little prior or specialist knowledge. Thus, the book is accessible to industry analysts, researchers, undergraduate and graduate students in economics and financial economics, academic and professional economists, investors, and financial professionals working in different sectors of the commodity markets. Another advantage of the book's approach is that readers are not only exposed to several innovative modelling techniques to add to their modelling toolbox but also to diverse empirical applications of the techniques presented.
"A book perfect for this moment" -Katherine M. O'Regan, Former Assistant Secretary, US Department of Housing and Urban Development More than fifty years after the passage of the Fair Housing Act, American cities remain divided along the very same lines that this landmark legislation explicitly outlawed. Keeping Races in Their Places tells the story of these lines-who drew them, why they drew them, where they drew them, and how they continue to circumscribe residents' opportunities to this very day. Weaving together sophisticated statistical analyses of more than a century's worth of data with an engaging, accessible narrative that brings the numbers to life, Keeping Races in Their Places exposes the entrenched effects of redlining on American communities. This one-of-a-kind contribution to the real estate and urban economics literature applies the author's original geographic information systems analyses to historical maps to reveal redlining's causal role in shaping today's cities. Spanning the era from the Great Migration to the Great Recession, Keeping Races in Their Places uncovers the roots of the Black-white wealth gap, the subprime lending crisis, and today's lack of affordable housing in maps created by banks nearly a century ago. Most of all, it offers hope that with the latest scholarly tools we can pinpoint how things went wrong-and what we must do to make them right.
The development of economics changed dramatically during the twentieth century with the emergence of econometrics, macroeconomics and a more scientific approach in general. One of the key individuals in the transformation of economics was Ragnar Frisch, professor at the University of Oslo and the first Nobel Laureate in economics in 1969. He was a co-founder of the Econometric Society in 1930 (after having coined the word econometrics in 1926) and edited the journal Econometrica for twenty-two years. The discovery of the manuscripts of a series of eight lectures given by Frisch at the Henri Poincaré Institute in March-April 1933 on The Problems and Methods of Econometrics will enable economists to more fully understand his overall vision of econometrics. This book is a rare exhibition of Frisch's overview of econometrics and is published here in English for the first time. Edited and with an introduction by Olav Bjerkholt and Ariane Dupont-Kieffer, Frisch's eight lectures provide an accessible and astute discussion of econometric issues, from philosophical foundations to practical procedures. Covering the development of economics in the twentieth century and the broader visions of economic science in general and econometrics in particular held by Ragnar Frisch, this book will appeal to anyone with an interest in the history of economics and econometrics.
This book is devoted to biased sampling problems (also called choice-based sampling in econometrics parlance) and over-identified parameter estimation problems. Biased sampling problems appear in many areas of research, including medicine, epidemiology and public health, the social sciences and economics. The book addresses a range of important topics, including case and control studies, causal inference, missing data problems, meta-analysis, renewal process and length-biased sampling problems, capture and recapture problems, case cohort studies, exponential tilting genetic mixture models, etc. The goal of this book is to make it easier for Ph.D. students and new researchers to get started in this research area. It will be of interest to all those who work in the health, biological, social and physical sciences, as well as those who are interested in survey methodology and other areas of statistical science.
Advances in Austrian Economics is a research annual whose editorial policy is to publish original research articles on Austrian economics. Each volume attempts to apply the insights of Austrian economics and related approaches to topics that are of current interest in economics and cognate disciplines. Volume 21 exemplifies this focus by bringing key research from the Austrian tradition of economics into dialogue with other research traditions in economics and related areas.
This book reflects the state of the art in nonlinear economic dynamics, financial market modelling and quantitative finance. It contains eighteen papers with topics ranging from disequilibrium macroeconomics, monetary dynamics, monopoly, financial market and limit order market models with boundedly rational heterogeneous agents to estimation, time series modelling and empirical analysis, and from risk management of interest-rate products, futures price volatility and American option pricing with stochastic volatility to the evaluation of risk and derivatives in electricity markets. The book illustrates some of the most recent research tools in these areas and will be of interest to economists working in economic dynamics and financial market modelling, to mathematicians who are interested in applying complexity theory to economics and finance, and to market practitioners and researchers in quantitative finance interested in limit order, futures and electricity market modelling, derivative pricing and risk management.
This book investigates why economics makes less visible progress over time than scientific fields with a strong practical component, where interactions with physical technologies play a key role. The thesis of the book is that the main impediment to progress in economics is "false feedback", which it defines as the false result of an empirical study, such as empirical evidence produced by a statistical model that violates some of its assumptions. In contrast to scientific fields that work with physical technologies, false feedback is hard to recognize in economics. Economists thus have difficulties knowing where they stand in their inquiries, and false feedback will regularly lead them in the wrong directions. The book searches for the reasons behind the emergence of false feedback. It thereby contributes to a wider discussion in the field of metascience about the practices of researchers when pursuing their daily business. The book thus offers a case study in metascience for the field of empirical economics. The main strength of the book is the numerous smaller insights it provides throughout. The book delves into deep discussions of various theoretical issues, which it illustrates with many applied examples and a wide array of references, especially to the philosophy of science. The book puts flesh on complicated and often abstract subjects, particularly when it comes to controversial topics such as p-hacking. The reader gains an understanding of the main challenges present in empirical economic research, as well as the possible solutions. The main audience of the book is applied researchers working with data and, in particular, those who have found certain aspects of their research practice problematic.
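As a hedged illustration of how false feedback can arise (an example of our own, not taken from the book), the simulation below shows how an unguided specification search over many candidate regressors routinely produces spurious "significant" findings. The sample size, the number of regressors, and the 5% threshold are arbitrary choices.

```python
# A minimal sketch (not from the book): why specification search ("p-hacking")
# produces false feedback. With 20 independent noise regressors and a 5% test,
# the chance of at least one spuriously "significant" result is large.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, n_sims, hits = 100, 20, 1000, 0

for _ in range(n_sims):
    y = rng.standard_normal(n)                  # outcome unrelated to anything
    X = rng.standard_normal((n, k))             # 20 pure-noise regressors
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(k)]
    hits += min(pvals) < 0.05                   # "found something" this time?

print(f"share of simulations with a 'significant' regressor: {hits / n_sims:.2f}")
# roughly 1 - 0.95**20, i.e. about 0.64 for independent tests
```

Nothing in the data is real, yet a researcher reporting only the best-looking specification would obtain publishable-looking evidence in most runs, which is exactly the kind of false feedback the book dissects.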
Much of our thinking is flawed because it is based on faulty intuition. By using the framework and tools of probability and statistics, we can overcome this to provide solutions to many real-world problems and paradoxes. We show how to do this, and find answers that are frequently very contrary to what we might expect. Along the way, we venture into diverse realms and thought experiments which challenge the way that we see the world. Features:
- An insightful and engaging discussion of some of the key ideas of probabilistic and statistical thinking
- Many classic and novel problems, paradoxes, and puzzles
- An exploration of some of the big questions involving the use of choice and reason in an uncertain world
- The application of probability, statistics, and Bayesian methods to a wide range of subjects, including economics, finance, law, and medicine
- Exercises, references, and links for those wishing to cross-reference or to probe further
- Solutions to exercises at the end of the book
This book should serve as an invaluable and fascinating resource for university, college, and high school students who wish to extend their reading, as well as for teachers and lecturers who want to liven up their courses while retaining academic rigour. It will also appeal to anyone who wishes to develop skills with numbers or has an interest in the many statistical and other paradoxes that permeate our lives. Indeed, anyone studying the sciences, social sciences, or humanities on a formal or informal basis will enjoy and benefit from this book.
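To give one flavour of the counterintuitive answers this kind of book has in mind (a standard textbook example, not lifted from the text, with arbitrarily chosen numbers), the sketch below works the base-rate fallacy with Bayes' rule: a 99%-accurate test for a condition affecting 1 in 1,000 people.

```python
# A minimal sketch (not from the book) of one classic counterintuitive result:
# a 99%-accurate test for a rare condition still yields mostly false positives.
prevalence = 0.001          # assumed base rate: 1 in 1,000
sensitivity = 0.99          # P(test positive | condition)
specificity = 0.99          # P(test negative | no condition)

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_cond_given_pos = sensitivity * prevalence / p_pos
print(f"P(condition | positive test) = {p_cond_given_pos:.3f}")   # about 0.090
```

Despite the test's accuracy, a positive result implies only about a 9% chance of actually having the condition, because false positives from the large healthy population swamp the true positives.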
The book provides an integrated approach to risk sharing, risk spreading and efficient regulation through principal agent models. It emphasizes the role of information asymmetry and risk sharing in contracts as an alternative to transaction cost considerations. It examines how contracting, as an institutional mechanism to conduct transactions, spreads risks while attempting consolidation. It further highlights the shifting emphasis in contracts from Coasian transaction cost saving to risk sharing and shows how it creates difficulties associated with risk spreading, and emphasizes the need for efficient regulation of contracts at various levels. Each of the chapters is structured using a principal agent model, and all chapters incorporate adverse selection (and exogenous randomness) as a result of information asymmetry, as well as moral hazard (and endogenous randomness) due to the self-interest-seeking behavior on the part of the participants.
A comprehensive account of economic size distributions around the world and throughout the years. In the course of the past 100 years, economists and applied statisticians have developed a remarkably diverse variety of income distribution models, yet no single resource convincingly accounts for all of these models, analyzing their strengths and weaknesses, similarities and differences. Statistical Size Distributions in Economics and Actuarial Sciences is the first collection to systematically investigate a wide variety of parametric models that deal with income, wealth, and related notions. Christian Kleiber and Samuel Kotz survey, complement, compare, and unify all of the disparate models of income distribution, highlighting at times a lack of coordination between them that can result in unnecessary duplication. Considering models from eight languages and all continents, the authors discuss the social and economic implications of each as well as distributions of size of loss in actuarial applications. Specific models covered include:
Three appendices provide brief biographies of some of the leading players along with the basic properties of each of the distributions. Actuaries, economists, market researchers, social scientists, and physicists interested in econophysics will find Statistical Size Distributions in Economics and Actuarial Sciences to be a truly one-of-a-kind addition to the professional literature.
Time Series: A First Course with Bootstrap Starter provides an introductory course on time series analysis that satisfies the triptych of (i) mathematical completeness, (ii) computational illustration and implementation, and (iii) conciseness and accessibility to upper-level undergraduate and M.S. students. Basic theoretical results are presented in a mathematically convincing way, and the methods of data analysis are developed through examples and exercises worked in R. A student with a basic course in mathematical statistics will learn both how to analyze time series and how to interpret the results. The book provides the foundation of time series methods, including linear filters and a geometric approach to prediction. The important paradigm of ARMA models is studied in depth, as well as frequency domain methods. Entropy and other information-theoretic notions are introduced, with applications to time series modeling. The second half of the book focuses on statistical inference and the fitting of time series models, as well as computational facets of forecasting. Many time series of interest are nonlinear, in which case classical inference methods can fail, but bootstrap methods may come to the rescue. Distinctive features of the book are the emphasis on geometric notions and the frequency domain, the discussion of entropy maximization, and a thorough treatment of recent computer-intensive methods for time series such as subsampling and the bootstrap. There are more than 600 exercises, half of which involve R coding and/or data analysis. Supplements include a website with 12 key data sets and all R code for the book's examples, as well as the solutions to exercises.
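The book's own examples are in R; purely as a hedged illustration of why dependence-aware resampling matters, here is a Python sketch of a moving-block bootstrap for the mean of an AR(1) series. The AR coefficient, block length, and replication count are arbitrary choices, not values from the book.

```python
# A minimal sketch (not the book's R code): a moving-block bootstrap for the
# mean of a dependent series, where a naive i.i.d. bootstrap would understate
# the sampling variability.
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series x_t = 0.7 * x_{t-1} + e_t.
n, phi = 500, 0.7
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

def block_bootstrap_mean(series, block_len=20, n_boot=2000, rng=rng):
    """Resample overlapping blocks and return the bootstrap distribution of the mean."""
    n = len(series)
    blocks = np.array([series[i:i + block_len] for i in range(n - block_len + 1)])
    n_blocks = int(np.ceil(n / block_len))
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=n_blocks)
        resampled = np.concatenate(blocks[idx])[:n]   # stitch blocks, trim to n
        means[b] = resampled.mean()
    return means

boot_means = block_bootstrap_mean(x)
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% block-bootstrap interval for the mean: [{lo:.3f}, {hi:.3f}]")
```

Resampling whole blocks preserves the short-range dependence within each block, which is why the resulting interval is wider, and more honest, than one built by shuffling individual observations.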
Introduction to Statistical Decision Theory: Utility Theory and Causal Analysis provides the theoretical background to approach decision theory from a statistical perspective. It covers both traditional approaches, in terms of value theory and expected utility theory, and recent developments, in terms of causal inference. The book is specifically designed to appeal to students and researchers who intend to acquire a knowledge of statistical science based on decision theory. Features:
- Covers approaches for making decisions under certainty, risk, and uncertainty
- Illustrates expected utility theory and its extensions
- Describes approaches to elicit the utility function
- Reviews classical and Bayesian approaches to statistical inference based on decision theory
- Discusses the role of causal analysis in statistical decision theory
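As a small illustration of the expected-utility machinery such a book formalizes (a sketch under assumed numbers, not an example from the text), the snippet below compares a safe payoff with a risky lottery under logarithmic utility.

```python
# A minimal sketch (not from the book): choosing between two lotteries by
# maximizing expected utility under an assumed logarithmic utility function.
import math

lotteries = {
    "safe":  [(1.0, 100.0)],                       # (probability, payoff)
    "risky": [(0.5, 250.0), (0.5, 10.0)],
}

def expected_utility(lottery, u=math.log):
    return sum(p * u(x) for p, x in lottery)

best = max(lotteries, key=lambda name: expected_utility(lotteries[name]))
for name, lottery in lotteries.items():
    print(name, round(expected_utility(lottery), 3))
print("choice under log utility:", best)
```

Although the risky lottery has the higher expected payoff (130 versus 100), the concave log utility, which encodes risk aversion, prefers the safe option.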
1. The book is applicable to a wide range of quantitative methods courses across the social and behavioral sciences.
2. The book is based solely on Stata for EFA, one of the top statistics software packages used in the behavioral and social sciences.
3. Clear step-by-step guidance combined with screenshots shows how to apply EFA to real data.
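The book works entirely in Stata; the following Python sketch (using scikit-learn on synthetic data with an arbitrary two-factor structure) is only meant to illustrate what exploratory factor analysis recovers, not to reproduce the book's workflow.

```python
# A minimal sketch (not the book's Stata workflow): exploratory factor analysis
# on synthetic data, recovering a small number of latent factors from many items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, k = 300, 2                                   # observations, latent factors
loadings = rng.normal(size=(6, k))              # 6 observed items
factors = rng.normal(size=(n, k))
items = factors @ loadings.T + 0.3 * rng.normal(size=(n, 6))   # items = signal + noise

fa = FactorAnalysis(n_components=k, rotation="varimax")
fa.fit(items)
print(np.round(fa.components_, 2))              # estimated (rotated) loadings
```

The estimated loading matrix should resemble the generating loadings up to rotation and sign, which is the basic pattern-recovery task EFA is asked to perform on real questionnaire data.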
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance, as such choices allow experimenters to extract maximum information about the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc., and to discuss the nature and availability of optimal covariate designs. In some situations, optimal estimation of both the ANOVA and the regression parameters is provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for the construction of optimum designs using Hadamard matrices, the Kronecker product, the Rao-Khatri product, and mixed orthogonal arrays, to name a few.
From the Foreword: "Big Data Management and Processing is [a] state-of-the-art book that deals with a wide range of topical themes in the field of Big Data. The book, which probes many issues related to this exciting and rapidly growing field, covers processing, management, analytics, and applications... [It] is a very valuable addition to the literature. It will serve as a source of up-to-date research in this continuously developing area. The book also provides an opportunity for researchers to explore the use of advanced computing technologies and their impact on enhancing our capabilities to conduct more sophisticated studies." -Sartaj Sahni, University of Florida, USA. "Big Data Management and Processing covers the latest Big Data research results in processing, analytics, management and applications. Both fundamental insights and representative applications are provided. This book is a timely and valuable resource for students, researchers and seasoned practitioners in Big Data fields." -Hai Jin, Huazhong University of Science and Technology, China. Big Data Management and Processing explores a range of big data related issues and their impact on the design of new computing systems. The twenty-one chapters were carefully selected and feature contributions from several outstanding researchers. The book endeavors to strike a balance between theoretical and practical coverage of innovative problem solving techniques for a range of platforms. It serves as a repository of paradigms, technologies, and applications that target different facets of big data computing systems. The first part of the book explores energy and resource management issues, as well as legal compliance and quality management for Big Data. It covers in-memory computing and in-memory data grids, as well as co-scheduling for high performance computing applications. The second part of the book includes comprehensive coverage of Hadoop and Spark, along with security, privacy, and trust challenges and solutions. The latter part of the book covers mining and clustering in Big Data, and includes applications in genomics, hospital big data processing, and vehicular cloud computing. The book also analyzes funding for Big Data projects.
Risk Measures and Insurance Solvency Benchmarks: Fixed-Probability Levels in Renewal Risk Models is written for academics and practitioners who are concerned about potential weaknesses of the Solvency II regulatory system. It is also intended for readers who are interested in pure and applied probability, have a taste for classical and asymptotic analysis, and are motivated to delve into rather intensive calculations. The formal prerequisite for this book is a good background in analysis. The desired prerequisite is some degree of probability training, but someone with knowledge of classical real-variable theory, including asymptotic methods, will also find this book interesting. For those who find the proofs too complicated, it may be reassuring that most results in this book are formulated in rather elementary terms. The book can also be used as reading material for basic courses in risk measures, insurance mathematics, and applied probability. The material of this book was partly used by the author for his courses at several universities in Moscow, at Copenhagen University, and at the University of Montreal. Features:
- Requires only minimal mathematical prerequisites in analysis and probability
- Suitable for researchers and postgraduate students in related fields
- Could be used as a supplement to courses in risk measures, insurance mathematics and applied probability
This monograph provides a unified and comprehensive treatment of an order-theoretic fixed point theory in partially ordered sets and its various useful interactions with topological structures. The material progresses systematically, presenting the preliminaries before moving to more advanced topics. In the treatment of the applications, a wide range of mathematical theories and methods from nonlinear analysis and integration theory are applied; an outline of these is given in an appendix chapter to make the book self-contained. Graduate students and researchers in nonlinear analysis, pure and applied mathematics, game theory and mathematical economics will find this book useful.
This book addresses both theoretical developments in and practical applications of econometric techniques to finance-related problems. It includes selected, edited outcomes of the International Econometric Conference of Vietnam (ECONVN2018), held at Banking University, Ho Chi Minh City, Vietnam, on January 15-16, 2018. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. An extremely important part of economics is finance: a financial crisis can bring the whole economy to a standstill and, vice versa, a smart financial policy can dramatically boost economic development. It is therefore crucial to be able to apply the mathematical techniques of econometrics to financial problems. Such applications are a growing field, with many interesting results and an even larger number of challenges and open problems.
This volume comprises the classic articles on methods of identification and estimation of simultaneous equations econometric models. It includes path-breaking contributions by Trygve Haavelmo and Tjalling Koopmans, who founded the subject and received Nobel prizes for their work. It presents original articles that developed and analysed the leading methods for estimating the parameters of simultaneous equations systems: instrumental variables, indirect least squares, generalized least squares, two-stage and three-stage least squares, and maximum likelihood. Many of the articles are not readily accessible to readers in any other form.