Books > Business & Economics > Economics > Econometrics
Portfolio theory and much of asset pricing, as well as many empirical applications, depend on the use of multivariate probability distributions to describe asset returns. Traditionally, this has meant the multivariate normal (or Gaussian) distribution. More recently, theoretical and empirical work in financial economics has employed the multivariate Student (and other) distributions, which are members of the elliptically symmetric class. There is also a growing body of work based on skew-elliptical distributions. These probability models all exhibit the property that the marginal distributions differ only by location and scale parameters, or are restrictive in other respects. Very often, such models are contradicted by the empirical evidence that the marginal distributions of asset returns can differ markedly. Copula theory is a branch of statistics that provides powerful methods to overcome these shortcomings. This book provides a synthesis of the latest research in the area of copulae as applied to finance and related subjects such as insurance. Multivariate non-Gaussian dependence is a fact of life for many problems in financial econometrics. The book describes the state of the art in the tools required to deal with these observed features of financial data. This book was originally published as a special issue of the European Journal of Finance.
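To illustrate the idea in the blurb above, that a copula lets the dependence structure be specified separately from markedly different marginals, here is a minimal Python sketch using a Gaussian copula. It is not taken from the book; the choice of marginals and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Minimal illustration (not from the book): a Gaussian copula joins two
# deliberately different marginals -- one heavy-tailed, one skewed -- so the
# dependence structure is specified separately from the margins.
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
rng = np.random.default_rng(42)

# 1. Draw correlated standard normals (the copula layer).
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)

# 2. Map each column to uniforms through the standard normal CDF.
u = stats.norm.cdf(z)

# 3. Apply different inverse marginal CDFs to each column.
ret1 = stats.t.ppf(u[:, 0], df=4)          # heavy-tailed marginal
ret2 = stats.lognorm.ppf(u[:, 1], s=0.5)   # skewed marginal

# Rank correlation is inherited from the copula, not from the marginals.
rho_s, _ = stats.spearmanr(ret1, ret2)
print(round(rho_s, 3))
```

Because the uniforms in step 2 carry only the dependence, the two marginals can be swapped for any other distributions without changing the copula itself.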
This book discusses the market microstructure environment within the context of the global financial crisis. In the first part, market microstructure theory is recalled and the main microstructure models and hypotheses are discussed. The second part focuses on the main effects of the financial downturn through an examination of market microstructure dynamics. In particular, the effects of market imperfections and the limitations associated with microstructure models are discussed. Finally, the book covers the new regulations and recent developments for financial markets that aim to improve the market microstructure. Well-known experts on the subject contribute to the chapters in the book. A must-read for academic researchers, students and quantitative practitioners.
Business students need the ability to think statistically about how to deal with uncertainty and its effect on decision-making in business and management. Traditional statistics courses and textbooks tend to focus on probability, mathematical detail, and heavy computation, and thus fail to meet the needs of future managers. Statistical Thinking in Business, Second Edition responds to the growing recognition that we must change the way business statistics is taught. It shows how statistics is important in all aspects of business and equips students with the skills they need to make sensible use of data and other information. The authors take an interactive, scenario-based approach and use almost no mathematical formulas, opting to use Excel for the technical work. This allows them to focus on using statistics to aid decision-making rather than on how to perform routine calculations. New in the second edition: a completely revised chapter on forecasting; rearrangement of the material on data presentation, with the inclusion of histograms and cumulative line plots; a more thorough discussion of the analysis of attribute data; coverage of variable selection and model building in multiple regression; end-of-chapter summaries; more end-of-chapter problems; and a variety of case studies throughout the book. The second edition also comes with a wealth of ancillary materials provided on downloadable resources packaged with the book. These include automatically marked multiple-choice questions, answers to questions in the text, data sets, Excel experiments and demonstrations, an introduction to Excel, and the StiBstat Add-In for stem and leaf plots, box plots, distribution plots, control charts and summary statistics.
This book studies information spillover among financial markets and explores the intraday effect and ACD models with high frequency data. It also contributes theoretically by providing a new statistical methodology with comparative advantages for analyzing co-movements between two time series, applying the method to test information spillover between the Chinese stock market and the international market, and between the futures market and the spot market. Using high frequency data, the book investigates the intraday effect and examines which type of ACD model is particularly suited to capturing financial duration dynamics. The book will be invaluable to scholars and graduate students interested in co-movements among different financial markets and in financial market microstructure, and to investors and regulators looking to improve their risk management.
Reissuing works originally published between 1929 and 1991, this collection of 17 volumes presents a variety of perspectives on econometrics, from introductions to research on particular industries. With some volumes on models for macroeconomics and international economies, this is a wide-ranging set of economic texts. Some volumes look at input-output methods and databases, while others cover Bayesian techniques and linear and non-linear models. This set will be of use to those in industry and business studies, geography and sociology, as well as politics and economics.
This book undertakes a theoretical and econometric analysis of intense economic growth in selected European countries at the end of the twentieth century and the beginning of the twenty-first. Focusing on the accelerated economic growth that occurred in Ireland, the Netherlands, Spain, and Turkey, this book investigates the determinants and consequences of this "miracle" growth and discusses them in the context of growth and development processes observed in European market-type economies after World War II. Using imperfect knowledge economics (IKE) as a theoretical framework to interpret the empirical results, this book provides a fresh theoretical perspective in comparison with current Neo-classical, Keynesian and institutional paradigms. With this systematic approach, the authors seek to provide a unified methodology for evaluating the phenomenon of intense economic growth that has heretofore been missing from the discipline. Combining diverse theoretical and methodological strategies to provide a holistic understanding of the historical process of economic change, this volume will be of interest to students and scholars of economic growth, econometrics, political economy, and the new institutional economics, as well as policymakers.
This study analyses the newly available statistical evidence on income distribution in the former Soviet Union both by social group and by republic, and considers the significance of inequalities as a factor contributing to the demise of the Communist regime. Among the topics covered are wage distribution (interbranch and skill differentials and distribution in terms of gender, education, and age), income distribution for the former USSR as a whole, and wage and income distribution patterns for each republic, with analysis of regional differences.
This book provides a coherent description of the main concepts and statistical methods used to analyse economic performance. The focus is on measures of performance that are of practical relevance to policy makers. Most, if not all, of these measures can be viewed as measures of productivity and/or efficiency. Linking fields as diverse as index number theory, data envelopment analysis and stochastic frontier analysis, the book explains how to compute measures of input and output quantity change that are consistent with measurement theory. It then discusses ways in which meaningful measures of productivity change can be decomposed into measures of technical progress, environmental change, and different types of efficiency change. The book is aimed at graduate students, researchers, statisticians, accountants and economists working in universities, regulatory authorities, government departments and private firms. The book contains many numerical examples. Computer codes and datasets are available on a companion website.
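As a concrete example of the kind of index-number computation the blurb above refers to, the following sketch computes a Törnqvist output quantity index between two periods. The choice of index and the figures are illustrative assumptions, not taken from the book.

```python
import numpy as np

# Illustration (not from the book): a Tornqvist output quantity index, a
# standard index-number-theory measure of output quantity change between a
# base period 0 and a comparison period 1.  All figures are made up.
p0, q0 = np.array([2.0, 5.0]), np.array([100.0, 40.0])   # base-period prices, quantities
p1, q1 = np.array([2.2, 5.5]), np.array([110.0, 38.0])   # comparison-period prices, quantities

s0 = p0 * q0 / (p0 * q0).sum()        # base-period revenue shares
s1 = p1 * q1 / (p1 * q1).sum()        # comparison-period revenue shares

# Log of the index: share-weighted average of log quantity relatives.
log_index = (0.5 * (s0 + s1) * np.log(q1 / q0)).sum()
print(round(float(np.exp(log_index)), 4))   # > 1 means output quantities grew on balance
```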
Originally published in 1984. This book addresses the economics of the changing mineral industry, which is highly affected by energy economics. The study estimates, in quantitative terms, the short- to mid-term consequences of rising energy prices alongside falling ore quality for the copper and aluminum industries. The effects of changing cost factors on substitution between metals are assessed, as is the potential for relying on increased recycling. Copper and aluminum industry problems should be representative of those faced by the mineral processing sector as a whole. Two complex econometric models presented here produce forecasts for the industries, and the book discusses and reviews other econometric commodity models.
Originally published in 1979. This study focuses primarily on the development of a structural model for the U.S. Government securities market, i.e. the specification and estimation of the demands for disaggregated maturity classes of U.S. Government securities by the individual investor groups participating in the market. A particularly important issue addressed involves the extent of the substitution relationship among different maturity classes of U.S. Government securities.
Originally published in 1974. This book provides a rigorous and detailed introductory treatment of the theory of difference equations and their applications in the construction and analysis of dynamic economic models. It explains the theory of linear difference equations and then analyses various types of dynamic economic models. With plenty of examples of application throughout the text, it will be of use to those working in macroeconomics and econometrics.
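As a hedged illustration of the subject matter (the equation and numbers are generic, not drawn from the book), a first-order linear difference equation and its steady state can be iterated in a few lines:

```python
# Iterating the first-order linear difference equation y_{t+1} = a*y_t + b.
# For |a| < 1 the path converges to the steady state y* = b / (1 - a), the
# kind of stability result an introductory treatment establishes.
a, b = 0.5, 10.0
y = 2.0                      # arbitrary initial condition
steady_state = b / (1 - a)   # closed-form fixed point

for t in range(20):
    y = a * y + b

print(round(y, 6), steady_state)  # y approaches 20.0
```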
Originally published in 1991. The dilemma of solid and hazardous waste disposal in an environmentally safe manner has become a global problem. This book presents a modern approach to economic and operations research modelling in urban and regional waste management with an international perspective. Location and space economics are discussed along with transportation, technology, health hazards, capacity levels, political realities and the linkage with general global economic systems. The algorithms and models developed are then applied to two major world cities as case studies illustrating the use of these systems.
Originally published in 1984. This book brings together a reasonably complete set of results regarding the use of Constraint Item estimation procedures under the assumption of accurate specification. The analysis covers the case of all explanatory variables being non-stochastic as well as the case of identified simultaneous equations, with error terms known and unknown. Particular emphasis is given to the derivation of criteria for choosing the Constraint Item. Part 1 looks at the best CI estimators and Part 2 examines equation by equation estimation, considering forecasting accuracy.
Originally published in 1987. This collection of original papers deals with various issues of specification in the context of the linear statistical model. The volume honours the early econometric work of Donald Cochrane, late Dean of Economics and Politics at Monash University in Australia. The chapters focus on problems associated with autocorrelation of the error term in the linear regression model and include appraisals of early work on this topic by Cochrane and Orcutt. The book includes an extensive survey of autocorrelation tests; some exact finite-sample tests; and some issues in preliminary test estimation. A wide range of other specification issues is discussed, including the implications of random regressors for Bayesian prediction; modelling with joint conditional probability functions; and results from duality theory. There is a major survey chapter dealing with specification tests for non-nested models, and some of the applications discussed by the contributors deal with the British National Accounts and with Australian financial and housing markets.
The global financial crisis saw many Eurozone countries bearing excessive public debt. This led the government bond yields of some peripheral countries to rise sharply, resulting in the outbreak of the European sovereign debt crisis. The debt crisis is characterized by its immediate spread from Greece, the country of origin, to its neighbouring countries, and by the connection between the Eurozone banking sector and public sector debt. Addressing these features, this book sheds light on the impacts of the crisis on various financial markets in Europe. This book is among the first to conduct a thorough empirical analysis of the European sovereign debt crisis. It analyses, using advanced econometric methodologies, why the crisis escalated so prominently, with significant impacts on a wide range of financial markets rather than being limited to government bond markets. The book also allows readers to understand the consequences and the overall impact of such a debt crisis, enabling investors and policymakers to formulate diversification strategies and create suitable regulatory frameworks.
First published in 1992, The Efficiency of New Issue Markets provides a theoretical discussion of the adverse selection model of the new issue market. It addresses the hypothesis that the method of distribution of new issues has an important bearing on the efficiency of these markets. In doing so, the book tests the efficiency of the Offer for Sale new issue market, demonstrating the validity of the adverse selection model and contradicting the monopsony power hypothesis. It then examines the relative efficiency of the new issue markets, showing in turn the importance of distribution in determining that efficiency. The book also provides a comprehensive overview of under-pricing and, through this, assesses the efficiency of new issue markets.
This second edition of Design of Observational Studies is both an introduction to statistical inference in observational studies and a detailed discussion of the principles that guide the design of observational studies. An observational study is an empirical investigation of effects caused by treatments when randomized experimentation is unethical or infeasible. Observational studies are common in most fields that study the effects of treatments on people, including medicine, economics, epidemiology, education, psychology, political science and sociology. The quality and strength of evidence provided by an observational study is determined largely by its design. Design of Observational Studies is organized into five parts. Chapters 2, 3, and 5 of Part I cover concisely many of the ideas discussed in Rosenbaum's Observational Studies (also published by Springer) but in a less technical fashion. Part II discusses the practical aspects of using propensity scores and other tools to create a matched comparison that balances many covariates, and includes an updated chapter on matching in R. In Part III, the concept of design sensitivity is used to appraise the relative ability of competing designs to distinguish treatment effects from biases due to unmeasured covariates. Part IV is new to this edition; it discusses evidence factors and the computerized construction of more than one comparison group. Part V discusses planning the analysis of an observational study, with particular reference to Sir Ronald Fisher's striking advice for observational studies: "make your theories elaborate." This new edition features updated exploration of causal inference, with four new chapters, a new R package DOS2 designed as a companion for the book, and discussion of several of the latest matching packages for R. In particular, DOS2 allows readers to reproduce many analyses from Design of Observational Studies.
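The blurb above mentions propensity scores and matched comparisons. The following is a generic Python sketch of that idea (logistic-regression propensity scores with greedy nearest-neighbour matching), not the book's R workflow or its DOS2 package; the data are simulated and the variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic propensity-score matching sketch: balance observed covariates
# between treated and control units before comparing outcomes.  All data
# below are simulated for illustration only.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                                   # observed covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([0.8, -0.5, 0.3]))))

# Estimate propensity scores P(treated = 1 | X) with logistic regression.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbour matching on the propensity score.
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
available = set(control_idx)
pairs = []
for i in treated_idx:
    j = min(available, key=lambda k: abs(ps[k] - ps[i]))      # closest unused control
    pairs.append((i, j))
    available.remove(j)
    if not available:
        break

print(f"matched {len(pairs)} treated units to controls")
```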
Since most datasets contain a number of variables, multivariate methods are helpful in answering a variety of research questions. Accessible to students and researchers without a substantial background in statistics or mathematics, Essentials of Multivariate Data Analysis explains the usefulness of multivariate methods in applied research. Unlike most books on multivariate methods, this one makes straightforward analyses easy to perform for those who are unfamiliar with advanced mathematical formulae. An easily understood dataset is used throughout to illustrate the techniques. The accompanying add-in for Microsoft Excel can be used to carry out the analyses in the text. The dataset and Excel add-in are available for download on the book's CRC Press web page. Providing a firm foundation in the most commonly used multivariate techniques, this text helps readers choose the appropriate method, learn how to apply it, and understand how to interpret the results. It prepares them for more complex analyses using software such as Minitab, R, SAS, SPSS, and Stata.
This short book introduces the main ideas of statistical inference in a way that is both user friendly and mathematically sound. Particular emphasis is placed on the common foundation of many models used in practice. In addition, the book focuses on the formulation of appropriate statistical models to study problems in business, economics, and the social sciences, as well as on how to interpret the results from statistical analyses. The book will be useful to students who are interested in rigorous applications of statistics to problems in business, economics and the social sciences, as well as students who have studied statistics in the past, but need a more solid grounding in statistical techniques to further their careers. Jacco Thijssen is professor of finance at the University of York, UK. He holds a PhD in mathematical economics from Tilburg University, Netherlands. His main research interests are in applications of optimal stopping theory, stochastic calculus, and game theory to problems in economics and finance. Professor Thijssen has earned several awards for his statistics teaching.
Updated to textbook form by popular demand, this second edition discusses diverse mathematical models used in economics, ecology, and the environmental sciences with emphasis on control and optimization. It is intended for graduate and upper-undergraduate course use; however, applied mathematicians, industry practitioners, and many interdisciplinary academics will find the presentation highly useful. Core topics of this text are: economic growth and technological development; population dynamics and human impact on the environment; resource extraction and scarcity; air and water contamination; rational management of the economy and environment; and climate change and global dynamics. The step-by-step approach taken is problem-based and easy to follow. The authors aptly demonstrate that the same models may be used to describe different economic and environmental processes and that similar investigation techniques are applicable to analyze various models. Instructors will appreciate the substantial flexibility that this text allows while designing their own syllabus. Chapters are essentially self-contained and may be covered in full, in part, and in any order. Appropriate one- and two-semester courses include, but are not limited to, Applied Mathematical Modeling, Mathematical Methods in Economics and Environment, Models of Biological Systems, Applied Optimization Models, and Environmental Models. Prerequisites for the courses are Calculus and, preferably, Differential Equations.
'Refreshingly clear and engaging' (Tim Harford). 'Delightful... full of unique insights' (Prof Sir David Spiegelhalter). There's no getting away from statistics. We encounter them every day. We are all users of statistics whether we like it or not. Do missed appointments really cost the NHS £1bn per year? What's the difference between the mean gender pay gap and the median gender pay gap? How can we work out if a claim that we use 42 billion single-use plastic straws per year in the UK is accurate? What did the Vote Leave campaign's £350m bus really mean? How can we tell if the headline 'Public pensions cost you £4,000 a year' is correct? Does snow really cost the UK economy £1bn per day? But how do we distinguish statistical fact from fiction? What can we do to decide whether a number, claim or news story is accurate? Without an understanding of data, we cannot truly understand what is going on in the world around us. Written by Anthony Reuben, the BBC's first head of statistics, Statistical is an accessible and empowering guide to challenging the numbers all around us.
Written in a highly accessible style, A Factor Model Approach to Derivative Pricing lays a clear and structured foundation for the pricing of derivative securities based upon simple factor model related absence of arbitrage ideas. This unique and unifying approach provides for a broad treatment of topics and models, including equity, interest-rate, and credit derivatives, as well as hedging and tree-based computational methods, but without reliance on the heavy prerequisites that often accompany such topics. Key features: a single fundamental absence-of-arbitrage relationship based on factor models is used to motivate all the results in the book; a structured three-step procedure is used to guide the derivation of absence-of-arbitrage equations and illuminate core underlying concepts; Brownian motion and Poisson process driven models are treated together, allowing for a broad and cohesive presentation of topics; and the final chapter provides a new approach to risk-neutral pricing that introduces the topic as a seamless and natural extension of the factor model approach. Whether being used as a text for an intermediate-level course in derivatives, or by researchers and practitioners who are seeking a better understanding of the fundamental ideas that underlie derivative pricing, readers will appreciate the book's ability to unify many disparate topics and models under a single conceptual theme. James A. Primbs is an Associate Professor of Finance at the Mihaylo College of Business and Economics at California State University, Fullerton.
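As one concrete instance of the tree-based computational methods mentioned above, here is a standard Cox-Ross-Rubinstein binomial tree for a European call. This is a generic textbook sketch, not the book's three-step factor-model procedure; the helper name crr_call and the parameter values are illustrative.

```python
import math

# Standard Cox-Ross-Rubinstein binomial tree for a European call option.
def crr_call(S0, K, r, sigma, T, steps):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1 / u                             # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Terminal payoffs, then backward induction under the risk-neutral measure.
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for _ in range(steps):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# Illustrative parameters; with many steps the price approaches the
# Black-Scholes value (about 10.45 here).
print(round(crr_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200), 4))
```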
Customer and Business Analytics: Applied Data Mining for Business Decision Making Using R explains and demonstrates, via the accompanying open-source software, how advanced analytical tools can address various business problems. It also gives insight into some of the challenges faced when deploying these tools. Extensively classroom-tested, the text is ideal for students in customer and business analytics or applied data mining as well as professionals in small- to medium-sized organizations. The book offers an intuitive understanding of how different analytics algorithms work. Where necessary, the authors explain the underlying mathematics in an accessible manner. Each technique presented includes a detailed tutorial that enables hands-on experience with real data. The authors also discuss issues often encountered in applied data mining projects and present the CRISP-DM process model as a practical framework for organizing these projects. Showing how data mining can improve the performance of organizations, this book and its R-based software provide the skills and tools needed to successfully develop advanced analytics capabilities.
This book is about learning from data using the Generalized Additive Models for Location, Scale and Shape (GAMLSS). GAMLSS extends the Generalized Linear Models (GLMs) and Generalized Additive Models (GAMs) to accommodate large complex datasets, which are increasingly prevalent. In particular, the GAMLSS statistical framework enables flexible regression and smoothing models to be fitted to the data. The GAMLSS model assumes that the response variable has any parametric (continuous, discrete or mixed) distribution which might be heavy- or light-tailed, and positively or negatively skewed. In addition, all the parameters of the distribution (location, scale, shape) can be modelled as linear or smooth functions of explanatory variables. Key Features: Provides a broad overview of flexible regression and smoothing techniques to learn from data whilst also focusing on the practical application of methodology using GAMLSS software in R. Includes a comprehensive collection of real data examples, which reflect the range of problems addressed by GAMLSS models and provide a practical illustration of the process of using flexible GAMLSS models for statistical learning. R code integrated into the text for ease of understanding and replication. Supplemented by a website with code, data and extra materials. This book aims to help readers understand how to learn from data encountered in many fields. It will be useful for practitioners and researchers who wish to understand and use the GAMLSS models to learn from data and also for students who wish to learn GAMLSS through practical examples.
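In symbols, the model structure described above can be sketched as follows; the notation is generic and assumed rather than quoted from the book.

```latex
% Generic sketch of the GAMLSS form: the response follows some parametric
% distribution D, and each distribution parameter gets its own regression,
% possibly with smooth terms s_{jk} of the explanatory variables.
y_i \sim \mathcal{D}(\mu_i, \sigma_i, \nu_i, \tau_i), \qquad i = 1, \dots, n,
\\[4pt]
g_k(\theta_k) = X_k \beta_k + \sum_{j} s_{jk}(x_{jk}),
\qquad \theta_k \in \{\mu, \sigma, \nu, \tau\}.
```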