Books > Business & Economics > Economics > Econometrics
This book provides the reader with user-friendly applications of the normal distribution. In several variables it is called the multinormal (multivariate normal) distribution, which is often handled using matrices for convenience. The author seeks to make the arguments less abstract and hence starts with the univariate case and moves progressively toward the vector and matrix cases. The approach used in the book is a gradual one, going from one scalar variable to a vector variable and then to a matrix variable. The author presents the unified aspect of the normal distribution and also addresses several other issues, including random matrix theory in physics. Other well-known applications are discussed, such as Herrnstein and Murray's argument that human intelligence is substantially influenced by both inherited and environmental factors. They contend that intelligence is a better predictor of many personal outcomes - including financial income, job performance, birth out of wedlock, and involvement in crime - than an individual's parental socioeconomic status or education level, an argument that deserves to be mentioned and discussed.
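A minimal sketch of the progression the blurb describes, from the univariate normal to the vector (multivariate) case handled with matrices; the mean vector and covariance matrix below are illustrative assumptions, not values from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Univariate case: N(mu, sigma^2)
x = rng.normal(loc=1.0, scale=2.0, size=1000)

# Vector case: N(mu, Sigma), handled via matrices
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])          # symmetric positive semi-definite
X = rng.multivariate_normal(mu, Sigma, size=1000)

print(x.mean(), x.std())                # close to 1.0 and 2.0
print(X.mean(axis=0), np.cov(X.T))      # close to mu and Sigma
```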
This book investigates why economics makes less visible progress over time than scientific fields with a strong practical component, where interactions with physical technologies play a key role. The thesis of the book is that the main impediment to progress in economics is "false feedback", which it defines as the false result of an empirical study, such as empirical evidence produced by a statistical model that violates some of its assumptions. In contrast to scientific fields that work with physical technologies, false feedback is hard to recognize in economics. Economists thus have difficulty knowing where they stand in their inquiries, and false feedback regularly leads them in the wrong direction. The book searches for the reasons behind the emergence of false feedback. It thereby contributes to a wider discussion in the field of metascience about the practices of researchers when pursuing their daily business, and offers a case study of metascience for the field of empirical economics. The main strength of the book is the numerous smaller insights it provides throughout. The book delves into deep discussions of various theoretical issues, which it illustrates with many applied examples and a wide array of references, especially to the philosophy of science. It puts flesh on complicated and often abstract subjects, particularly controversial topics such as p-hacking. The reader gains an understanding of the main challenges present in empirical economic research, as well as of possible solutions. The main audience of the book is applied researchers working with data and, in particular, those who have found certain aspects of their research practice problematic.
Risk Measures and Insurance Solvency Benchmarks: Fixed-Probability Levels in Renewal Risk Models is written for academics and practitioners who are concerned about potential weaknesses of the Solvency II regulatory system. It is also intended for readers who are interested in pure and applied probability, have a taste for classical and asymptotic analysis, and are motivated to delve into rather intensive calculations. The formal prerequisite for this book is a good background in analysis. The desired prerequisite is some degree of probability training, but someone with knowledge of classical real-variable theory, including asymptotic methods, will also find this book interesting. For those who find the proofs too complicated, it may be reassuring that most results in this book are formulated in rather elementary terms. This book can also be used as reading material for basic courses in risk measures, insurance mathematics, and applied probability. The material of this book was partly used by the author for his courses at several universities in Moscow, at Copenhagen University, and at the University of Montreal. Features: requires only minimal mathematical prerequisites in analysis and probability; suitable for researchers and postgraduate students in related fields; could be used as a supplement to courses in risk measures, insurance mathematics and applied probability.
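A minimal Monte Carlo sketch of finite-horizon ruin probability in the compound Poisson (Cramer-Lundberg) special case of a renewal risk model, the kind of object the book's fixed-probability levels refer to; it is not code from the book, and every parameter value is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def ruin_probability(u, c, lam, claim_mean, horizon, n_paths=5000):
    """Estimate P(ruin before `horizon`) for initial capital u, premium rate c,
    Poisson claim intensity lam and exponential claims with mean claim_mean."""
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)        # inter-arrival time
            if t > horizon:
                break
            claims += rng.exponential(claim_mean)  # claim size
            if u + c * t - claims < 0:             # surplus falls below zero
                ruined += 1
                break
    return ruined / n_paths

print(ruin_probability(u=10.0, c=1.2, lam=1.0, claim_mean=1.0, horizon=50.0))
```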
Applied econometricians are often faced with data that are less than ideal. A series may be observed with gaps, a model may call for variables that are observed at different frequencies, and econometric results are sometimes very fragile to the inclusion or omission of just a few observations in the sample. The papers in this volume discuss new econometric techniques for addressing these problems.
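A minimal sketch, not drawn from the volume, of two of the data problems mentioned above: filling gaps in an observed series and aligning a variable recorded at a different frequency before estimation. The series and dates are made-up assumptions.

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2020-01-01", periods=12, freq="MS")
monthly = pd.Series(np.arange(12, dtype=float), index=idx)
monthly.iloc[[3, 7]] = np.nan                   # observations missing from the sample

filled = monthly.interpolate()                  # one simple way to bridge the gaps
quarterly = monthly.resample("QS").mean()       # bring a monthly series to quarterly

print(filled)
print(quarterly)
```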
This book provides in-depth analyses of GDP accounting methods, statistical calibers, and comparative perspectives on Chinese GDP. Beginning with an exploration of international comparisons of GDP, the book introduces the theoretical backgrounds, data sources, and algorithms of the exchange rate method and the purchasing power parity method, and discusses the advantages, disadvantages, and latest developments of the two methods. The book further elaborates on the reasons for the imperfections of Chinese GDP data, including the limitations of current statistical techniques and of the accounting system, as well as the relatively confusing statistics for the service industry. The authors then make suggestions for improvement. Finally, the authors emphasize that evaluation of a country's economy and social development should not be limited to GDP alone, but should focus more on indicators of comprehensive national power, national welfare, and the people's livelihood. This book will be of interest to economists, China-watchers, and scholars of geopolitics.
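A minimal numeric sketch of how the two conversion methods the book compares can diverge: converting a local-currency GDP at the market exchange rate versus at a purchasing power parity (PPP) rate. All figures below are made up for illustration.

```python
gdp_local = 120_000.0        # GDP in local currency units (billions), assumed
exchange_rate = 7.0          # local currency units per US dollar (market rate), assumed
ppp_rate = 4.0               # local currency units per "international dollar", assumed

gdp_usd_exchange = gdp_local / exchange_rate   # exchange rate method
gdp_usd_ppp = gdp_local / ppp_rate             # purchasing power parity method

print(f"Exchange-rate method: {gdp_usd_exchange:,.0f} bn USD")
print(f"PPP method:           {gdp_usd_ppp:,.0f} bn international dollars")
```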
China's reform and opening-up have contributed to its long-term, rapid economic development, resulting in much greater economic strength and a much better life for its people. Meanwhile, the deepening economic integration between China and the world has produced an increasingly complex environment, a growing number of influencing factors and severe challenges for China's economic development. Under the "new normal" of the Chinese economy, accurate analysis of the economic situation is essential to scientific decision-making, to sustainable and healthy economic development and to building a moderately prosperous society in all respects. By applying statistical and national economic accounting methods, and drawing on detailed statistics and national economic accounting data, this book presents an in-depth analysis of key economic fields, such as the real estate economy, the automotive industry, the high-tech industry, investment, opening-up, income distribution of residents, economic structure, the balance of payments structure and financial operation, since the reform and opening-up and especially in recent years. It aims to depict the performance and characteristics of these key economic fields and their roles in the development of the national economy, thus providing useful suggestions for economic decision-making and facilitating the sustainable and healthy development of the economy and the realization of the goal of building a moderately prosperous society in all respects.
Co-integration, equilibrium and equilibrium correction are key concepts in modern applications of econometrics to real world problems. This book provides direction and guidance to the now vast literature facing students and graduate economists. Econometric theory is linked to practical issues such as how to identify equilibrium relationships, how to deal with structural breaks associated with regime changes and what to do when variables are of different orders of integration.
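A minimal sketch of checking for an equilibrium (cointegrating) relationship between two I(1) series using the Engle-Granger test in statsmodels. The simulated series are illustrative assumptions, not an example from the book.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.normal(size=n))            # a random walk, integrated of order 1
y = 2.0 * x + rng.normal(scale=0.5, size=n)  # shares x's stochastic trend

t_stat, p_value, _ = coint(y, x)
print(f"Engle-Granger t-statistic = {t_stat:.2f}, p-value = {p_value:.3f}")
# A small p-value supports an equilibrium-correction representation for (y, x).
```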
The book has been tested and refined through years of classroom teaching experience. With an abundance of examples, problems, and fully worked out solutions, the text introduces the financial theory and relevant mathematical methods in a mathematically rigorous yet engaging way. This textbook provides complete coverage of the discrete-time financial models that form the cornerstones of financial derivative pricing theory. Unlike similar texts in the field, this one presents multiple problem-solving approaches, linking related comprehensive techniques for pricing different types of financial derivatives. Key features: in-depth coverage of discrete-time theory and methodology; numerous fully worked out examples and exercises in every chapter; a mathematically rigorous and consistent treatment that bridges basic and more advanced concepts; and a judicious balance of financial theory with mathematical and computational methods. Guide to the material: this revision contains almost 200 pages of new material in all chapters, a new chapter on elementary probability theory, an expanded set of solved problems and additional exercises, and answers to all exercises. This book is a comprehensive, self-contained, and unified treatment of the main theory and applications of the mathematical methods behind modern-day financial mathematics. Table of contents: List of Figures and Tables; Preface; Part I, Introduction to Pricing and Management of Financial Securities (1 Mathematics of Compounding; 2 Primer on Pricing Risky Securities; 3 Portfolio Management; 4 Primer on Derivative Securities); Part II, Discrete-Time Modelling (5 Single-Period Arrow-Debreu Models; 6 Introduction to Discrete-Time Stochastic Calculus; 7 Replication and Pricing in the Binomial Tree Model; 8 General Multi-Asset Multi-Period Model); Appendices (A Elementary Probability Theory; B Glossary of Symbols and Abbreviations; C Answers and Hints to Exercises); References; Index. Biographies: Giuseppe Campolieti is Professor of Mathematics at Wilfrid Laurier University in Waterloo, Canada. He has been a Natural Sciences and Engineering Research Council postdoctoral research fellow and university research fellow at the University of Toronto. In 1998, he joined the Masters in Mathematical Finance program as an instructor, and later served as an adjunct professor in financial mathematics until 2002. Dr. Campolieti also founded a financial software and consulting company in 1998. He joined Laurier in 2002 as Associate Professor of Mathematics and as SHARCNET Chair in Financial Mathematics. Roman N. Makarov is Associate Professor and Chair of Mathematics at Wilfrid Laurier University. Prior to joining Laurier in 2003, he was an Assistant Professor of Mathematics at the Siberian State University of Telecommunications and Informatics and a senior research fellow at the Laboratory of Monte Carlo Methods at the Institute of Computational Mathematics and Mathematical Geophysics in Novosibirsk, Russia.
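A minimal sketch of risk-neutral pricing in the binomial tree model treated in Part II of the book (Cox-Ross-Rubinstein parametrization, European call); the parameter values are illustrative assumptions.

```python
import math

def binomial_call(S0, K, r, sigma, T, N):
    """European call price in an N-step Cox-Ross-Rubinstein binomial tree."""
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                              # down factor
    q = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)

    # terminal payoffs indexed by the number of up moves j
    values = [max(S0 * u**j * d**(N - j) - K, 0.0) for j in range(N + 1)]
    # backward induction through the tree
    for step in range(N, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]

print(binomial_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, N=200))
# With N large this approaches the Black-Scholes value (about 10.45 here).
```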
Data Stewardship for Open Science: Implementing FAIR Principles has been written with the intention of making scientists, funders, and innovators in all disciplines and stages of their professional activities broadly aware of the need, complexity, and challenges associated with open science, modern science communication, and data stewardship. The FAIR principles are used as a guide throughout the text, and this book should leave experimentalists consciously incompetent about data stewardship and motivated to respect data stewards as representatives of a new profession, while possibly motivating others to consider a career in the field. The ebook, available at no additional cost when you buy the paperback, will be updated every six months on average (provided that significant updates are needed or available). Readers will have the opportunity to contribute material towards these updates, and to develop their own data management plans, via the free Data Stewardship Wizard.
Time Series: A First Course with Bootstrap Starter provides an introductory course on time series analysis that satisfies the triptych of (i) mathematical completeness, (ii) computational illustration and implementation, and (iii) conciseness and accessibility to upper-level undergraduate and M.S. students. Basic theoretical results are presented in a mathematically convincing way, and the methods of data analysis are developed through examples and exercises parsed in R. A student with a basic course in mathematical statistics will learn both how to analyze time series and how to interpret the results. The book provides the foundation of time series methods, including linear filters and a geometric approach to prediction. The important paradigm of ARMA models is studied in-depth, as well as frequency domain methods. Entropy and other information theoretic notions are introduced, with applications to time series modeling. The second half of the book focuses on statistical inference, the fitting of time series models, as well as computational facets of forecasting. Many time series of interest are nonlinear in which case classical inference methods can fail, but bootstrap methods may come to the rescue. Distinctive features of the book are the emphasis on geometric notions and the frequency domain, the discussion of entropy maximization, and a thorough treatment of recent computer-intensive methods for time series such as subsampling and the bootstrap. There are more than 600 exercises, half of which involve R coding and/or data analysis. Supplements include a website with 12 key data sets and all R code for the book's examples, as well as the solutions to exercises.
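A minimal sketch, in Python rather than the book's R, of two topics the blurb names: fitting an ARMA-type model and using a simple moving-block bootstrap for a dependent series. The simulated AR(1) data and the block length are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
e = rng.normal(size=400)
x = np.zeros(400)
for t in range(1, 400):
    x[t] = 0.6 * x[t - 1] + e[t]                 # an AR(1) time series

res = ARIMA(x, order=(1, 0, 1)).fit()            # ARMA(1,1) via statsmodels
print(res.params)

def block_bootstrap_se(series, block_len=20, n_boot=1000):
    """Moving-block bootstrap standard error of the sample mean."""
    n = len(series)
    starts = np.arange(n - block_len + 1)
    means = []
    for _ in range(n_boot):
        idx = np.concatenate([np.arange(s, s + block_len)
                              for s in rng.choice(starts, size=n // block_len)])
        means.append(series[idx].mean())
    return np.std(means)

print(block_bootstrap_se(x))
```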
Introduction to Statistical Decision Theory: Utility Theory and Causal Analysis provides the theoretical background to approach decision theory from a statistical perspective. It covers both traditional approaches, in terms of value theory and expected utility theory, and recent developments, in terms of causal inference. The book is specifically designed to appeal to students and researchers who intend to acquire a knowledge of statistical science based on decision theory. Features: covers approaches for making decisions under certainty, risk, and uncertainty; illustrates expected utility theory and its extensions; describes approaches to elicit the utility function; reviews classical and Bayesian approaches to statistical inference based on decision theory; and discusses the role of causal analysis in statistical decision theory.
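A minimal numeric sketch of the expected-utility criterion the book reviews: choose the action with the highest probability-weighted utility. The states, probabilities, payoffs and utility function are all illustrative assumptions.

```python
import math

p_states = [0.3, 0.5, 0.2]                        # beliefs over three states
payoffs = {"act_a": [100, 50, -20],               # monetary consequences
           "act_b": [60, 60, 10]}

def utility(x, wealth=100.0):
    return math.log(wealth + x)                   # a simple risk-averse utility

expected_utility = {a: sum(p * utility(x) for p, x in zip(p_states, xs))
                    for a, xs in payoffs.items()}
best = max(expected_utility, key=expected_utility.get)
print(expected_utility, "->", best)
```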
The advent of "Big Data" has brought with it a rapid diversification of data sources, requiring analysis that accounts for the fact that these data have often been generated and recorded for different reasons. Data integration involves combining data residing in different sources to enable statistical inference, or to generate new statistical data for purposes that cannot be served by each source on its own. This can yield significant gains for scientific as well as commercial investigations. However, valid analysis of such data should allow for the additional uncertainty due to entity ambiguity, whenever it is not possible to state with certainty that the integrated source is the target population of interest. Analysis of Integrated Data aims to provide a solid theoretical basis for this statistical analysis in three generic settings of entity ambiguity: statistical analysis of linked datasets that may contain linkage errors; datasets created by a data fusion process, where joint statistical information is simulated using the information in marginal data from non-overlapping sources; and estimation of target population size when target units are either partially or erroneously covered in each source. Covers a range of topics under an overarching perspective of data integration. Focuses on statistical uncertainty and inference issues arising from entity ambiguity. Features state of the art methods for analysis of integrated data. Identifies the important themes that will define future research and teaching in the statistical analysis of integrated data. Analysis of Integrated Data is aimed primarily at researchers and methodologists interested in statistical methods for data from multiple sources, with a focus on data analysts in the social sciences, and in the public and private sectors.
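A minimal sketch of one of the entity-ambiguity settings described above: estimating the size of a target population from two partially overlapping sources with the dual-system (Lincoln-Petersen) estimator. The counts are illustrative assumptions, and linkage between the sources is assumed to be error-free.

```python
n1 = 800          # units recorded in source 1
n2 = 600          # units recorded in source 2
m = 480           # units linked to both sources (assumed linked without error)

# Lincoln-Petersen estimator of the target population size
N_hat = n1 * n2 / m
print(f"Estimated population size: {N_hat:.0f}")   # 1000 with these counts
```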
The beginning of the age of artificial intelligence and machine learning has created new challenges and opportunities for data analysts, statisticians, mathematicians, econometricians, computer scientists and many others. At the root of these techniques are algorithms and methods for clustering and classifying different types of large datasets, including time series data. Time Series Clustering and Classification includes relevant developments on observation-based, feature-based and model-based traditional and fuzzy clustering methods, feature-based and model-based classification methods, and machine learning methods. It presents a broad and self-contained overview of techniques for both researchers and students. Features: provides an overview of the methods and applications of pattern recognition of time series; covers a wide range of techniques, including unsupervised and supervised approaches; includes a range of real examples from medicine, finance, environmental science, and more; R and MATLAB code and relevant data sets are available on a supplementary website.
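A minimal feature-based clustering sketch in the spirit of the book, written in Python rather than the book's R/MATLAB: each series is represented by its first few autocorrelations and then clustered with k-means. The two simulated AR(1) groups are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import acf
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def ar1(phi, n=200):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

series = [ar1(0.8) for _ in range(10)] + [ar1(-0.5) for _ in range(10)]
features = np.array([acf(s, nlags=5)[1:] for s in series])   # drop the lag-0 term

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)    # the two AR(1) groups should largely separate
```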
The composition of portfolios is one of the most fundamental and important methods in financial engineering, used to control the risk of investments. This book provides a comprehensive overview of statistical inference for portfolios and its various applications. A variety of asset processes are introduced, including non-Gaussian stationary processes, nonlinear processes and non-stationary processes, and the book provides a framework for statistical inference using local asymptotic normality (LAN). The approach is generalized for portfolio estimation, so that many important problems can be covered. This book can primarily be used as a reference by researchers in statistics, mathematics, finance, econometrics, and genomics. It can also be used as a textbook by senior undergraduate and graduate students in these fields.
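A minimal sketch of the kind of quantity that portfolio estimation targets: plug-in global minimum-variance weights computed from an estimated covariance matrix. This illustrates the object of inference only, not the book's LAN-based theory, and the return data are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=(500, 4))   # 500 days, 4 assets (simulated)

Sigma_hat = np.cov(returns, rowvar=False)           # estimated covariance matrix
ones = np.ones(Sigma_hat.shape[0])
w = np.linalg.solve(Sigma_hat, ones)
w /= w.sum()                                        # weights summing to one
print(w)                                            # estimated minimum-variance portfolio
```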
Principles of Copula Theory explores the state of the art on copulas and provides you with the foundation to use copulas in a variety of applications. Throughout the book, historical remarks and further readings highlight active research in the field, including new results, streamlined presentations, and new proofs of old results. After covering the essentials of copula theory, the book addresses the issue of modeling dependence among components of a random vector using copulas. It then presents copulas from the point of view of measure theory, compares methods for the approximation of copulas, and discusses the Markov product for 2-copulas. The authors also examine selected families of copulas that possess appealing features from both theoretical and applied viewpoints. The book concludes with in-depth discussions on two generalizations of copulas: quasi- and semi-copulas. Although copulas are not the solution to all stochastic problems, they are an indispensable tool for understanding several problems about stochastic dependence. This book gives you the solid and formal mathematical background to apply copulas to a range of mathematical areas, such as probability, real analysis, measure theory, and algebraic structures.
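A minimal sketch of the basic construction the book formalizes: sample from a Gaussian copula and attach arbitrary margins via Sklar's theorem. The correlation value and the choice of margins are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7
cov = [[1.0, rho], [rho, 1.0]]

z = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
u = stats.norm.cdf(z)                         # uniforms carrying Gaussian dependence
x = stats.expon.ppf(u[:, 0])                  # exponential margin
y = stats.t.ppf(u[:, 1], df=4)                # Student-t margin

print(np.corrcoef(u[:, 0], u[:, 1])[0, 1])    # the dependence lives in the copula
```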
Winner of the 2017 De Groot Prize awarded by the International Society for Bayesian Analysis (ISBA). A relatively new area of research, adversarial risk analysis (ARA) informs decision making when there are intelligent opponents and uncertain outcomes. Adversarial Risk Analysis develops methods for allocating defensive or offensive resources against intelligent adversaries. Many examples throughout illustrate the application of the ARA approach to a variety of games and strategic situations. The book:
* Focuses on the recent subfield of decision analysis, ARA
* Compares ideas from decision theory and game theory
* Uses multi-agent influence diagrams (MAIDs) throughout to help readers visualize complex information structures
* Applies the ARA approach to simultaneous games, auctions, sequential games, and defend-attack games
* Contains an extended case study based on a real application in railway security, which provides a blueprint for how to perform ARA in similar security situations
* Includes exercises at the end of most chapters, with selected solutions at the back of the book
The book shows decision makers how to build Bayesian models for the strategic calculation of their opponents, enabling decision makers to maximize their expected utility or minimize their expected loss. This new approach to risk analysis asserts that analysts should use Bayesian thinking to describe their beliefs about an opponent's goals, resources, optimism, and type of strategic calculation, such as minimax and level-k thinking. Within that framework, analysts then solve the problem from the perspective of the opponent while placing subjective probability distributions on all unknown quantities. This produces a distribution over the actions of the opponent and enables analysts to maximize their expected utilities.
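A minimal sketch of the ARA idea in a defend-attack setting: the defender places a subjective distribution on the attacker's unknowns, simulates the attacker's best response, and picks the defence with the highest expected utility. The entire numeric setup (costs, success function, beliefs) is an illustrative assumption, not a model from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
defences = [0.0, 1.0, 2.0]        # spending levels available to the defender
attacks = [0.0, 1.0]              # attacker either refrains (0) or attacks (1)
loss, defence_cost, attack_cost = 10.0, 1.0, 0.5

def p_success(d, a, skill):
    """Assumed probability the attack succeeds, decreasing in defence spending."""
    return 0.0 if a == 0.0 else skill / (1.0 + d)

def defender_expected_utility(d, n_sim=5000):
    utilities = []
    for _ in range(n_sim):
        skill = rng.beta(2, 2)                   # subjective belief about attacker skill
        # attacker assumed to choose the action maximising its own expected gain
        a = max(attacks,
                key=lambda a_: p_success(d, a_, skill) * loss - attack_cost * a_)
        utilities.append(-defence_cost * d - p_success(d, a, skill) * loss)
    return np.mean(utilities)

best = max(defences, key=defender_expected_utility)
print("Defence level with highest expected utility:", best)
```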
For courses in econometrics. A clear, practical introduction to econometrics: Using Econometrics: A Practical Guide offers students an innovative introduction to elementary econometrics. Through real-world examples and exercises, the book covers the topic of single-equation linear regression analysis in an easily understandable format. The Seventh Edition is appropriate for all levels: beginning econometrics students, regression users seeking a refresher, and experienced practitioners who want a convenient reference. Praised as one of the most important texts of the last 30 years, the book retains the clarity and practicality of previous editions, with a number of substantial improvements throughout.
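A minimal sketch of the book's core topic, single-equation linear regression estimated by ordinary least squares; the simulated data and true coefficients are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)   # true intercept 2.0, slope 0.5

X = sm.add_constant(x)               # adds the intercept column
results = sm.OLS(y, X).fit()
print(results.summary())             # coefficients, t-statistics, R-squared, etc.
```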
* A useful guide to financial product modeling and to minimizing business risk and uncertainty
* Looks at a wide range of financial assets and markets and correlates them with enterprises' profitability
* Introduces advanced and novel machine learning techniques in finance, such as support vector machines, neural networks, random forests, k-nearest neighbors, extreme learning machines and deep learning approaches, and applies them to analyze finance data sets (a brief sketch follows this list)
* Real-world examples to further understanding
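A minimal scikit-learn sketch of one technique named above (a random forest) applied to a toy "direction of next-day return" task. The features and data are simulated assumptions, not a real finance data set or an example from the book.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=1000)
X = np.column_stack([np.roll(returns, k) for k in (1, 2, 3)])[3:]   # lagged returns
y = (returns[3:] > 0).astype(int)                                   # next-day direction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(model.score(X_te, y_te))      # about 0.5 on pure noise, as it should be
```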
Despite the unobserved components model (UCM) having many advantages over more popular forecasting techniques based on regression analysis, exponential smoothing, and ARIMA, the UCM is not well known among practitioners outside the academic community. Time Series Modelling with Unobserved Components rectifies this deficiency by giving a practical overview of the UCM approach, covering some theoretical details, several applications, and the software for implementing UCMs. The book's first part discusses introductory time series and prediction theory. Unlike most other books on time series, this text includes a chapter on prediction at the beginning because the problem of predicting is not limited to the field of time series analysis. The second part introduces the UCM, the state space form, and related algorithms. It also provides practical modeling strategies to build and select the UCM that best fits the needs of time series analysts. The third part presents real-world applications, with a chapter focusing on business cycle analysis and the construction of band-pass filters using UCMs. The book also reviews software packages that offer ready-to-use procedures for UCMs as well as systems popular among statisticians and econometricians that allow general estimation of models in state space form. This book demonstrates the numerous benefits of using UCMs to model time series data. UCMs are simple to specify, their results are easy to visualize and communicate to non-specialists, and their forecasting performance is competitive. Moreover, various types of outliers can easily be identified, missing values are effortlessly managed, and working contemporaneously with time series observed at different frequencies poses no problem.
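A minimal sketch of fitting a basic UCM (a local level model) with the state space implementation in statsmodels; the simulated series is an illustrative assumption, not one of the book's applications.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.5, size=300))        # slowly drifting level
y = level + rng.normal(0, 1.0, size=300)               # noisy observations

model = sm.tsa.UnobservedComponents(y, level='local level')
res = model.fit(disp=False)
print(res.summary())
print(res.forecast(steps=10))                          # out-of-sample predictions
```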
Managers are often under great pressure to improve the performance of their organizations. To improve performance, one needs to constantly evaluate the operations and processes related to producing products, providing services, and marketing and selling products. Performance evaluation and benchmarking are widely used methods for identifying and adopting best practices in order to improve performance and increase productivity, and they are particularly valuable when no objective or engineered standard is available to define efficient and effective performance. For this reason, benchmarking is often used in managing service operations, because service standards (benchmarks) are more difficult to define than manufacturing standards. Individual benchmarks can be established, but they are limited in that they work with single measurements one at a time. It is difficult to evaluate an organization's performance when there are multiple inputs and outputs to the system, and the difficulties are compounded when the relationships between the inputs and the outputs are complex and involve unknown tradeoffs. It is therefore critical to establish benchmarks where multiple measurements exist. This book introduces the methodology of data envelopment analysis (DEA) and its uses in performance evaluation and benchmarking in the context of multiple performance measures.
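A minimal sketch of the input-oriented, constant-returns-to-scale DEA model solved as a linear program with SciPy, kept to one input and one output so the scores are easy to check by hand; the small data set is an illustrative assumption, not an example from the book.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0]])   # one input row, four DMUs (columns)
Y = np.array([[1.0, 3.0, 3.0, 4.0]])   # one output row, four DMUs
n = X.shape[1]

def ccr_efficiency(o):
    """Efficiency of DMU o: minimize theta such that some peer combination uses
    at most theta times o's inputs while producing at least o's outputs."""
    c = np.r_[1.0, np.zeros(n)]                            # variables: theta, lambdas
    A_in = np.hstack([-X[:, [o]], X])                      # sum_j l_j x_j <= theta x_o
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])     # sum_j l_j y_j >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

print([round(ccr_efficiency(o), 3) for o in range(n)])     # [0.5, 1.0, 0.75, 0.8]
```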
Law and economics research has had an enormous impact on the laws of contracts, torts, property, crimes, corporations, and antitrust, as well as public regulation and fundamental rights. The Law and Economics of Patent Damages, Antitrust, and Legal Process examines several areas of important research by a variety of international scholars. It contains technical papers on the appropriate way to estimate damages in patent disputes, as well as methods for evaluating relevant markets and vertically integrated firms when determining the competitive effects of mergers and other actions. There are also papers on the implications of different legal processes, regulations, and liability rules for consumer welfare, which range from the impact of delays in legal decisions in labour cases in France to issues of criminal liability related to the use of artificial intelligence. This volume of Research in Law and Economics is a must-read for researchers and professionals working on patent damages, antitrust, labour, and legal process.
This book provides a detailed introduction to the theoretical and methodological foundations of production efficiency analysis using benchmarking. Two of the more popular methods of efficiency evaluation are Stochastic Frontier Analysis (SFA) and Data Envelopment Analysis (DEA), both of which are based on the concept of a production possibility set and its frontier. Depending on the assumed objectives of the decision-making unit, a production, cost, or profit frontier is constructed from observed data on input and output quantities and prices. While SFA uses different maximum likelihood estimation techniques to estimate a parametric frontier, DEA relies on mathematical programming to create a nonparametric frontier. Yet another alternative is the convex nonparametric frontier, which is based on the assumed convexity of the production possibility set and creates a piecewise linear frontier consisting of a number of tangent hyperplanes. Three of the papers in this volume provide a detailed and relatively easy-to-follow exposition of the underlying theory from neoclassical production economics and offer step-by-step instructions on the appropriate models to apply in different contexts and how to implement them. Of particular appeal are the instructions on (i) how to write the code for different SFA models in STATA, (ii) how to write a VBA macro for repetitive solution of the DEA problem for each production unit in Excel Solver, and (iii) how to write the code for convex nonparametric frontier estimation. The three other papers in the volume are primarily theoretical and will be of interest to PhD students and researchers hoping to make methodological and conceptual contributions to the field of nonparametric efficiency analysis.
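A minimal corrected-OLS (COLS) sketch of a deterministic production frontier, offered only as a simplified stand-in for the SFA, DEA and convex nonparametric methods the papers implement in STATA, Excel VBA and dedicated code; the Cobb-Douglas data below are simulated assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, size=100)                        # single input
y = 2.0 * x**0.6 * np.exp(-rng.exponential(0.2, 100))   # output with inefficiency

X = sm.add_constant(np.log(x))
res = sm.OLS(np.log(y), X).fit()                        # log-log production regression

resid = res.resid
efficiency = np.exp(resid - resid.max())                # shift frontier to envelop the data
print(efficiency.round(3)[:10])                         # 1.0 marks the best-practice unit(s)
```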
This book scientifically tests the assertion that accommodative monetary policy can eliminate the "crowd out" problem, allowing fiscal stimulus programs (such as tax cuts or increased government spending) to stimulate the economy as intended. It also tests whether natural growth in the economy can cure the crowd out problem as well as, or better than, accommodative policy. The book is intended to be the largest-scale scientific test ever performed on this topic. It includes about 800 separate statistical tests on the U.S. economy covering different parts or all of the period 1960-2010. These tests focus on whether accommodative monetary policy, which increases the pool of loanable resources, can offset the crowd out problem as well as natural growth in the economy can. The book, employing the best scientific methods available to economists for this type of problem, concludes that accommodative monetary policy could have worked, but that until the quantitative easing program, Federal Reserve efforts to accommodate fiscal stimulus programs were not large enough to offset more than 23% to 44% of any one year's crowd out problem. That provides the scientific part of the answer to why accommodative monetary policy didn't accommodate: too little of it was tried. The book also tests whether other increases in loanable funds, occurring because of natural growth in the economy or changes in the savings rate, can also offset crowd out. It concludes they can, and that these changes tend to be several times as effective as accommodative monetary policy. This book's companion volume, Why Fiscal Stimulus Programs Fail, explores the policy implications of these results.
Complex dynamics constitute a growing and increasingly important area, as they offer strong potential to explain and formalize natural, physical, financial and economic phenomena. This book pursues the ambitious goal of bringing together an extensive body of knowledge regarding complex dynamics from various academic disciplines. Beyond its focus on economics and finance, including, for instance, the evolution of macroeconomic growth models towards nonlinear structures as well as signal processing applications to stock markets, fundamental parts of the book are devoted to the use of nonlinear dynamics in mathematics, statistics, signal theory and processing. Numerous examples and applications, almost 700 illustrations and numerical simulations based on the use of Matlab make the book an essential reference for researchers and students from many different disciplines who are interested in the nonlinear field. An appendix recapitulates the basic mathematical concepts required to use the book.
Financial econometrics combines mathematical and statistical theory and techniques to understand and solve problems in financial economics. Modeling and forecasting financial time series, such as prices, returns, interest rates, financial ratios, and defaults, are important parts of this field. In Financial Econometrics, you'll be introduced to this growing discipline and the concepts associated with it, from background material on probability theory and statistics to information regarding the properties of specific models and their estimation procedures. With this book as your guide, you'll become familiar with:
* Autoregressive conditional heteroskedasticity (ARCH) and GARCH modeling
* Principal components analysis (PCA) and factor analysis
* Stable processes and ARMA and GARCH models with fat-tailed errors
* Robust estimation methods
* Vector autoregressive and cointegrated processes, including advanced estimation methods for cointegrated systems
* And much more
The experienced author team of Svetlozar Rachev, Stefan Mittnik, Frank Fabozzi, Sergio Focardi, and Teo Jasic not only presents you with an abundant amount of information on financial econometrics, but they also walk you through a wide array of examples to solidify your understanding of the issues discussed. Filled with in-depth insights and expert advice, Financial Econometrics provides comprehensive coverage of this discipline and clear explanations of how the models associated with it fit into today's investment management process.
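A minimal sketch of one model family listed above: simulating and fitting a GARCH(1,1). It assumes the third-party `arch` package is installed (pip install arch), and the return series and true parameters are simulated assumptions, not an example from the book.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
n, omega, alpha, beta = 2000, 0.05, 0.1, 0.85
sigma2 = np.full(n, omega / (1 - alpha - beta))     # start at the unconditional variance
r = np.zeros(n)
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.normal()

res = arch_model(r, vol="GARCH", p=1, q=1).fit(disp="off")
print(res.params)        # estimates should be near omega, alpha and beta above
```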