Econometrics
This book provides in-depth analyses of the accounting methods behind GDP, statistical calibers, and comparative perspectives on Chinese GDP. Beginning with an exploration of international comparisons of GDP, the book introduces the theoretical backgrounds, data sources, and algorithms of the exchange rate method and the purchasing power parity method, and discusses the advantages, disadvantages, and latest developments of the two methods. It further elaborates on the reasons for the imperfections of Chinese GDP data, including the limitations of current statistical techniques and the accounting system, as well as the relatively confusing statistics for the service industry. The authors then make suggestions for improvement. Finally, the authors emphasize that evaluation of a country's economic and social development should not be limited to GDP alone, but should focus more on indicators of comprehensive national power, national welfare, and the people's livelihood. This book will be of interest to economists, China-watchers, and scholars of geopolitics.
* Furnishes a thorough introduction and detailed information about the linear regression model, including how to understand and interpret its results, test assumptions, and adapt the model when assumptions are not satisfied.
* Uses numerous graphs in R to illustrate the model's results, assumptions, and other features.
* Does not assume a background in calculus or linear algebra; an introductory statistics course and familiarity with elementary algebra are sufficient.
* Provides many examples using real-world datasets relevant to various academic disciplines.
* Fully integrates the R software environment in its numerous examples.
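As a flavor of the workflow this blurb describes, here is a minimal sketch in base R. It uses the built-in mtcars data rather than any of the book's own datasets, and is only an illustration of fitting a linear regression and inspecting its assumptions graphically.

```r
# Minimal illustration (not from the book): fit a linear regression in R
# and inspect its assumptions graphically, using the built-in mtcars data.
fit <- lm(mpg ~ wt + hp, data = mtcars)
summary(fit)          # coefficients, standard errors, t-tests, R-squared

par(mfrow = c(2, 2))  # arrange the four diagnostic plots in a grid
plot(fit)             # residuals vs fitted, Q-Q, scale-location, leverage
```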
In this book, leading German econometricians in different fields present survey articles on the most important new methods in econometrics. The book gives an overview of the field, showing the progress made in recent years and the problems that remain.
Co-integration, equilibrium and equilibrium correction are key concepts in modern applications of econometrics to real-world problems. This book provides direction and guidance to the now vast literature facing students and graduate economists. Econometric theory is linked to practical issues such as how to identify equilibrium relationships, how to deal with structural breaks associated with regime changes, and what to do when variables are of different orders of integration.
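For readers unfamiliar with the idea, the following base-R sketch (not from the book) simulates two series that share a common stochastic trend and illustrates the residual-based, Engle-Granger-style check for an equilibrium relationship; a formal unit-root test would be used in practice.

```r
# Illustrative sketch: two I(1) series sharing a random-walk trend are
# cointegrated; residuals of their equilibrium regression revert to zero.
set.seed(1)
trend <- cumsum(rnorm(500))            # common stochastic trend
y1 <- trend + rnorm(500, sd = 0.5)
y2 <- 2 * trend + rnorm(500, sd = 0.5)

eq <- lm(y1 ~ y2)                      # step 1: candidate equilibrium relation
e  <- resid(eq)

# Step 2 (informal): an AR(1) coefficient well below 1 in the residuals
# suggests mean reversion; in practice a formal test, such as an augmented
# Dickey-Fuller test on e, would be applied.
coef(lm(e[-1] ~ e[-length(e)] - 1))
```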
This textbook provides complete coverage of discrete-time financial models that form the cornerstones of financial derivative pricing theory. The book has been tested and refined through years of classroom teaching experience. With an abundance of examples, problems, and fully worked-out solutions, the text introduces the financial theory and relevant mathematical methods in a mathematically rigorous yet engaging way. Unlike similar texts in the field, this one presents multiple problem-solving approaches, linking related comprehensive techniques for pricing different types of financial derivatives.

Key features:
* In-depth coverage of discrete-time theory and methodology.
* Numerous fully worked-out examples and exercises in every chapter.
* Mathematically rigorous and consistent, yet bridging various basic and more advanced concepts.
* A judicious balance of financial theory and mathematical and computational methods.
* A guide to the material.

This revision contains:
* Almost 200 pages of new material in all chapters.
* A new chapter on elementary probability theory.
* An expanded set of solved problems and additional exercises.
* Answers to all exercises.

This book is a comprehensive, self-contained, and unified treatment of the main theory and application of mathematical methods behind modern-day financial mathematics.

Table of Contents: List of Figures and Tables; Preface; Part I, Introduction to Pricing and Management of Financial Securities: 1 Mathematics of Compounding; 2 Primer on Pricing Risky Securities; 3 Portfolio Management; 4 Primer on Derivative Securities. Part II, Discrete-Time Modelling: 5 Single-Period Arrow-Debreu Models; 6 Introduction to Discrete-Time Stochastic Calculus; 7 Replication and Pricing in the Binomial Tree Model; 8 General Multi-Asset Multi-Period Model. Appendices: A Elementary Probability Theory; B Glossary of Symbols and Abbreviations; C Answers and Hints to Exercises. References; Index.

Biographies: Giuseppe Campolieti is Professor of Mathematics at Wilfrid Laurier University in Waterloo, Canada. He has been a Natural Sciences and Engineering Research Council postdoctoral research fellow and university research fellow at the University of Toronto. In 1998, he joined the Masters in Mathematical Finance program as an instructor, serving as an adjunct professor in financial mathematics until 2002. Dr. Campolieti also founded a financial software and consulting company in 1998. He joined Laurier in 2002 as Associate Professor of Mathematics and as SHARCNET Chair in Financial Mathematics. Roman N. Makarov is Associate Professor and Chair of Mathematics at Wilfrid Laurier University. Prior to joining Laurier in 2003, he was an Assistant Professor of Mathematics at Siberian State University of Telecommunications and Informatics and a senior research fellow at the Laboratory of Monte Carlo Methods at the Institute of Computational Mathematics and Mathematical Geophysics in Novosibirsk, Russia.
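As a taste of the discrete-time methodology, here is a small self-contained R function (an illustration with arbitrary parameter values, not the book's code) that prices a European call on a Cox-Ross-Rubinstein binomial tree.

```r
# Cox-Ross-Rubinstein binomial tree price of a European call (illustration).
crr_call <- function(S0, K, r, sigma, mat, n) {
  dt <- mat / n
  u <- exp(sigma * sqrt(dt)); d <- 1 / u
  p <- (exp(r * dt) - d) / (u - d)            # risk-neutral up-probability
  j <- 0:n                                    # number of up-moves over n periods
  payoff <- pmax(S0 * u^j * d^(n - j) - K, 0) # terminal call payoffs
  # Discounted risk-neutral expectation of the terminal payoff:
  exp(-r * mat) * sum(choose(n, j) * p^j * (1 - p)^(n - j) * payoff)
}
crr_call(S0 = 100, K = 100, r = 0.05, sigma = 0.2, mat = 1, n = 200)
# about 10.43, close to the Black-Scholes value for these parameters
```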
Modelling Spatial and Spatial-Temporal Data: A Bayesian Approach is aimed at statisticians and at quantitative social, economic, and public health students and researchers who work with small-area spatial and spatial-temporal data. It assumes a grounding in statistical theory up to the standard linear regression model. The book compares hierarchical and spatial econometric modelling, providing both a reference and a teaching text with exercises in each chapter. It gives a fully Bayesian, self-contained treatment of the underlying statistical theory, with chapters dedicated to substantive applications. The book includes WinBUGS and R code, and all datasets are available online.

Part I covers fundamental issues arising when modelling spatial and spatial-temporal data. Part II focuses on modelling cross-sectional spatial data and begins by describing exploratory methods that help guide the modelling process. There are then two theoretical chapters on Bayesian models and a chapter of applications. Two chapters follow on spatial econometric modelling, one describing different models, the other substantive applications. Part III discusses modelling spatial-temporal data, first introducing models for time series data. Exploratory methods for detecting different types of space-time interaction are presented, followed by two chapters on the theory of space-time separable (without space-time interaction) and inseparable (with space-time interaction) models. An applications chapter includes: the evaluation of a policy intervention; analysing the temporal dynamics of crime hotspots; chronic disease surveillance; and testing for evidence of spatial spillovers in the spread of an infectious disease. A final chapter suggests some future directions and challenges.

Robert Haining is Emeritus Professor in Human Geography, University of Cambridge, England. He is the author of Spatial Data Analysis in the Social and Environmental Sciences (1990) and Spatial Data Analysis: Theory and Practice (2003). He is a Fellow of the RGS-IBG and of the Academy of Social Sciences. Guangquan Li is Senior Lecturer in Statistics in the Department of Mathematics, Physics and Electrical Engineering, Northumbria University, Newcastle, England. His research includes the development and application of Bayesian methods in the social and health sciences. He is a Fellow of the Royal Statistical Society.
* A useful guide to financial product modeling and to minimizing business risk and uncertainty.
* Looks at a wide range of financial assets and markets and correlates them with enterprises' profitability.
* Introduces advanced and novel machine learning techniques in finance, such as support vector machines, neural networks, random forests, k-nearest neighbors, extreme learning machines, and deep learning approaches, and applies them to the analysis of financial datasets.
* Real-world examples to further understanding.
This book covers the econometric methods necessary for a practicing applied economist or data analyst, which requires both an understanding of statistical theory and of how it is used in actual applications. Chapters 1 to 9 present the material concerned with basic statistical theory. Chapters 10 to 13 introduce a number of topics that form the basis of more advanced option modules, such as time series methods in applied econometrics. To get the most out of these topics, companion files include Excel datasets and four-color figures, with pull-down menus to graph the data, calculate sample statistics, and estimate regression equations.

FEATURES:
* Integration of econometric methods with statistical foundations
* Worked examples of all models considered in the text
* Excel datasheets to facilitate estimation and application of models
* Instructor ancillaries for use as a textbook
* Includes many mathematical examples and problems for students to work directly with both standard and nonstandard models of behaviour, developing problem-solving and critical-thinking skills that are more valuable to students than memorizing content that will quickly be forgotten.
* The applications explored in the text emphasise issues of inequality, social mobility, culture and poverty to demonstrate the impact of behavioural economics in areas that students are most passionate about.
* The text has a standardized structure (6 parts, 3 chapters in each) which provides a clear and consistent roadmap for students taking the course.
This book presents selected peer-reviewed contributions from the International Conference on Time Series and Forecasting, ITISE 2018, held in Granada, Spain, on September 19-21, 2018. The first three parts of the book focus on the theory of time series analysis and forecasting, and discuss statistical methods, modern computational intelligence methodologies, econometric models, financial forecasting, and risk analysis. In turn, the last three parts are dedicated to applied topics and include papers on time series analysis in the earth sciences, energy time series forecasting, and time series analysis and prediction in other real-world problems. The book offers readers valuable insights into the different aspects of time series analysis and forecasting, allowing them to benefit both from its sophisticated and powerful theory, and from its practical applications, which address real-world problems in a range of disciplines. The ITISE conference series provides a valuable forum for scientists, engineers, educators and students to discuss the latest advances and implementations in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing computer science, mathematics, statistics and econometrics.
Principles of Copula Theory explores the state of the art on copulas and provides you with the foundation to use copulas in a variety of applications. Throughout the book, historical remarks and further readings highlight active research in the field, including new results, streamlined presentations, and new proofs of old results. After covering the essentials of copula theory, the book addresses the issue of modeling dependence among components of a random vector using copulas. It then presents copulas from the point of view of measure theory, compares methods for the approximation of copulas, and discusses the Markov product for 2-copulas. The authors also examine selected families of copulas that possess appealing features from both theoretical and applied viewpoints. The book concludes with in-depth discussions on two generalizations of copulas: quasi- and semi-copulas. Although copulas are not the solution to all stochastic problems, they are an indispensable tool for understanding several problems about stochastic dependence. This book gives you the solid and formal mathematical background to apply copulas to a range of mathematical areas, such as probability, real analysis, measure theory, and algebraic structures.
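To make the copula idea concrete, the following base-R sketch (illustrative, not from the book) samples from a bivariate Gaussian copula and shows how arbitrary margins can then be attached.

```r
# Sampling from a bivariate Gaussian copula with base R (illustration).
set.seed(42)
n <- 1000; rho <- 0.7
Sigma <- matrix(c(1, rho, rho, 1), 2, 2)

z <- matrix(rnorm(2 * n), n, 2) %*% chol(Sigma)  # correlated standard normals
u <- pnorm(z)                                    # copula sample on the unit square

# u has uniform margins; the copula carries only the dependence structure.
# Attach any desired margins, e.g. exponential and lognormal:
x <- qexp(u[, 1]); y <- qlnorm(u[, 2])
cor(x, y, method = "spearman")  # rank correlation is unchanged by the margins
```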
This book scientifically tests the assertion that accommodative monetary policy can eliminate the "crowd out" problem, allowing fiscal stimulus programs (such as tax cuts or increased government spending) to stimulate the economy as intended. It also tests whether natural growth in the economy can cure the crowd out problem as well or better. The book is intended to be the largest-scale scientific test ever performed on this topic: it includes about 800 separate statistical tests on the U.S. economy, covering different parts or all of the period 1960-2010. These tests focus on whether accommodative monetary policy, which increases the pool of loanable resources, can offset the crowd out problem as well as natural growth in the economy can. The book, employing the best scientific methods available to economists for this type of problem, concludes that accommodative monetary policy could have offset crowd out, but that until the quantitative easing program, Federal Reserve efforts to accommodate fiscal stimulus programs were not large enough to offset more than 23% to 44% of any one year's crowd out problem. That provides the science part of the answer as to why accommodative monetary policy didn't accommodate: too little of it was tried. The book also tests whether other increases in loanable funds, occurring because of natural growth in the economy or changes in the savings rate, can also offset crowd out. It concludes they can, and that these changes tend to be several times as effective as accommodative monetary policy. This book's companion volume, Why Fiscal Stimulus Programs Fail, explores the policy implications of these results.
This book has taken form over several years as a result of a number of courses taught at the University of Pennsylvania and at Columbia University and a series of lectures I have given at the International Monetary Fund. Indeed, I began writing down my notes systematically during the academic year 1972-1973 while at the University of California, Los Angeles. The diverse character of the audience, as well as my own conception of what an introductory and often terminal acquaintance with formal econometrics ought to encompass, have determined the style and content of this volume. The selection of topics and the level of discourse give sufficient variety so that the book can serve as the basis for several types of courses. As an example, a relatively elementary one-semester course can be based on Chapters one through five, omitting the appendices to these chapters and a few sections in some of the chapters so indicated. This would acquaint the student with the basic theory of the general linear model, some of the problems often encountered in empirical research, and some proposed solutions. For such a course, I should also recommend a brief excursion into Chapter seven (logit and probit analysis) in view of the increasing availability of data sets for which this type of analysis is more suitable than that based on the general linear model.
For courses in econometrics. A clear, practical introduction to econometrics: Using Econometrics: A Practical Guide offers students an innovative introduction to elementary econometrics. Through real-world examples and exercises, the book covers the topic of single-equation linear regression analysis in an easily understandable format. The Seventh Edition is appropriate for all levels: beginning econometrics students, regression users seeking a refresher, and experienced practitioners who want a convenient reference. Praised as one of the most important texts of the last 30 years, the book retains the clarity and practicality of previous editions, with a number of substantial improvements throughout.
Financial econometrics combines mathematical and statistical theory and techniques to understand and solve problems in financial economics. Modeling and forecasting financial time series, such as prices, returns, interest rates, financial ratios, and defaults, are important parts of this field. In Financial Econometrics, you'll be introduced to this growing discipline and the concepts associated with it--from background material on probability theory and statistics to information regarding the properties of specific models and their estimation procedures. With this book as your guide, you'll become familiar with: Autoregressive conditional heteroskedasticity (ARCH) and GARCH modeling Principal components analysis (PCA) and factor analysis Stable processes and ARMA and GARCH models with fat-tailed errors Robust estimation methods Vector autoregressive and cointegrated processes, including advanced estimation methods for cointegrated systems And much more The experienced author team of Svetlozar Rachev, Stefan Mittnik, Frank Fabozzi, Sergio Focardi, and Teo Jasic not only presents you with an abundant amount of information on financial econometrics, but they also walk you through a wide array of examples to solidify your understanding of the issues discussed. Filled with in-depth insights and expert advice, Financial Econometrics provides comprehensive coverage of this discipline and clear explanations of how the models associated with it fit into today's investment management process.
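As a small illustration of one of the topics listed above (nothing here is taken from the book), this base-R snippet simulates a GARCH(1,1) process and displays the volatility clustering such models generate.

```r
# Simulate a GARCH(1,1) process in base R (illustrative parameters).
set.seed(7)
n <- 2000
omega <- 1e-5; alpha <- 0.08; beta <- 0.90  # alpha + beta < 1: covariance stationary
eps  <- numeric(n)                          # simulated returns
sig2 <- numeric(n)                          # conditional variances
sig2[1] <- omega / (1 - alpha - beta)       # start at the unconditional variance
eps[1]  <- sqrt(sig2[1]) * rnorm(1)
for (t in 2:n) {
  sig2[t] <- omega + alpha * eps[t - 1]^2 + beta * sig2[t - 1]
  eps[t]  <- sqrt(sig2[t]) * rnorm(1)
}
acf(eps^2)  # autocorrelated squared returns: the signature of volatility clustering
```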
"Prof. Nitis Mukhopadhyay and Prof. Partha Pratim Sengupta, who edited this volume with great attention and rigor, have certainly carried out noteworthy activities." - Giovanni Maria Giorgi, University of Rome (Sapienza) "This book is an important contribution to the development of indices of disparity and dissatisfaction in the age of globalization and social strife." - Shelemyahu Zacks, SUNY-Binghamton "It will not be an overstatement when I say that the famous income inequality index or wealth inequality index, which is most widely accepted across the globe is named after Corrado Gini (1984-1965). ... I take this opportunity to heartily applaud the two co-editors for spending their valuable time and energy in putting together a wonderful collection of papers written by the acclaimed researchers on selected topics of interest today. I am very impressed, and I believe so will be its readers." - K.V. Mardia, University of Leeds Gini coefficient or Gini index was originally defined as a standardized measure of statistical dispersion intended to understand an income distribution. It has evolved into quantifying inequity in all kinds of distributions of wealth, gender parity, access to education and health services, environmental policies, and numerous other attributes of importance. Gini Inequality Index: Methods and Applications features original high-quality peer-reviewed chapters prepared by internationally acclaimed researchers. They provide innovative methodologies whether quantitative or qualitative, covering welfare economics, development economics, optimization/non-optimization, econometrics, air quality, statistical learning, inference, sample size determination, big data science, and some heuristics. Never before has such a wide dimension of leading research inspired by Gini's works and their applicability been collected in one edited volume. The volume also showcases modern approaches to the research of a number of very talented and upcoming younger contributors and collaborators. This feature will give readers a window with a distinct view of what emerging research in this field may entail in the near future.
Time Series: A First Course with Bootstrap Starter provides an introductory course on time series analysis that satisfies the triptych of (i) mathematical completeness, (ii) computational illustration and implementation, and (iii) conciseness and accessibility to upper-level undergraduate and M.S. students. Basic theoretical results are presented in a mathematically convincing way, and the methods of data analysis are developed through examples and exercises parsed in R. A student with a basic course in mathematical statistics will learn both how to analyze time series and how to interpret the results. The book provides the foundation of time series methods, including linear filters and a geometric approach to prediction. The important paradigm of ARMA models is studied in-depth, as well as frequency domain methods. Entropy and other information theoretic notions are introduced, with applications to time series modeling. The second half of the book focuses on statistical inference, the fitting of time series models, as well as computational facets of forecasting. Many time series of interest are nonlinear in which case classical inference methods can fail, but bootstrap methods may come to the rescue. Distinctive features of the book are the emphasis on geometric notions and the frequency domain, the discussion of entropy maximization, and a thorough treatment of recent computer-intensive methods for time series such as subsampling and the bootstrap. There are more than 600 exercises, half of which involve R coding and/or data analysis. Supplements include a website with 12 key data sets and all R code for the book's examples, as well as the solutions to exercises.
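The flavor of the bootstrap material can be suggested by a short base-R sketch (illustrative only) of a moving block bootstrap for the mean of a dependent series; resampling whole blocks preserves the serial dependence that an i.i.d. bootstrap would destroy.

```r
# Moving block bootstrap for the sample mean of a dependent series (sketch).
set.seed(11)
x <- arima.sim(model = list(ar = 0.6), n = 300)  # an AR(1) series
block_boot_mean <- function(x, b, B = 2000) {
  n <- length(x); k <- ceiling(n / b)
  replicate(B, {
    starts <- sample.int(n - b + 1, k, replace = TRUE)  # random block starts
    xb <- unlist(lapply(starts, function(s) x[s:(s + b - 1)]))[1:n]
    mean(xb)
  })
}
bm <- block_boot_mean(x, b = 20)
sd(bm)                         # bootstrap standard error of the mean
quantile(bm, c(0.025, 0.975))  # simple percentile confidence interval
```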
Introduction to Statistical Decision Theory: Utility Theory and Causal Analysis provides the theoretical background to approach decision theory from a statistical perspective. It covers both traditional approaches, in terms of value theory and expected utility theory, and recent developments, in terms of causal inference. The book is specifically designed to appeal to students and researchers who intend to acquire a knowledge of statistical science based on decision theory.

Features:
* Covers approaches for making decisions under certainty, risk, and uncertainty
* Illustrates expected utility theory and its extensions
* Describes approaches to elicit the utility function
* Reviews classical and Bayesian approaches to statistical inference based on decision theory
* Discusses the role of causal analysis in statistical decision theory
The advent of "Big Data" has brought with it a rapid diversification of data sources, requiring analysis that accounts for the fact that these data have often been generated and recorded for different reasons. Data integration involves combining data residing in different sources to enable statistical inference, or to generate new statistical data for purposes that cannot be served by each source on its own. This can yield significant gains for scientific as well as commercial investigations. However, valid analysis of such data should allow for the additional uncertainty due to entity ambiguity, whenever it is not possible to state with certainty that the integrated source is the target population of interest. Analysis of Integrated Data aims to provide a solid theoretical basis for this statistical analysis in three generic settings of entity ambiguity: statistical analysis of linked datasets that may contain linkage errors; datasets created by a data fusion process, where joint statistical information is simulated using the information in marginal data from non-overlapping sources; and estimation of target population size when target units are either partially or erroneously covered in each source. Covers a range of topics under an overarching perspective of data integration. Focuses on statistical uncertainty and inference issues arising from entity ambiguity. Features state of the art methods for analysis of integrated data. Identifies the important themes that will define future research and teaching in the statistical analysis of integrated data. Analysis of Integrated Data is aimed primarily at researchers and methodologists interested in statistical methods for data from multiple sources, with a focus on data analysts in the social sciences, and in the public and private sectors.
The beginning of the age of artificial intelligence and machine learning has created new challenges and opportunities for data analysts, statisticians, mathematicians, econometricians, computer scientists and many others. At the root of these techniques are algorithms and methods for clustering and classifying different types of large datasets, including time series data. Time Series Clustering and Classification includes relevant developments on observation-based, feature-based and model-based traditional and fuzzy clustering methods, feature-based and model-based classification methods, and machine learning methods. It presents a broad and self-contained overview of techniques for both researchers and students.

Features:
* Provides an overview of the methods and applications of pattern recognition for time series
* Covers a wide range of techniques, including unsupervised and supervised approaches
* Includes a range of real examples from medicine, finance, environmental science, and more
* R and MATLAB code, and relevant datasets, are available on a supplementary website
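A toy base-R example (not from the book) of the feature-based approach mentioned above: summarize each series by its autocorrelations, then cluster those feature vectors hierarchically.

```r
# Feature-based time series clustering sketch: ACF features plus hclust.
set.seed(5)
sims <- c(replicate(5, arima.sim(list(ar = 0.8),  n = 200), simplify = FALSE),
          replicate(5, arima.sim(list(ar = -0.5), n = 200), simplify = FALSE))
feats <- t(sapply(sims, function(s) acf(s, lag.max = 10, plot = FALSE)$acf[-1]))
hc <- hclust(dist(feats))  # hierarchical clustering on the ACF feature vectors
cutree(hc, k = 2)          # separates the two generating processes
```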
This book aims to help the reader better understand the importance of data analysis in project management. Moreover, it provides guidance by showing tools, methods, techniques and lessons learned on how to better utilize the data gathered from the projects. First and foremost, insight into the bridge between data analytics and project management aids practitioners looking for ways to maximize the practical value of data procured. The book equips organizations with the know-how necessary to adapt to a changing workplace dynamic through key lessons learned from past ventures. The book's integrated approach to investigating both fields enhances the value of research findings.
Data Stewardship for Open Science: Implementing FAIR Principles has been written with the intention of making scientists, funders, and innovators in all disciplines and at all stages of their professional activities broadly aware of the need, complexity, and challenges associated with open science, modern science communication, and data stewardship. The FAIR principles are used as a guide throughout the text, and this book should leave experimentalists consciously incompetent about data stewardship and motivated to respect data stewards as representatives of a new profession, while possibly motivating others to consider a career in the field. The ebook, available at no additional cost when you buy the paperback, will be updated every 6 months on average (provided that significant updates are needed or available). Readers will have the opportunity to contribute material towards these updates, and to develop their own data management plans, via the free Data Stewardship Wizard.
The composition of portfolios is one of the most fundamental and important methods in financial engineering, used to control the risk of investments. This book provides a comprehensive overview of statistical inference for portfolios and their various applications. A variety of asset processes are introduced, including non-Gaussian stationary processes, nonlinear processes, and non-stationary processes, and the book provides a framework for statistical inference using local asymptotic normality (LAN). The approach is generalized for portfolio estimation, so that many important problems can be covered. This book can primarily be used as a reference by researchers in statistics, mathematics, finance, econometrics, and genomics. It can also be used as a textbook by senior undergraduate and graduate students in these fields.
Despite the unobserved components model (UCM) having many advantages over more popular forecasting techniques based on regression analysis, exponential smoothing, and ARIMA, the UCM is not well known among practitioners outside the academic community. Time Series Modelling with Unobserved Components rectifies this deficiency by giving a practical overview of the UCM approach, covering some theoretical details, several applications, and the software for implementing UCMs. The book's first part discusses introductory time series and prediction theory. Unlike most other books on time series, this text includes a chapter on prediction at the beginning because the problem of predicting is not limited to the field of time series analysis. The second part introduces the UCM, the state space form, and related algorithms. It also provides practical modeling strategies to build and select the UCM that best fits the needs of time series analysts. The third part presents real-world applications, with a chapter focusing on business cycle analysis and the construction of band-pass filters using UCMs. The book also reviews software packages that offer ready-to-use procedures for UCMs as well as systems popular among statisticians and econometricians that allow general estimation of models in state space form. This book demonstrates the numerous benefits of using UCMs to model time series data. UCMs are simple to specify, their results are easy to visualize and communicate to non-specialists, and their forecasting performance is competitive. Moreover, various types of outliers can easily be identified, missing values are effortlessly managed, and working contemporaneously with time series observed at different frequencies poses no problem.
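Base R itself offers a ready-made entry point to this class of models: StructTS fits simple structural (unobserved components) models by maximum likelihood. A minimal illustration, using the built-in Nile series rather than anything from the book:

```r
# Fit a local level model (the simplest UCM) with base R's StructTS.
fit <- StructTS(Nile, type = "level")
fit$coef                            # estimated level and observation variances

plot(Nile)
lines(fitted(fit), col = "red")     # filtered estimate of the underlying level

pred <- predict(fit, n.ahead = 10)  # forecasts with standard errors
pred$pred
```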
Winner of the 2017 De Groot Prize awarded by the International Society for Bayesian Analysis (ISBA).

A relatively new area of research, adversarial risk analysis (ARA) informs decision making when there are intelligent opponents and uncertain outcomes. Adversarial Risk Analysis develops methods for allocating defensive or offensive resources against intelligent adversaries. Many examples throughout illustrate the application of the ARA approach to a variety of games and strategic situations.

* Focuses on ARA, a recent subfield of decision analysis
* Compares ideas from decision theory and game theory
* Uses multi-agent influence diagrams (MAIDs) throughout to help readers visualize complex information structures
* Applies the ARA approach to simultaneous games, auctions, sequential games, and defend-attack games
* Contains an extended case study based on a real application in railway security, which provides a blueprint for how to perform ARA in similar security situations
* Includes exercises at the end of most chapters, with selected solutions at the back of the book

The book shows decision makers how to build Bayesian models for the strategic calculation of their opponents, enabling decision makers to maximize their expected utility or minimize their expected loss. This new approach to risk analysis asserts that analysts should use Bayesian thinking to describe their beliefs about an opponent's goals, resources, optimism, and type of strategic calculation, such as minimax and level-k thinking. Within that framework, analysts then solve the problem from the perspective of the opponent while placing subjective probability distributions on all unknown quantities. This produces a distribution over the actions of the opponent and enables analysts to maximize their expected utilities.
You may like...
* Design and Analysis of Time Series… by Richard McCleary, David McDowall, … (Hardcover, R3,286)
* The Oxford Handbook of the Economics of… by Yann Bramoulle, Andrea Galeotti, … (Hardcover, R5,455)
* Introduction to Computational Economics… by Hans Fehr, Fabian Kindermann (Hardcover, R4,258)
* Macroeconomics and the Real World… by Roger E. Backhouse, Andrea Salanti (Hardcover, R4,479)
* Operations And Supply Chain Management by David Collier, James Evans (Hardcover)
* Qualitative Techniques for Workplace… by Manish Gupta, Musarrat Shaheen, … (Hardcover, R5,332)
* Quantitative statistical techniques by Swanepoel, Vivier, … (Paperback, R751)
* Operations and Supply Chain Management by James Evans, David Collier (Hardcover)
* Linear and Non-Linear Financial… by Mehmet Kenan Terzioglu, Gordana Djurovic (Hardcover, R3,581)