This book develops a machine-learning framework for predicting economic growth. It can also be considered a primer for using machine learning (also known as data mining or data analytics) to answer economic questions. While machine learning itself is not a new idea, advances in computing technology combined with a dawning realization of its applicability to economic questions make it a new tool for economists.
Institutions are the formal or informal 'rules of the game' that facilitate economic, social, and political interactions. These include such things as legal rules, property rights, constitutions, political structures, and norms and customs. The main theoretical insights from Austrian economics regarding private property rights and prices, entrepreneurship, and spontaneous order mechanisms play a key role in advancing institutional economics. The Austrian economics framework provides an understanding of which institutions matter for growth, how they matter, and how they emerge and can change over time. Specifically, Austrians have contributed significantly to the areas of institutional stickiness and informal institutions, self-governance and self-enforcing contracts, institutional entrepreneurship, and the political infrastructure for development.
This volume presents classical results of the theory of enlargement of filtration. The focus is on the behavior of martingales with respect to the enlarged filtration and related objects. The study is conducted in various contexts, including immersion, progressive enlargement with a random time, and initial enlargement with a random variable. The aim of this book is to collect the main mathematical results (with proofs) previously spread among numerous papers, a great part of which is available only in French. Many examples and applications to finance, in particular to credit risk modelling and the study of asymmetric information, are provided to illustrate the theory. A detailed summary of further connections and applications is given in bibliographic notes, which enable readers to deepen their study of the topic. This book fills a gap in the literature and serves as a guide for graduate students and researchers interested in the role of information in financial mathematics and econometrics. A basic knowledge of the general theory of stochastic processes is assumed as a prerequisite.
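One classical result of the type collected here, stated informally in standard notation (our notation, not necessarily the book's), is the Jeulin-Yor decomposition for progressive enlargement:

```latex
% A sketch of the Jeulin--Yor theorem in standard notation;
% regularity conditions are omitted.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $\tau$ be a random time, $Z_t = \mathbb{P}(\tau > t \mid \mathcal{F}_t)$
its Az\'ema supermartingale with Doob--Meyer decomposition $Z = m - A$, and
$\mathbb{G}$ the progressive enlargement of $\mathbb{F}$ by $\tau$. For an
$\mathbb{F}$-local martingale $M$, the stopped process
\begin{equation*}
  M_{t \wedge \tau}
    - \int_0^{t \wedge \tau} \frac{\mathrm{d}\langle M, m\rangle_s}{Z_{s-}}
\end{equation*}
is a $\mathbb{G}$-local martingale.
\end{document}
```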
This is the first textbook designed to teach statistics to students in aviation courses. All examples and exercises are grounded in an aviation context, including flight instruction, air traffic control, airport management, and human factors. Structured in six parts, the book covers the key foundational topics in descriptive and inferential statistics, including hypothesis testing, confidence intervals, z and t tests, correlation, regression, ANOVA, and chi-square. In addition, this book promotes both procedural knowledge and conceptual understanding. Detailed, guided examples are presented from the perspective of conducting a research study. Each analysis technique is clearly explained, enabling readers to understand, carry out, and report results correctly. Students are further supported by a range of pedagogical features in each chapter, including objectives, a summary, and a vocabulary check. Digital supplements comprise downloadable data sets and short video lectures explaining key concepts. Instructors also have access to PPT slides and an instructor’s manual that consists of a test bank with multiple-choice exams, exercises with data sets, and solutions. This is the ideal statistics textbook for aviation courses globally, especially in aviation statistics, research methods in aviation, human factors, and related areas.
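As a hedged illustration of one technique in this list, here is a two-sample t test in Python; the scenario and numbers are invented, not taken from the book:

```python
# Hypothetical two-sample t test: do two groups of student pilots differ
# in landing error? All data below are invented for illustration only.
from scipy import stats

simulator_group = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 11.9, 10.8]
traditional_group = [13.5, 12.2, 14.1, 11.9, 13.8, 12.7, 14.4, 13.1]

# Independent two-sample t test (equal variances assumed by default)
t_stat, p_value = stats.ttest_ind(simulator_group, traditional_group)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```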
"The level is appropriate for an upper-level undergraduate or graduate-level statistics major. Sampling: Design and Analysis (SDA) will also benefit a non-statistics major with a desire to understand the concepts of sampling from a finite population. A student with patience to delve into the rigor of survey statistics will gain even more from the content that SDA offers. The updates to SDA have potential to enrich traditional survey sampling classes at both the undergraduate and graduate levels. The new discussions of low response rates, non-probability surveys, and internet as a data collection mode hold particular value, as these statistical issues have become increasingly important in survey practice in recent years… I would eagerly adopt the new edition of SDA as the required textbook." (Emily Berg, Iowa State University)
Most academic and policy commentary represents adverse selection as a severe problem in insurance, which should always be deprecated, avoided or minimised. This book gives a contrary view. It details the exaggeration of adverse selection in insurers' rhetoric and insurance economics, and presents evidence that in many insurance markets, adverse selection is weaker than most commentators suggest. A novel arithmetical argument shows that from a public policy perspective, 'weak' adverse selection can be a good thing. This is because a degree of adverse selection is needed to maximise 'loss coverage', the expected fraction of the population's losses which is compensated by insurance. This book will be valuable for those interested in public policy arguments about insurance and discrimination: academics (in economics, law and social policy), policymakers, actuaries, underwriters, disability activists, geneticists and other medical professionals.
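To make the arithmetical flavor of the argument concrete, here is a toy calculation with invented numbers (a sketch in the spirit of the book's 'loss coverage' concept, not its actual figures):

```python
# Invented toy numbers illustrating 'loss coverage': a degree of adverse
# selection can raise the share of population losses that insurance pays.
low_n, low_p = 800, 0.01    # low-risk individuals and their loss probability
high_n, high_p = 200, 0.04  # high-risk individuals and their loss probability

population_losses = low_n * low_p + high_n * high_p  # expected losses: 16.0

def loss_coverage(low_insured, high_insured):
    """Expected insured losses as a fraction of expected population losses."""
    covered = low_insured * low_p + high_insured * high_p
    return covered / population_losses

# Risk-rated premiums: half of each group insures (500 policies in force)
print(loss_coverage(400, 100))  # 0.5

# Pooled premium with adverse selection: fewer low risks and more high
# risks insure (450 policies), yet a larger share of losses is covered.
print(loss_coverage(300, 150))  # 0.5625
```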
Explosive growth in computing power has made Bayesian methods for infinite-dimensional models - Bayesian nonparametrics - a nearly universal framework for inference, finding practical use in numerous subject areas. Written by leading researchers, this authoritative text draws on theoretical advances of the past twenty years to synthesize all aspects of Bayesian nonparametrics, from prior construction to computation and large sample behavior of posteriors. Because understanding the behavior of posteriors is critical to selecting priors that work, the large sample theory is developed systematically, illustrated by various examples of model and prior combinations. Precise sufficient conditions are given, with complete proofs, that ensure desirable posterior properties and behavior. Each chapter ends with historical notes and numerous exercises to deepen and consolidate the reader's understanding, making the book valuable for both graduate students and researchers in statistics and machine learning, as well as in application areas such as econometrics and biostatistics.
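For a flavor of prior construction, here is a minimal Python sketch of the stick-breaking representation of Dirichlet process weights, one of the standard constructions in this field; the truncation level and parameters are illustrative:

```python
# Truncated stick-breaking construction of Dirichlet process weights:
# w_k = v_k * prod_{j<k} (1 - v_j) with v_j ~ Beta(1, alpha).
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, K):
    """Return K truncated stick-breaking weights for DP(alpha)."""
    v = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining

weights = stick_breaking(alpha=2.0, K=20)
atoms = rng.normal(0.0, 1.0, size=20)  # atom locations from a N(0,1) base measure
print(weights.sum())  # approaches 1 as the truncation level K grows
```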
One of the major problems of macroeconomic theory is the way in which people exchange goods in decentralized market economies. There are major disagreements among macroeconomists regarding tools to influence desired outcomes. Since mainstream efficient market theory fails to provide an internally coherent framework, there is a need for an alternative theory. The book provides an innovative approach to the analysis of agent-based models, populated by heterogeneous and interacting agents, in the field of financial fragility. The text is divided into two parts; the first presents analytical developments of stochastic aggregation and macro-dynamics inference methods. The second part introduces macroeconomic models of financial fragility for complex systems populated by heterogeneous and interacting agents. The concepts of financial fragility and macroeconomic dynamics are explained in detail in separate chapters. The statistical physics approach is applied to explain theories of macroeconomic modelling and inference.
Statistics for Business is meant as a textbook for students in business, computer science, bioengineering, environmental technology, and mathematics. In recent years, business statistics has come to be used widely for decision making in business endeavours. This book emphasizes statistical applications, statistical model building, and manual solution methods. Special features: the text is prepared on a "self-taught" basis; for most of the methods, the required algorithm is clearly explained using flow-charting methodology; more than 200 solved problems are provided; and more than 175 end-of-chapter exercises with answers are provided. This allows teachers ample flexibility in adapting the textbook to their individual class plans. This textbook is meant for beginners and advanced learners alike as a text in Statistics for Business or Applied Statistics for undergraduate and graduate students.
Davidson and MacKinnon have written an outstanding textbook for graduates in econometrics, covering both basic and advanced topics and using geometrical proofs throughout for clarity of exposition. The book offers a unified theoretical perspective, and emphasizes the practical applications of modern theory.
Originally published in 1931, this book was written to provide actuarial students with a guide to mathematics, with information on elementary trigonometry, finite differences, summation, differential and integral calculus, and probability. Examples are included throughout. This book will be of value to anyone with an interest in actuarial practice and its relationship with aspects of mathematics.
This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, Minitab, and SAS. Every chapter engages the reader with data on individual stocks, stock indices, options, and futures. One studies and uses statistics to learn how to study, analyze, and understand a data set of particular interest. Some of the more popular statistical programs developed to analyze data sets with statistical and computational methods are SAS, SPSS, and Minitab. Of those, we look at Minitab and SAS in this textbook. One of the main reasons to use Minitab is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel to do statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities extend to about 90 percent of the statistical analysis done in the business world. We demonstrate much of our statistical analysis using Excel and double-check the analysis and outcomes using Minitab and SAS, which are also helpful for analytical methods that are not possible or practical in Excel.
Statistical Programming in SAS, Second Edition provides a foundation for programming to implement statistical solutions using SAS, a system that has been used to solve data-analytic problems for more than 40 years. The author includes motivating examples to inspire readers to generate programming solutions. Upper-level undergraduates, beginning graduate students, and professionals involved in generating programming solutions for data-analytic problems will benefit from this book. The ideal reader has some background in regression modeling and introductory experience with computer programming. The coverage of statistical programming in the second edition includes: getting data into the SAS system, engineering new features, and formatting variables; writing readable and well-documented code; structuring, implementing, and debugging programs; creating solutions to novel problems; combining data sources, extracting parts of data sets, and reshaping data sets as needed for other analyses; generating general solutions using macros; customizing output; producing insight-inspiring data visualizations; parsing, processing, and analyzing text; and programming solutions using matrices and connecting SAS with R. The topics covered are part of both the base and certification exams.
Originally published in 1954, on behalf of the National Institute of Economic and Social Research, this book presents a general review of British economic statistics in relation to the uses made of them for policy purposes. The text begins with an examination, in general terms, of the ways in which statistics can help in guiding or assessing policy, covering housing, coal, the development areas, agricultural price-fixing, the balance of external payments and the balance of the economy. The problems of statistical application are then separately discussed under the headings of quality, presentation and availability, and organization. A full bibliography and reference table of principal British economic statistics are also included. This book will be of value to anyone with an interest in British economic history and statistics.
This volume deals with two complementary topics. On the one hand, it deals with the problem of determining the probability distribution of a positive compound random variable, a problem which appears in the banking and insurance industries, in many areas of operational research, and in reliability problems in the engineering sciences. On the other hand, the methodology proposed to solve such problems, which is based on an application of the maximum entropy method to invert the Laplace transform of the distributions, can be applied to many other problems. The book contains applications to a large variety of problems, including the problem of dependence of the sample data used to estimate empirically the Laplace transform of the random variable. Contents: Introduction; Frequency models; Individual severity models; Some detailed examples; Some traditional approaches to the aggregation problem; Laplace transforms and fractional moment problems; The standard maximum entropy method; Extensions of the method of maximum entropy; Superresolution in maxentropic Laplace transform inversion; Sample data dependence; Disentangling frequencies and decompounding losses; Computations using the maxentropic density; Review of statistical procedures.
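For orientation, the basic identity underlying the aggregation problem can be stated in standard notation (ours, not necessarily the book's):

```latex
% Laplace transform of a compound sum, the object whose inversion the
% book addresses by maximum entropy methods.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
For $S = X_1 + \dots + X_N$ with i.i.d.\ severities $X_i$ independent of
the frequency $N$, and $G_N(z) = \mathbb{E}[z^N]$ the probability
generating function of $N$,
\begin{equation*}
  \mathbb{E}\bigl[e^{-sS}\bigr]
    = G_N\bigl(\mathbb{E}\bigl[e^{-sX_1}\bigr]\bigr),
  \qquad \text{e.g. }
  \mathbb{E}\bigl[e^{-sS}\bigr]
    = \exp\bigl(\lambda\,(\mathbb{E}[e^{-sX_1}] - 1)\bigr)
  \text{ for } N \sim \mathrm{Poisson}(\lambda).
\end{equation*}
\end{document}
```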
Originally published in 1939, this book forms the first part of a two-volume series on the mathematics required for the examinations of the Institute of Actuaries, focusing on elementary differential and integral calculus. Miscellaneous examples are included at the end of the text. This book will be of value to anyone with an interest in actuarial science and mathematics.
This lively book lays out a methodology of confidence distributions and puts them through their paces. Among other merits, they lead to optimal combinations of confidence from different sources of information, and they can make complex models amenable to objective and indeed prior-free analysis for less subjectively inclined statisticians. The generous mixture of theory, illustrations, applications and exercises is suitable for statisticians at all levels of experience, as well as for data-oriented scientists. Some confidence distributions are less dispersed than their competitors. This concept leads to a theory of risk functions and comparisons for distributions of confidence. Neyman-Pearson type theorems leading to optimal confidence are developed and richly illustrated. Exact and optimal confidence distributions are the gold standard for inferred epistemic distributions. Confidence distributions and likelihood functions are intertwined, allowing prior distributions to be made part of the likelihood. Meta-analysis in likelihood terms is developed and taken beyond traditional methods, suiting it in particular to combining information across diverse data sources.
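The canonical introductory example, in standard notation rather than the book's own:

```latex
% The textbook example of a confidence distribution: a normal mean
% with known variance.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
For i.i.d.\ observations $X_1,\dots,X_n \sim N(\mu,\sigma^2)$ with
$\sigma$ known, the confidence distribution for $\mu$ given the observed
mean $\bar{x}$ is
\begin{equation*}
  C(\mu) = \Phi\!\left(\frac{\sqrt{n}\,(\mu - \bar{x})}{\sigma}\right),
\end{equation*}
whose quantiles $C^{-1}(\alpha) = \bar{x} + \sigma\,\Phi^{-1}(\alpha)/\sqrt{n}$
are the familiar one-sided confidence limits for $\mu$.
\end{document}
```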
Originally published in 1930, this book was formed from the content of three lectures delivered at London University during March of that year. The text provides a concise discussion of the relationship between theoretical statistics and actuarial science. This book will be of value to anyone with an interest in the actuarial profession, statistics and the history of finance.
This book presents the latest advances in the theory and practice of Marshall-Olkin distributions. These distributions have been increasingly applied in statistical practice in recent years, as they make it possible to describe interesting features of stochastic models like non-exchangeability, tail dependencies and the presence of a singular component. The book presents cutting-edge contributions in this research area, with a particular emphasis on financial and economic applications. It is recommended for researchers working in applied probability and statistics, as well as for practitioners interested in the use of stochastic models in economics. This volume collects selected contributions from the conference “Marshall-Olkin Distributions: Advances in Theory and Applications”, held in Bologna on October 2-3, 2013.
Originally published in 1932, as part of the Institute of Actuaries Students' Society's Consolidation of Reading Series, this book was written to provide actuarial students with a guide 'to bridging the gap between the strict mathematics of life contingencies and the severely practical problems of Life Office Valuations'. This book will be of value to anyone with an interest in the actuarial profession and the history of finance.
The main objective of this book is to develop a strategy and policy measures to enhance the formalization of the shadow economy in order to improve the competitiveness of the economy and contribute to economic growth; it explores these issues with special reference to Serbia. The size and development of the shadow economy in Serbia and other Central and Eastern European countries are estimated using two different methods (the MIMIC method and household-tax-compliance method). Micro-estimates are based on a special survey of business entities in Serbia, which for the first time allows us to explore the shadow economy from the perspective of enterprises and entrepreneurs. The authors identify the types of shadow economy at work in business entities, the determinants of shadow economy participation, and the impact of competition from the informal sector on businesses. Readers will learn both about the potential fiscal effects of reducing the shadow economy to the levels observed in more developed countries and the effects that formalization of the shadow economy can have on economic growth.
Inequality is a charged topic. Measures of income inequality rose in the USA in the 1990s to levels not seen since 1929 and gave rise to a suspicion, not for the first time, of a link between radical inequality and financial instability with a resulting crisis under capitalism. Professional macroeconomists have generally taken little interest in inequality because, within the parameters of traditional economic theory, the economy will stabilize itself at full employment. In addition, enlightened economists could enact stabilizing measures to manage any imbalances. The dominant voices among academic economists were unable to interpret the causal forces at work during both the Great Depression and the recent global financial crisis. In Inequality and Instability, James K. Galbraith argues that since there has been no serious work done on the macroeconomic effects of inequality, new sources of evidence are required. Galbraith offers for the first time a vast expansion of the capacity to calculate measures of inequality at both lower and higher levels of aggregation. Instead of measuring inequality as traditionally done, by country, Galbraith insists that to understand real differences that have real effects, inequality must be examined through both smaller and larger administrative units: sub-national levels within and between states and provinces, multinational continental economies, and the world. He points out that inequality can be captured by measures across administrative boundaries, using data on the more specific groups to which people belong. For example, in China, economic inequality reflects the difference in average income levels between city and countryside, or between coastal regions and the interior, and a simple ratio of averages would be an indicator of trends in inequality over the country as a whole. In a comprehensive presentation of this new method of using data, Inequality and Instability offers an unequaled look at the US economy and various global economies that was not accessible before. The result is a more sophisticated and accurate picture of inequality around the world, and of how inequality is one of the most basic sources of economic instability.
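As a rough illustration of measuring inequality across administrative groups rather than only within countries, here is a sketch with invented numbers; the between-group Theil statistic used here is one standard grouped measure, offered as a plausible formalization rather than the book's exact procedure:

```python
# Illustrative between-group inequality from grouped data (invented numbers):
# the between-group component of Theil's T, plus the simple ratio of averages.
import math

# Hypothetical (population share, mean income) pairs, e.g. coastal vs. interior
groups = [(0.3, 42_000), (0.7, 18_000)]

mean = sum(p * mu for p, mu in groups)  # overall mean income
theil_between = sum(p * (mu / mean) * math.log(mu / mean) for p, mu in groups)
print(f"overall mean = {mean:.0f}, between-group Theil = {theil_between:.4f}")

# The simple ratio of group averages mentioned above as a trend indicator
print(f"ratio of averages = {groups[0][1] / groups[1][1]:.2f}")
```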
Pioneered by American economist Paul Samuelson, revealed preference theory is based on the idea that the preferences of consumers are revealed in their purchasing behavior. Researchers in this field have developed complex and sophisticated mathematical models to capture the preferences that are 'revealed' through consumer choice behavior. This study of consumer demand and behavior is closely tied up with econometrics (especially nonparametric econometrics), where testing the validity of different theoretical models is an important aspect of research. The theory of revealed preference has a very long and distinguished tradition in economics, but there was no systematic presentation of the theory until now. This book deals with basic questions in economic theory, such as the relation between theory and data, and studies the situations in which empirical observations are consistent or inconsistent with some of the best known theories in economics.
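As a hedged sketch of the nonparametric testing this literature relies on, here is a minimal check of the Generalized Axiom of Revealed Preference (GARP), in the style of Afriat and Varian, on an invented two-good data set:

```python
# Minimal GARP check: are observed prices and chosen bundles consistent
# with maximization of some well-behaved utility? Data are invented.
import numpy as np

def satisfies_garp(P, X):
    """P, X: one row per observation (prices, chosen bundle)."""
    n = len(X)
    cost = P @ X.T                # cost[i, j] = p^i . x^j
    own = np.diag(cost)           # p^i . x^i, cost of the chosen bundle
    R = own[:, None] >= cost      # direct revealed preference: x^i R0 x^j
    for k in range(n):            # transitive closure (Floyd-Warshall style)
        R = R | (R[:, [k]] & R[[k], :])
    strict = own[:, None] > cost  # strict direct revealed preference: x^i P0 x^j
    # GARP violation: x^i revealed preferred to x^j while x^j P0 x^i
    return not np.any(R & strict.T)

prices = np.array([[1.0, 2.0], [2.0, 1.0]])
bundles = np.array([[3.0, 1.0], [1.0, 3.0]])
print(satisfies_garp(prices, bundles))  # True: this data set is rationalizable
```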
In recent years nonlinearities have gained increasing importance in economic and econometric research, particularly after the financial crisis and the economic downturn after 2007. This book contains theoretical, computational and empirical papers that incorporate nonlinearities in econometric models and apply them to real economic problems. It is intended to serve as an inspiration for researchers to take potential nonlinearities into account. Researchers should be wary of spuriously applying linear model types to problems with nonlinear features. It is indispensable to use the correct model type in order to avoid biased recommendations for economic policy.
This book deals with the application of wavelet and spectral methods for the analysis of nonlinear and dynamic processes in economics and finance. It reflects some of the latest developments in the area of wavelet methods applied to economics and finance. The topics include business cycle analysis, asset prices, financial econometrics, and forecasting. An introductory paper by James Ramsey, providing a personal retrospective of a decade's research on wavelet analysis, offers an excellent overview of the field.
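For a sense of the toolkit, here is a minimal sketch of a discrete wavelet decomposition of a synthetic 'cycle plus noise' series using the PyWavelets library; the signal and parameter choices are illustrative, not taken from the book:

```python
# Discrete wavelet decomposition of a synthetic series: separate a slow
# 'business cycle' component from noise. Signal and parameters are invented.
import numpy as np
import pywt

t = np.arange(512)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * t / 64) + 0.3 * rng.normal(size=512)

# Decompose into one approximation and four detail coefficient arrays
coeffs = pywt.wavedec(signal, 'db4', level=4)

# Reconstruct a smooth trend by zeroing out all detail levels
trend = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], 'db4')
print(len(coeffs), trend.shape)
```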