Econometrics
This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, Minitab, and SAS. Every chapter in this textbook engages the reader with data on individual stocks, stock indices, options, and futures. One studies and uses statistics to learn how to analyze and understand a data set of particular interest. Some of the more popular statistical programs that have been developed to apply statistical and computational methods to data sets are SAS, SPSS, and Minitab. Of those, we look at Minitab and SAS in this textbook. One of the main reasons to use Minitab is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel to do statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities extend to about 90 percent of statistical analysis done in the business world. We demonstrate much of our statistical analysis using Excel and double-check the analysis and outcomes using Minitab and SAS, which are also helpful for some analytical methods that are not possible or practical to do in Excel.
Statistical Programming in SAS, Second Edition, provides a foundation for programming to implement statistical solutions using SAS, a system that has been used to solve data-analytic problems for more than 40 years. The author includes motivating examples to inspire readers to generate programming solutions. Upper-level undergraduates, beginning graduate students, and professionals involved in generating programming solutions for data-analytic problems will benefit from this book. The ideal reader has some background in regression modeling and introductory experience with computer programming. The coverage of statistical programming in the second edition includes: getting data into the SAS system, engineering new features, and formatting variables; writing readable and well-documented code; structuring, implementing, and debugging programs that are well documented; creating solutions to novel problems; combining data sources, extracting parts of data sets, and reshaping data sets as needed for other analyses; generating general solutions using macros; customizing output; producing insight-inspiring data visualizations; parsing, processing, and analyzing text; and programming solutions using matrices and connecting SAS with R. These topics are part of both the base and certification exams.
Originally published in 1954, on behalf of the National Institute of Economic and Social Research, this book presents a general review of British economic statistics in relation to the uses made of them for policy purposes. The text begins with an examination, in general terms, of the ways in which statistics can help in guiding or assessing policy, covering housing, coal, the development areas, agricultural price-fixing, the balance of external payments and the balance of the economy. The problems of statistical application are then separately discussed under the headings of quality, presentation and availability, and organization. A full bibliography and reference table of principal British economic statistics are also included. This book will be of value to anyone with an interest in British economic history and statistics.
Originally published in 1939, this book forms the first part of a two-volume series on the mathematics required for the examinations of the Institute of Actuaries, focusing on elementary differential and integral calculus. Miscellaneous examples are included at the end of the text. This book will be of value to anyone with an interest in actuarial science and mathematics.
This volume deals with two complementary topics. On the one hand, the book deals with the problem of determining the probability distribution of a positive compound random variable, a problem which appears in the banking and insurance industries, in many areas of operational research, and in reliability problems in the engineering sciences. On the other hand, the methodology proposed to solve such problems, which is based on an application of the maximum entropy method to invert the Laplace transform of the distributions, can be applied to many other problems. The book contains applications to a large variety of problems, including the problem of dependence of the sample data used to estimate empirically the Laplace transform of the random variable. Contents: Introduction; Frequency models; Individual severity models; Some detailed examples; Some traditional approaches to the aggregation problem; Laplace transforms and fractional moment problems; The standard maximum entropy method; Extensions of the method of maximum entropy; Superresolution in maxentropic Laplace transform inversion; Sample data dependence; Disentangling frequencies and decompounding losses; Computations using the maxentropic density; Review of statistical procedures.
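To make the aggregation problem concrete, the following is a minimal Python sketch (an illustration, not taken from the book) that simulates a positive compound random variable S = X_1 + ... + X_N with a Poisson frequency and Gamma severities, and estimates its Laplace transform E[exp(-alpha S)] empirically; this transform is the quantity a maximum entropy method would then invert to recover the density.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_compound(n_sims=50_000, lam=3.0, shape=2.0, scale=1.5):
    """Simulate S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i ~ Gamma(shape, scale)."""
    counts = rng.poisson(lam, size=n_sims)
    return np.array([rng.gamma(shape, scale, size=k).sum() for k in counts])

def empirical_laplace(samples, alphas):
    """Estimate E[exp(-alpha * S)] for each alpha from the simulated losses."""
    return np.array([np.mean(np.exp(-a * samples)) for a in alphas])

S = simulate_compound()
alphas = np.linspace(0.1, 2.0, 8)
print(empirical_laplace(S, alphas).round(4))
```

The maxentropic inversion step developed in the book would take such transform values (or fractional moments derived from them) as its input.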
This lively book lays out a methodology of confidence distributions and puts them through their paces. Among other merits, they lead to optimal combinations of confidence from different sources of information, and they can make complex models amenable to objective and indeed prior-free analysis for less subjectively inclined statisticians. The generous mixture of theory, illustrations, applications and exercises is suitable for statisticians at all levels of experience, as well as for data-oriented scientists. Some confidence distributions are less dispersed than their competitors. This concept leads to a theory of risk functions and comparisons for distributions of confidence. Neyman-Pearson type theorems leading to optimal confidence are developed and richly illustrated. Exact and optimal confidence distributions are the gold standard for inferred epistemic distributions. Confidence distributions and likelihood functions are intertwined, allowing prior distributions to be made part of the likelihood. Meta-analysis in likelihood terms is developed and taken beyond traditional methods, suiting it in particular to combining information across diverse data sources.
Originally published in 1930, this book was formed from the content of three lectures delivered at London University during March of that year. The text provides a concise discussion of the relationship between theoretical statistics and actuarial science. This book will be of value to anyone with an interest in the actuarial profession, statistics and the history of finance.
This book presents the latest advances in the theory and practice of Marshall-Olkin distributions. These distributions have been increasingly applied in statistical practice in recent years, as they make it possible to describe interesting features of stochastic models like non-exchangeability, tail dependencies and the presence of a singular component. The book presents cutting-edge contributions in this research area, with a particular emphasis on financial and economic applications. It is recommended for researchers working in applied probability and statistics, as well as for practitioners interested in the use of stochastic models in economics. This volume collects selected contributions from the conference “Marshall-Olkin Distributions: Advances in Theory and Applications”, held in Bologna on October 2-3, 2013.
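As a small illustration of the singular component mentioned above, here is a minimal Python sketch (not from the volume) that simulates the classical bivariate Marshall-Olkin exponential distribution via its common-shock construction; the shared shock puts positive probability mass on the diagonal X = Y.

```python
import numpy as np

rng = np.random.default_rng(7)

def marshall_olkin_exponential(n, lam1=1.0, lam2=1.0, lam12=0.5):
    """Bivariate Marshall-Olkin exponential via common shocks.

    X = min(E1, E12), Y = min(E2, E12), where E1, E2, E12 are independent
    exponential shock times; the shared shock E12 creates P(X = Y) > 0.
    """
    e1 = rng.exponential(1.0 / lam1, n)
    e2 = rng.exponential(1.0 / lam2, n)
    e12 = rng.exponential(1.0 / lam12, n)
    return np.minimum(e1, e12), np.minimum(e2, e12)

x, y = marshall_olkin_exponential(100_000)
print("simulated P(X = Y):", np.mean(x == y))
print("theoretical       :", 0.5 / (1.0 + 1.0 + 0.5))  # lam12 / (lam1 + lam2 + lam12)
```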
Originally published in 1932, as part of the Institute of Actuaries Students' Society's Consolidation of Reading Series, this book was written to provide actuarial students with a guide 'to bridging the gap between the strict mathematics of life contingencies and the severely practical problems of Life Office Valuations'. This book will be of value to anyone with an interest in the actuarial profession and the history of finance.
The main objective of this book is to develop a strategy and policy measures to enhance the formalization of the shadow economy in order to improve the competitiveness of the economy and contribute to economic growth; it explores these issues with special reference to Serbia. The size and development of the shadow economy in Serbia and other Central and Eastern European countries are estimated using two different methods (the MIMIC method and household-tax-compliance method). Micro-estimates are based on a special survey of business entities in Serbia, which for the first time allows us to explore the shadow economy from the perspective of enterprises and entrepreneurs. The authors identify the types of shadow economy at work in business entities, the determinants of shadow economy participation, and the impact of competition from the informal sector on businesses. Readers will learn both about the potential fiscal effects of reducing the shadow economy to the levels observed in more developed countries and the effects that formalization of the shadow economy can have on economic growth.
Pioneered by American economist Paul Samuelson, revealed preference theory is based on the idea that the preferences of consumers are revealed in their purchasing behavior. Researchers in this field have developed complex and sophisticated mathematical models to capture the preferences that are 'revealed' through consumer choice behavior. This study of consumer demand and behavior is closely tied up with econometrics (especially nonparametric econometrics), where testing the validity of different theoretical models is an important aspect of research. The theory of revealed preference has a very long and distinguished tradition in economics, but there was no systematic presentation of the theory until now. This book deals with basic questions in economic theory, such as the relation between theory and data, and studies the situations in which empirical observations are consistent or inconsistent with some of the best known theories in economics.
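To give a flavor of how such consistency tests work in practice, here is a minimal Python sketch (an illustration only, not the book's own procedure) that checks a finite data set of prices and chosen bundles for violations of the Generalized Axiom of Revealed Preference (GARP).

```python
import numpy as np

def violates_garp(prices, bundles):
    """Return True if the observations violate GARP.

    prices, bundles: arrays of shape (T, n_goods); row t holds the price
    vector faced and the bundle chosen in observation t.
    """
    p = np.asarray(prices, float)
    x = np.asarray(bundles, float)
    T = len(p)
    expend = p @ x.T                 # expend[t, s] = cost of bundle s at prices t
    own = np.diag(expend)            # own[t] = p_t . x_t
    # Direct revealed preference: x_t R0 x_s iff p_t.x_s <= p_t.x_t
    R = expend <= own[:, None] + 1e-12
    # Transitive closure (Warshall's algorithm)
    for k in range(T):
        R = R | (R[:, [k]] & R[[k], :])
    # Strict direct preference: x_s P0 x_t iff p_s.x_t < p_s.x_s
    strict = expend < own[:, None] - 1e-12
    # GARP is violated if x_t is revealed preferred to x_s while x_s P0 x_t
    return bool(np.any(R & strict.T))

prices = np.array([[1.0, 2.0], [2.0, 1.0]])
bundles = np.array([[1.0, 2.0], [2.0, 1.0]])
print(violates_garp(prices, bundles))  # True: each chosen bundle is strictly
                                       # revealed preferred to the other
```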
In recent years nonlinearities have gained increasing importance in economic and econometric research, particularly after the financial crisis and the economic downturn after 2007. This book contains theoretical, computational and empirical papers that incorporate nonlinearities in econometric models and apply them to real economic problems. It is intended to serve as an inspiration for researchers to take potential nonlinearities into account. Researchers should be wary of spuriously applying linear model types to problems that include nonlinear features; it is indispensable to use the correct model type in order to avoid biased recommendations for economic policy.
This book deals with the application of wavelet and spectral methods for the analysis of nonlinear and dynamic processes in economics and finance. It reflects some of the latest developments in the area of wavelet methods applied to economics and finance. The topics include business cycle analysis, asset prices, financial econometrics, and forecasting. An introductory paper by James Ramsey, providing a personal retrospective of a decade's research on wavelet analysis, offers an excellent overview of the field.
Designed for a one-semester course, Applied Statistics for Business and Economics offers students in business and the social sciences an effective introduction to some of the most basic and powerful techniques available for understanding their world. Numerous interesting and important examples reflect real-life situations, stimulating students to think realistically in tackling these problems. Calculations can be performed using any standard spreadsheet package. To help with the examples, the author offers both actual and hypothetical databases on his website http://iwu.edu/~bleekley. The text explores ways to describe data and the relationships found in data. It covers basic probability tools, Bayes' theorem, sampling, estimation, and confidence intervals. The text also discusses hypothesis testing for one and two samples, contingency tables, goodness-of-fit, analysis of variance, and population variances. In addition, the author develops the concepts behind the linear relationship between two numeric variables (simple regression) as well as the potentially nonlinear relationships among more than two variables (multiple regression). The final chapter introduces classical time-series analysis and how it applies to business and economics. This text provides a practical understanding of the value of statistics in the real world. After reading the book, students will be able to summarize data in insightful ways using charts, graphs, and summary statistics as well as make inferences from samples, especially about relationships.
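As a taste of the simple-regression material described above, here is a minimal Python sketch (with hypothetical data, not taken from the author's databases) fitting a least-squares line to two numeric variables and reporting how much of the variation it explains.

```python
import numpy as np

# Hypothetical data: advertising spend (in $1,000s) and weekly sales (units)
ad_spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
sales = np.array([52.0, 61.0, 58.0, 68.0, 74.0, 79.0])

# Simple (ordinary least squares) regression: sales = intercept + slope * ad_spend
slope, intercept = np.polyfit(ad_spend, sales, 1)

# R-squared: share of the variation in sales explained by the fitted line
fitted = intercept + slope * ad_spend
r2 = 1 - np.sum((sales - fitted) ** 2) / np.sum((sales - sales.mean()) ** 2)

print(f"sales = {intercept:.1f} + {slope:.1f} * ad_spend,  R^2 = {r2:.2f}")
```

The same calculation can be reproduced in any standard spreadsheet package, in line with the book's approach.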
This publication provides insight into the agricultural sector. It illustrates new tendencies in agricultural economics and dynamics (interrelationships with other sectors in rural zones and multifunctionality) and the implications of the World Trade Organization negotiations for international trade in agricultural products. Due to environmental problems, budget availability, consumer preferences for food safety and pressure from the World Trade Organization, there are many changes in the agricultural sector. This book addresses those new developments and provides insights into possible future developments. Agriculture is an economic sector that is fundamental for the sustainable economic growth of every country. However, this sector has many particularities, namely those related to structural problems (many farms of reduced dimension, a frequent lack of vocational training among farmers, difficulties in bringing farmers together in associations and cooperatives), variations in production and prices over the year, and environmental problems derived from the use of pesticides and fertilizers.
In the era of Big Data our society is given the unique opportunity to understand the inner dynamics and behavior of complex socio-economic systems. Advances in the availability of very large databases, in capabilities for massive data mining, as well as progress in complex systems theory, multi-agent simulation and computational social science open the possibility of modeling phenomena that have never before been successfully captured. This contributed volume from the Perm Winter School addresses the problems of the mechanisms and statistics of socio-economic system evolution, with a focus on financial markets powered by high-frequency data analysis.
This book provides a detailed introduction to the theoretical and methodological foundations of production efficiency analysis using benchmarking. Two of the more popular methods of efficiency evaluation are Stochastic Frontier Analysis (SFA) and Data Envelopment Analysis (DEA), both of which are based on the concept of a production possibility set and its frontier. Depending on the assumed objectives of the decision-making unit, a Production, Cost, or Profit Frontier is constructed from observed data on input and output quantities and prices. While SFA uses different maximum likelihood estimation techniques to estimate a parametric frontier, DEA relies on mathematical programming to create a nonparametric frontier. Yet another alternative is the Convex Nonparametric Frontier, which is based on the assumed convexity of the production possibility set and creates a piecewise linear frontier consisting of a number of tangent hyperplanes. Three of the papers in this volume provide a detailed and relatively easy-to-follow exposition of the underlying theory from neoclassical production economics and offer step-by-step instructions on the appropriate model to apply in different contexts and how to implement them. Of particular appeal are the instructions on (i) how to write the codes for different SFA models on STATA, (ii) how to write a VBA Macro for repetitive solution of the DEA problem for each production unit on Excel Solver, and (iii) how to write the codes for the Nonparametric Convex Frontier estimation. The three other papers in the volume are primarily theoretical and will be of interest to PhD students and researchers hoping to make methodological and conceptual contributions to the field of nonparametric efficiency analysis.
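As a concrete illustration of the DEA side of the exposition, here is a minimal Python sketch (an illustration only, not the volume's STATA code or Excel Solver VBA macro) that solves the input-oriented, constant-returns-to-scale envelopment linear program for each production unit using scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores.

    X: (n_units, n_inputs) input quantities; Y: (n_units, n_outputs) outputs.
    For each unit o: minimize theta subject to
      sum_j lam_j * x_j <= theta * x_o,  sum_j lam_j * y_j >= y_o,  lam >= 0.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                  # decision variables: [theta, lam_1..lam_n]
        A_in = np.hstack([-X[[o]].T, X.T])           # lam @ x_i - theta * x_oi <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])  # -lam @ y_r <= -y_or
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n,
                      method="highs")
        scores[o] = res.fun
    return scores

# Toy data: four units using two inputs to produce one unit of output each
X = [[2, 5], [4, 3], [6, 6], [3, 4]]
Y = [[1], [1], [1], [1]]
print(dea_ccr_input(X, Y).round(3))
```

Units on the frontier receive a score of 1; inefficient units receive the proportional input contraction needed to reach it.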
Economists can use computer algebra systems to manipulate symbolic models, derive numerical computations, and analyze empirical relationships among variables. Maxima is an open-source multi-platform computer algebra system that rivals proprietary software. Maxima's symbolic and computational capabilities enable economists and financial analysts to develop a deeper understanding of models by allowing them to explore the implications of differences in parameter values, by providing numerical solutions to problems that would otherwise be intractable, and by providing graphical representations that can guide analysis. This book provides a step-by-step tutorial for using this program to examine the economic relationships that form the core of microeconomics in a way that complements traditional modeling techniques. Readers learn how to phrase the relevant analysis and how symbolic expressions, numerical computations, and graphical representations can be used to learn from microeconomic models. In particular, comparative statics analysis is facilitated. Little has been published on Maxima and its applications in economics and finance, and this volume will appeal to advanced undergraduates, graduate-level students studying microeconomics, academic researchers in economics and finance, economists, and financial analysts.
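To suggest the kind of comparative statics analysis the book carries out in Maxima, here is a rough analogue in Python using SymPy (not Maxima itself): it solves a linear supply-and-demand model symbolically and differentiates the equilibrium price with respect to a demand shifter.

```python
import sympy as sp

p, a, b, c, d = sp.symbols('p a b c d', positive=True)

# Linear demand Q_d = a - b*p and supply Q_s = c + d*p; solve for the market-clearing price.
eq_price = sp.solve(sp.Eq(a - b * p, c + d * p), p)[0]   # (a - c)/(b + d)

# Comparative statics: response of the equilibrium price to the demand shifter a.
dp_da = sp.simplify(sp.diff(eq_price, a))

print(eq_price)  # (a - c)/(b + d)
print(dp_da)     # 1/(b + d): stronger demand raises the equilibrium price
```

In Maxima the analogous steps would use its own solve and diff commands; the point is the workflow of symbolic solution followed by differentiation with respect to a parameter.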
From the Introduction: This volume is dedicated to the remarkable career of Professor Peter Schmidt and the role he has played in mentoring us, his PhD students. Peter's accomplishments are legendary among his students and the profession. Each of the papers in this Festschrift is a research work executed by a former PhD student of Peter's, from his days at the University of North Carolina at Chapel Hill to his time at Michigan State University. Most of the papers were presented at The Conference in Honor of Peter Schmidt, June 30 - July 2, 2011. The conference was largely attended by his former students and one current student, who traveled from as far as Europe and Asia to honor Peter. This was a conference to celebrate Peter's contribution to our contributions. By "our contributions" we mean the research papers that make up this Festschrift and the countless other publications by his students represented and not represented in this volume. Peter's students may have their families to thank for much that is positive in their lives. However, if we think about it, our professional lives would not be the same without the lessons and the approaches to decision making that we learned from Peter. We spent our days together at Peter's conference and the months since reminded of these aspects of our personalities and life goals that were enhanced, fostered, and nurtured by the very singular experiences we have had as Peter's students. We recognized in 2011 that it was unlikely we would all be together again to celebrate such a wonderful moment in ours and Peter's lives and pledged then to take full advantage of it. We did then, and we are now in the form of this volume.
Developed from the author's course on Monte Carlo simulation at Brown University, Monte Carlo Simulation with Applications to Finance provides a self-contained introduction to Monte Carlo methods in financial engineering. It is suitable for advanced undergraduate and graduate students taking a one-semester course or for practitioners in the financial industry. The author first presents the necessary mathematical tools for simulation, arbitrage-free option pricing, and the basic implementation of Monte Carlo schemes. He then describes variance reduction techniques, including control variates, stratification, conditioning, importance sampling, and cross-entropy. The text concludes with stochastic calculus and the simulation of diffusion processes. Only requiring some familiarity with probability and statistics, the book keeps much of the mathematics at an informal level and avoids technical measure-theoretic jargon to provide a practical understanding of the basics. It includes a large number of examples as well as MATLAB (R) coding exercises that are designed in a progressive manner so that no prior experience with MATLAB is needed.
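As a flavor of the variance reduction material, here is a minimal Python sketch (an illustration in Python rather than the book's MATLAB) that prices a European call under geometric Brownian motion by plain Monte Carlo and again using the discounted terminal stock price as a control variate, whose risk-neutral mean is known to equal the spot price.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative model and option parameters
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0
n = 200_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)

# Plain Monte Carlo estimate of the call price
plain = payoff.mean()

# Control variate: discounted terminal price, with known mean S0 under the risk-neutral measure
control = np.exp(-r * T) * ST
beta = np.cov(payoff, control)[0, 1] / np.var(control, ddof=1)
cv = payoff - beta * (control - S0)

print(f"plain MC        : {plain:.4f}  (std err {payoff.std(ddof=1) / np.sqrt(n):.4f})")
print(f"control variate : {cv.mean():.4f}  (std err {cv.std(ddof=1) / np.sqrt(n):.4f})")
```

Because the payoff is highly correlated with the terminal price, the control-variate estimator typically cuts the standard error substantially at essentially no extra cost.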
This ambitious book looks 'behind the model' to reveal how economists use formal models to generate insights into the economy. Drawing on recent work in the philosophy of science and economic methodology, the book presents a novel framework for understanding the logic of economic modeling. It also reveals the ways in which economic models can mislead rather than illuminate. Importantly, the book goes beyond purely negative critique, proposing a concrete program of methodological reform to better equip economists to detect potential mismatches between their models and the targets of their inquiry. Ranging across economics, philosophy, and social science methods, and drawing on a variety of examples, including the recent financial crisis, Behind the Model will be of interest to anyone who has wondered how economics works - and why it sometimes fails so spectacularly.
The individual risks faced by banks, insurers, and marketers are less well understood than aggregate risks such as market-price changes. But the risks incurred or carried by individual people, companies, insurance policies, or credit agreements can be just as devastating as macroevents such as share-price fluctuations. A comprehensive introduction, The Econometrics of Individual Risk is the first book to provide a complete econometric methodology for quantifying and managing this underappreciated but important variety of risk. The book presents a course in the econometric theory of individual risk illustrated by empirical examples. And, unlike other texts, it is focused entirely on solving the actual individual risk problems businesses confront today. Christian Gourieroux and Joann Jasiak emphasize the microeconometric aspect of risk analysis by extensively discussing practical problems such as retail credit scoring, credit card transaction dynamics, and profit maximization in promotional mailing. They address regulatory issues in sections on computing the minimum capital reserve for coverage of potential losses, and on the credit-risk measure CreditVar. The book will interest graduate students in economics, business, finance, and actuarial studies, as well as actuaries and financial analysts.
Many economic theories depend on the presence or absence of a unit root for their validity, and econometric and statistical theory undergoes considerable changes when unit roots are present. Knowledge of unit roots has thus become so important as to necessitate an extensive, compact, and nontechnical book on the subject. This book rests on that motivation and introduces the literature on unit roots in a comprehensive manner to both empirical and theoretical researchers in economics and other areas. By providing a clear, complete, and critical discussion of the unit root literature, In Choi covers a wide range of topics, including uniform confidence interval construction, unit root tests allowing structural breaks, mildly explosive processes, exuberance testing, fractionally integrated processes, seasonal unit roots, and panel unit root testing. Extensive, up to date, and readily accessible, this book is a comprehensive reference source on unit roots for both students and applied workers.
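For readers who want to see a unit root test in action, here is a minimal Python sketch (an illustration, not from the book) applying the augmented Dickey-Fuller test from statsmodels to a simulated random walk, which contains a unit root, and to a stationary AR(1) series, which does not.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
n = 500
eps = rng.standard_normal(n)

# Random walk: y_t = y_{t-1} + eps_t  (unit root present)
random_walk = np.cumsum(eps)

# Stationary AR(1): y_t = 0.5 * y_{t-1} + eps_t
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]

for name, series in [("random walk", random_walk), ("AR(1), phi=0.5", ar1)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name:15s}  ADF statistic = {stat:6.2f}   p-value = {pvalue:.3f}")
```

The null hypothesis of the test is that a unit root is present, so a large p-value for the random walk and a small one for the AR(1) series are the expected outcomes.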
Three leading experts have produced a landmark work based on a set of working papers published by the Center for Operations Research and Econometrics (CORE) at the Universite Catholique de Louvain in 1994 under the title 'Repeated Games', which holds almost mythic status among game theorists. Jean-Francois Mertens, Sylvain Sorin and Shmuel Zamir have significantly elevated the clarity and depth of presentation with many results presented at a level of generality that goes far beyond the original papers - many written by the authors themselves. Numerous results are new, and many classic results and examples are not to be found elsewhere. Most remain state of the art in the literature. This book is full of challenging and important problems that are set up as exercises, with detailed hints provided for their solutions. A new bibliography traces the development of the core concepts up to the present day.
You may like...
Operations and Supply Chain Management, by James Evans, David Collier (Hardcover)
The Oxford Handbook of the Economics of…, by Yann Bramoulle, Andrea Galeotti, … (Hardcover), R5,455
Introductory Econometrics - A Modern…, by Jeffrey Wooldridge (Hardcover)
Design and Analysis of Time Series…, by Richard McCleary, David McDowall, … (Hardcover), R3,286