This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data, using Excel, Minitab, and SAS. Every chapter engages the reader with data on individual stocks, stock indices, options, and futures. Statistics is studied and used to analyze and understand a data set of particular interest. Among the more popular statistical programs developed to analyze data sets are SAS, SPSS, and Minitab; of those, this textbook covers Minitab and SAS. One of the main reasons to use Minitab is that it is the easiest to use of the popular statistical programs. SAS is covered because it is the leading statistical package used in industry. The much less costly and ubiquitous Microsoft Excel is also used for statistical analysis, as its benefits have become widely recognized in the academic world and its analytical capabilities extend to about 90 percent of the statistical analysis done in the business world. Much of the statistical analysis is demonstrated using Excel and double-checked using Minitab and SAS, which are also helpful for analytical methods that are not possible or practical to do in Excel.
This text takes an integrated approach, placing emphasis on modeling and application rather than on pure statistical technique. This emphasis allows readers to learn how to solve business problems rather than mathematical equations, and prepares them for their role as decision makers. All models and analyses in the book use Excel, so readers can make decisions without completing difficult calculations by hand. The book is also accompanied by KaddStat, an Excel add-in that makes it easy to run complex statistical tests in Excel.
This volume deals with two complementary topics. On the one hand, the book deals with the problem of determining the probability distribution of a positive compound random variable, a problem which appears in the banking and insurance industries, in many areas of operational research, and in reliability problems in the engineering sciences. On the other hand, the methodology proposed to solve such problems, which is based on an application of the maximum entropy method to invert the Laplace transform of the distributions, can be applied to many other problems. The book contains applications to a large variety of problems, including the problem of dependence in the sample data used to estimate the Laplace transform of the random variable empirically. Contents: Introduction; Frequency models; Individual severity models; Some detailed examples; Some traditional approaches to the aggregation problem; Laplace transforms and fractional moment problems; The standard maximum entropy method; Extensions of the method of maximum entropy; Superresolution in maxentropic Laplace transform inversion; Sample data dependence; Disentangling frequencies and decompounding losses; Computations using the maxentropic density; Review of statistical procedures.
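As a minimal illustration of the object being studied (not the book's maximum entropy inversion method), the sketch below simulates a compound Poisson sum with exponential severities and checks its empirical Laplace transform against the closed form; the values of lam, beta, and s are arbitrary toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, beta = 3.0, 2.0      # assumed Poisson frequency and exponential severity rate
n_sims = 100_000

# Simulate the compound sum S = X_1 + ... + X_N, N ~ Poisson(lam), X_i ~ Exp(rate beta)
counts = rng.poisson(lam, n_sims)
S = np.array([rng.exponential(1.0 / beta, n).sum() for n in counts])

# Empirical Laplace transform at s, versus the compound Poisson closed form:
# E[exp(-s*S)] = exp(lam * (E[exp(-s*X)] - 1)), with E[exp(-s*X)] = beta / (beta + s)
s = 1.5
empirical = np.exp(-s * S).mean()
exact = np.exp(lam * (beta / (beta + s) - 1.0))
print(empirical, exact)   # the two should agree to a few decimals
```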
This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Unlike other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to practically handle violations of the classical linear regression assumptions, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences.
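For readers who want to see the mechanics outside of Stata, R, or SPSS, here is a hedged minimal sketch of a simple linear regression on simulated data, using Python's statsmodels (not one of the packages the book itself uses); the true coefficients are arbitrary:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + rng.normal(scale=0.8, size=200)  # true intercept 1.0, slope 0.5

X = sm.add_constant(x)        # add an intercept column
model = sm.OLS(y, X).fit()    # ordinary least squares
print(model.summary())         # coefficients, t tests, R-squared
```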
Social media has made charts, infographics and diagrams ubiquitous, and easier to share than ever. While such visualisations can better inform us, they can also deceive by displaying incomplete or inaccurate data, suggesting misleading patterns, or misinform by being poorly designed. Many of us are ill equipped to interpret the visuals that politicians, journalists, advertisers and even employers present each day, enabling bad actors to easily manipulate visuals to promote their own agendas. Public conversations are increasingly driven by numbers, and to make sense of them we must be able to decode and use visual information. By examining contemporary examples ranging from election-result infographics to global GDP maps and box-office record charts, How Charts Lie teaches us how to do just that.
Microeconometrics Using Stata, Second Edition is an invaluable reference for researchers and students interested in applied microeconometric methods. Like previous editions, this text covers all the classic microeconometric techniques, ranging from linear models to instrumental-variables regression to panel-data estimation to nonlinear models such as probit, tobit, Poisson, and choice models. Each of these discussions has been updated to show the most modern implementation in Stata, and many include additional explanation of the underlying methods. In addition, the authors introduce readers to performing simulations in Stata and then use simulations to illustrate methods in other parts of the book. They even teach you how to code your own estimators in Stata. The second edition is greatly expanded: the new material is so extensive that the text now comprises two volumes. In addition to the classics, the book now teaches recently developed econometric methods and the methods newly added to Stata. Specifically, the book includes entirely new chapters on: duration models; randomized control trials and exogenous treatment effects; endogenous treatment effects; models for endogeneity and heterogeneity, including finite mixture models, structural equation models, and nonlinear mixed-effects models; spatial autoregressive models; semiparametric regression; lasso for prediction and inference; and Bayesian analysis. Anyone interested in learning classic and modern econometric methods will find this the perfect companion. And those who apply these methods to their own data will return to this reference over and over as they need to implement the various techniques described in this book.
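As a small taste of one of the nonlinear models listed above, the following hedged sketch fits a probit by maximum likelihood on simulated data; it uses Python's statsmodels rather than Stata, and the data-generating values 0.3 and 0.8 are invented for the example:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
# Latent-variable probit DGP: y = 1 if 0.3 + 0.8*x + e > 0, with e ~ N(0, 1)
y = (0.3 + 0.8 * x + rng.normal(size=n) > 0).astype(int)

X = sm.add_constant(x)
probit = sm.Probit(y, X).fit(disp=0)
print(probit.params)   # estimates should be near (0.3, 0.8)
```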
Technical Analysis of Stock Trends helps investors make smart, profitable trading decisions by providing proven long- and short-term stock trend analysis. It gets right to the heart of effective technical trading concepts, explaining technical theory such as the Dow Theory, reversal patterns, consolidation formations, trends and channels, technical analysis of commodity charts, and advances in investment technology. It also includes a comprehensive guide to trading tactics, covering long and short goals, stock selection, charting, low- and high-risk approaches, trend recognition tools, balancing and diversifying the stock portfolio, application of capital, and risk management. This updated new edition includes patterns and modifiable charts that are tighter and more illustrative. Expanded material is also included on Pragmatic Portfolio Theory as a more elegant alternative to Modern Portfolio Theory, and a newer, simpler, and more powerful alternative to Dow Theory is presented. This book is the perfect introduction, giving you the knowledge and wisdom to craft long-term success.
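None of the book's specific chart patterns are reproduced here, but as a generic illustration of a trend-recognition tool, the sketch below computes a simple moving-average crossover on toy prices; the window lengths 3 and 5 are arbitrary:

```python
import numpy as np

# Toy closing prices; a real application would load market data
close = np.array([10, 10.2, 10.1, 10.5, 10.8, 10.6, 11.0, 11.2, 11.1, 11.4])

def sma(x, w):
    """Simple moving average with window w."""
    return np.convolve(x, np.ones(w) / w, mode='valid')

fast, slow = sma(close, 3), sma(close, 5)
fast = fast[len(fast) - len(slow):]   # align the two series on the same end dates

# Crossover rule: fast SMA above slow SMA suggests an uptrend
print(fast > slow)
```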
Spatial Econometrics provides a modern, powerful and flexible skillset to early career researchers interested in entering this rapidly expanding discipline. It articulates the principles and current practice of modern spatial econometrics and spatial statistics, combining rigorous presentation with unusual depth of coverage. Introducing and formalizing the principles of, and 'need' for, models which define spatial interactions, the book provides a comprehensive framework for almost every major facet of modern science. Subjects covered at length include spatial regression models, weighting matrices, estimation procedures and the complications associated with their use. The work particularly focuses on models of uncertainty and estimation under various complications relating to model specifications, data problems and tests of hypotheses, along with systems and panel data extensions, which are covered in exhaustive detail. Extensions discussing pre-test procedures and Bayesian methodologies are provided at length. Throughout, direct applications of spatial models are described in detail, with copious illustrative empirical examples demonstrating how readers might implement spatial analysis in research projects. Designed as a textbook and reference companion, every chapter concludes with a set of questions for formal or self-study. Finally, the book includes extensive supplementary material on large-sample theory, together with code in the R programming language, supporting early-career econometricians interested in implementing the statistical procedures covered.
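As a concrete illustration of one building block mentioned above, the spatial weighting matrix, here is a hedged sketch that row-standardises a toy contiguity matrix; the 4-region neighbour structure is invented for the example:

```python
import numpy as np

# Toy 4-region contiguity structure (assumed): 1 marks neighbouring regions
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Row-standardise so each row sums to one, the usual convention for W in
# spatial-lag models of the form y = rho * W y + X beta + e
W_std = W / W.sum(axis=1, keepdims=True)
print(W_std)
```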
NOW WITH NEW PROLOGUE ABOUT DEMYSTIFYING CORONAVIRUS NUMBERS, DONALD TRUMP AND WHY STATISTICS MATTER MORE THAN EVER 'The Number Bias combines vivid storytelling with authoritative analysis to deliver a warning about the way numbers can lead us astray - if we let them.' TIM HARFORD Even if you don't consider yourself a numbers person, you are a numbers person. The time has come to put numbers in their place. Not high up on a pedestal, or out on the curb, but right where they belong: beside words. It is not an overstatement to say that numbers dictate the way we live our lives. They tell us how we're doing at school, how much we weigh, who might win an election and whether the economy is booming. But numbers aren't as objective as they may seem; behind every number is a story. Yet politicians, businesses and the media often forget this - or use it for their own gain. Sanne Blauw travels the world to unpick our relationship with numbers and demystify our misguided allegiance, from Florence Nightingale using statistics to petition for better conditions during the Crimean War to the manipulation of numbers by the American tobacco industry and the ambiguous figures peddled during the EU referendum. Taking us from the everyday numbers that govern our health and wellbeing to the statistics used to wield enormous power and influence, The Number Bias counsels us to think more wisely. 'A beautifully accessible exploration of how numbers shape our lives, and the importance of accurately interpreting the statistics we are fed.' ANGELA SAINI, author of Superior
The design of trading algorithms requires sophisticated mathematical models backed up by reliable data. In this textbook, the authors develop models for algorithmic trading in contexts such as executing large orders, market making, targeting VWAP and other schedules, trading pairs or collections of assets, and executing in dark pools. These models are grounded in how the exchanges work, whether the algorithm is trading with better informed traders (adverse selection), and the type of information available to market participants at both ultra-high and low frequency. Algorithmic and High-Frequency Trading is the first book that combines sophisticated mathematical modelling, empirical facts and financial economics, taking the reader from basic ideas to cutting-edge research and practice. If you need to understand how modern electronic markets operate, what information provides a trading edge, and how other market participants may affect the profitability of the algorithms, then this is the book for you.
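As a minimal illustration of the VWAP benchmark mentioned above (not the book's execution models), the following sketch computes the volume-weighted average price from hypothetical trade data:

```python
import numpy as np

# Hypothetical intraday trades: prices and traded volumes
prices = np.array([100.0, 100.2, 99.9, 100.1])
volumes = np.array([500, 300, 700, 200])

# VWAP = sum(price * volume) / sum(volume); a VWAP-targeting algorithm
# tries to make its average execution price track this benchmark
vwap = (prices * volumes).sum() / volumes.sum()
print(round(vwap, 4))
```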
Originally published in 1939, this book forms the second part of a two-volume series on the mathematics required for the examinations of the Institute of Actuaries, focusing on finite differences, probability and elementary statistics. Miscellaneous examples are included at the end of the text. This book will be of value to anyone with an interest in actuarial science and mathematics.
A unique and comprehensive source of information, this book is the only international publication providing economists, planners, policy makers and business people with worldwide statistics on current performance and trends in the manufacturing sector. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial development and performance. It provides data which can be used to analyze patterns of growth and related long-term trends, structural change and industrial performance in individual industries. Statistics on employment patterns, wages, consumption and gross output and other key indicators are also presented. Contents: Introduction; Part I: Summary Tables (1.1 The Manufacturing Sector; 1.2 The Manufacturing Branches); Part II: Country Tables.
Quantile regression constitutes an ensemble of statistical techniques intended to estimate and draw inferences about conditional quantile functions. Median regression, as introduced in the 18th century by Boscovich and Laplace, is a special case. In contrast to conventional mean regression that minimizes sums of squared residuals, median regression minimizes sums of absolute residuals; quantile regression simply replaces symmetric absolute loss by asymmetric linear loss. Since its introduction in the 1970s by Koenker and Bassett, quantile regression has been gradually extended to a wide variety of data-analytic settings including time series, survival analysis, and longitudinal data. By focusing attention on local slices of the conditional distribution of response variables, it is capable of providing a more complete, more nuanced view of heterogeneous covariate effects. Applications of quantile regression can now be found throughout the sciences, including astrophysics, chemistry, ecology, economics, finance, genomics, medicine, and meteorology. Software for quantile regression is now widely available in all the major statistical computing environments. The objective of this volume is to provide a comprehensive review of recent developments of quantile regression methodology, illustrating its applicability in a wide range of scientific settings. The intended audience of the volume is researchers and graduate students across a diverse set of disciplines.
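In the standard notation (a worked statement of the loss function described above), quantile regression at quantile level τ minimizes the asymmetric linear "check" loss:

```latex
\rho_\tau(u) = u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
\qquad
\hat{\beta}(\tau) = \arg\min_{\beta} \sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^{\top}\beta\right).
```

Setting τ = 1/2 makes the loss proportional to the absolute residual, recovering median regression as the special case noted above.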
Discover the Benefits of Risk Parity Investing Despite recent progress in the theoretical analysis and practical applications of risk parity, many important fundamental questions still need to be answered. Risk Parity Fundamentals uses fundamental, quantitative, and historical analysis to address these issues, such as: What are the macroeconomic dimensions of risk in risk parity portfolios? What are the appropriate risk premiums in a risk parity portfolio? What are market environments in which risk parity might thrive or struggle? What is the role of leverage in a risk parity portfolio? An experienced researcher and portfolio manager who coined the term "risk parity," the author provides investors with a practical understanding of the risk parity investment approach. Investors will gain insight into the merit of risk parity as well as the practical and underlying aspects of risk parity investing.
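The book's own treatment is far richer, but as a commonly used naive approximation to risk parity, the sketch below weights assets inversely to their volatilities, ignoring correlations and leverage; the three volatility figures are invented:

```python
import numpy as np

# Assumed annualised volatilities for three asset classes (toy numbers):
# equities, bonds, commodities
vol = np.array([0.15, 0.06, 0.10])

# Naive risk parity: weight each asset inversely to its volatility so each
# contributes roughly equal risk (this ignores correlations and leverage)
w = (1 / vol) / (1 / vol).sum()
print(w.round(3))   # bonds receive the largest weight
```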
This volume presents original and up-to-date studies in unobserved components (UC) time series models from both theoretical and methodological perspectives. It also presents empirical studies where the UC time series methodology is adopted. Drawing on the intellectual influence of Andrew Harvey, the work covers three main topics: the theory and methodology for unobserved components time series models; applications of unobserved components time series models; and time series econometrics and estimation and testing. These types of time series models have seen wide application in economics, statistics, finance, climate change, engineering, biostatistics, and sports statistics. The volume effectively provides a key review of relevant research directions for UC time series econometrics and will be of interest to econometricians, time series statisticians, and practitioners (government, central banks, business) in time series analysis and forecasting, as well as to researchers and graduate students in statistics, econometrics, and engineering.
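As a hedged, minimal example of the simplest unobserved-components specification (the local level model, not any particular chapter's model), the following sketch fits y_t = mu_t + eps_t with a random-walk level using Python's statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
# Simulate a local-level series: random-walk trend plus observation noise
level = np.cumsum(rng.normal(scale=0.1, size=300))
y = level + rng.normal(scale=0.5, size=300)

# Local level model: y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t
model = sm.tsa.UnobservedComponents(y, level='local level')
res = model.fit(disp=False)
print(res.summary())   # estimated variances of the two disturbances
```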
This workbook consists of exercises taken from Likelihood-Based Inference in Cointegrated Vector Autoregressive Models by Soren Johansen, together with worked-out solutions.
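As a hedged illustration of the kind of procedure the workbook's exercises revolve around, the sketch below runs a Johansen trace test on two simulated cointegrated series using Python's statsmodels (the book and workbook themselves are not tied to this software):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(7)
# Two I(1) series sharing one common stochastic trend, hence cointegrated
trend = np.cumsum(rng.normal(size=500))
y1 = trend + rng.normal(scale=0.5, size=500)
y2 = 0.8 * trend + rng.normal(scale=0.5, size=500)

# Johansen trace test: det_order=0 includes a constant, k_ar_diff=1 lag in differences
res = coint_johansen(np.column_stack([y1, y2]), det_order=0, k_ar_diff=1)
print(res.lr1)   # trace statistics for rank 0 and rank <= 1
print(res.cvt)   # corresponding 90/95/99 percent critical values
```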
This is the first textbook designed to teach statistics to students in aviation courses. All examples and exercises are grounded in an aviation context, including flight instruction, air traffic control, airport management, and human factors. Structured in six parts, the text covers the key foundational topics in descriptive and inferential statistics, including hypothesis testing, confidence intervals, z and t tests, correlation, regression, ANOVA, and chi-square. In addition, this book promotes both procedural knowledge and conceptual understanding. Detailed, guided examples are presented from the perspective of conducting a research study. Each analysis technique is clearly explained, enabling readers to understand, carry out, and report results correctly. Students are further supported by a range of pedagogical features in each chapter, including objectives, a summary, and a vocabulary check. Digital supplements comprise downloadable data sets and short video lectures explaining key concepts. Instructors also have access to PPT slides and an instructor's manual that consists of a test bank with multiple choice exams, exercises with data sets, and solutions. This is the ideal statistics textbook for aviation courses globally, especially in aviation statistics, research methods in aviation, human factors, and related areas.
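As a minimal example of one technique covered (the independent-samples t test), here is a sketch in Python with invented aviation-flavoured data; the group means, spread, and sample sizes are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical example: landing-distance samples (metres) under two training methods
group_a = rng.normal(550, 30, size=25)
group_b = rng.normal(570, 30, size=25)

# Independent-samples t test of the null hypothesis of equal means
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```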
Who decides how official statistics are produced? Do politicians have control or are key decisions left to statisticians in independent statistical agencies? Interviews with statisticians in Australia, Canada, Sweden, the UK and the USA were conducted to get insider perspectives on the nature of decision making in government statistical administration. While the popular adage suggests there are 'lies, damned lies and statistics', this research shows that official statistics in liberal democracies are far from mistruths; they are consistently insulated from direct political interference. Yet, a range of subtle pressures and tensions exist that governments and statisticians must manage. The power over statistics is distributed differently in different countries, and this book explains why. Differences in decision-making powers across countries are the result of shifting pressures politicians and statisticians face to be credible, and the different national contexts that provide distinctive institutional settings for the production of government numbers.
This edited collection concerns nonlinear economic relations that involve time. It is divided into four broad themes that all reflect the work and methodology of Professor Timo Terasvirta, one of the leading scholars in the field of nonlinear time series econometrics. The themes are: testing for linearity and functional form; specification testing and estimation of nonlinear time series models in the form of smooth transition models; model selection and econometric methodology; and applications within the area of financial econometrics. All these research fields include contributions that represent the state of the art in econometrics, such as testing for neglected nonlinearity in neural network models, time-varying GARCH and smooth transition models, STAR models and common factors in volatility modeling, semi-automatic general-to-specific model selection for nonlinear dynamic models, high-dimensional data analysis for parametric and semi-parametric regression models with dependent data, commodity price modeling, financial analysts' earnings forecasts based on asymmetric loss functions, local Gaussian correlation and dependence for asymmetric return dependence, and the use of bootstrap aggregation to improve forecast accuracy. Each chapter represents original scholarly work and reflects the intellectual impact that Timo Terasvirta has had, and will continue to have, on the profession.
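As a small worked illustration of the smooth transition idea central to the models above, the sketch below evaluates the logistic transition function used in STAR-type models; the slope gamma = 5 and location c = 0 are arbitrary choices:

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Logistic transition function G(s; gamma, c) used in STAR models:
    G moves from 0 to 1 as the transition variable s passes the location c,
    with gamma controlling how abrupt the regime change is."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

s = np.linspace(-3, 3, 7)
print(logistic_transition(s, gamma=5.0, c=0.0).round(3))
```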
Meta-Regression Analysis in Economics and Business is the first text devoted to the meta-regression analysis (MRA) of economics and business research. The book provides a comprehensive guide to conducting systematic reviews of empirical economics and business research, identifying and explaining the best practices of MRA, and highlighting its problems and pitfalls. These statistical techniques are illustrated using actual data from four published meta-analyses of business and economic research: the effects of unions on productivity, the employment effects of the minimum wage, the value of a statistical life, and residential water demand elasticities. While meta-analysis in economics and business shares some features with meta-analysis in other disciplines, it faces its own particular challenges and types of research data. This volume guides new researchers from beginning to end, from the collection of research studies to the publication of their findings. This book will be of great interest to students and researchers in business, economics, marketing, management, and political science, as well as to policy makers.
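As a hedged sketch of the basic mechanics of an MRA (not any of the four published meta-analyses used in the book), the example below regresses simulated effect sizes on a moderator, weighting each study by its inverse variance:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
# Hypothetical meta-analysis data: one effect size and its standard error per
# primary study, plus a study characteristic (here, publication year)
n_studies = 40
se = rng.uniform(0.05, 0.3, n_studies)
year = rng.integers(1990, 2020, n_studies)
effect = 0.2 + 0.005 * (year - 2000) + rng.normal(scale=se)

# A basic MRA: regress effect sizes on the moderator, weighting by inverse variance
X = sm.add_constant(year - 2000)
wls = sm.WLS(effect, X, weights=1 / se**2).fit()
print(wls.params)   # intercept near 0.2, moderator coefficient near 0.005
```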
This book is intended to provide the reader with a firm conceptual and empirical understanding of basic information-theoretic econometric models and methods. Because most data are observational, practitioners work with indirect noisy observations and ill-posed econometric models in the form of stochastic inverse problems. Consequently, traditional econometric methods in many cases are not applicable for answering many of the quantitative questions that analysts wish to ask. After initial chapters deal with parametric and semiparametric linear probability models, the focus turns to solving nonparametric stochastic inverse problems. In succeeding chapters, a family of power divergence measure likelihood functions is introduced for a range of traditional and nontraditional econometric model problems. Finally, within either an empirical maximum likelihood or loss context, Ron C. Mittelhammer and George G. Judge suggest a basis for choosing a member of the divergence family.
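One common parameterization of the power divergence family referred to above is the Cressie-Read form (the notation here is an assumption, not necessarily the book's):

```latex
I(\mathbf{p}, \mathbf{q}; \gamma)
= \frac{1}{\gamma(1+\gamma)} \sum_{i=1}^{n} p_i
\left[ \left( \frac{p_i}{q_i} \right)^{\gamma} - 1 \right].
```

Limiting cases of γ recover familiar objectives; in particular, the two Kullback-Leibler (cross-entropy) directions arise as γ → 0 and γ → −1.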
In the future, as our society grows older, an increasing number of people will be confronted with Alzheimer's disease. Some will suffer from the illness themselves; others will see parents, relatives, their spouse, or a close friend afflicted by it. Even now, the psychological and financial burden caused by Alzheimer's disease is substantial, most of it borne by the patient and her family. Improving the situation for patients and their caregivers presents a challenge for societies and decision makers. Our work contributes to improving the decision-making situation concerning Alzheimer's disease. At a fundamental level, it addresses methodological aspects of the contingent valuation method and gives a holistic view of applying the contingent valuation method for use in policy. We show all stages of a contingent valuation study, beginning with the design, the choice of elicitation techniques and estimation methods for willingness-to-pay, the use of the results in a cost-benefit analysis, and finally the policy implications resulting from our findings. We do this by evaluating three possible programs dealing with Alzheimer's disease. The intended audience of this book is health economists interested in methodological problems of contingent valuation studies, people involved in health care decision making, planning, and priority setting, as well as people interested in Alzheimer's disease. We would like to thank the many people and institutions who have provided their help with this project.
This book contains an accessible discussion examining computationally intensive techniques and bootstrap methods, providing ways to improve the finite-sample performance of well-known asymptotic tests for regression models. The book uses the linear regression model as a framework for introducing simulation-based tests to help perform econometric analyses.
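In that spirit, here is a hedged minimal sketch (not a procedure from the book) of a residual bootstrap test of a zero-slope null in a linear regression, written in Python with NumPy only:

```python
import numpy as np

rng = np.random.default_rng(6)
n, B = 100, 999
x = rng.normal(size=n)
y = 1.0 + 0.4 * x + rng.normal(size=n)   # simulated data; true slope is 0.4
X = np.column_stack([np.ones(n), x])

def slope_t(y, X):
    """OLS slope t-statistic for the second column of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

t_obs = slope_t(y, X)

# Impose the null (slope = 0): the restricted model is y = mean + error,
# so resample the restricted residuals to build the null distribution of t
resid0 = y - y.mean()
t_boot = np.array([slope_t(y.mean() + rng.choice(resid0, n, replace=True), X)
                   for _ in range(B)])

# Bootstrap p-value: how often does the null world produce a |t| this large?
p = (1 + (np.abs(t_boot) >= abs(t_obs)).sum()) / (B + 1)
print(round(p, 3))
```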
"Family Spending" provides analysis of household expenditure broken down by age and income, household composition, socio-economic characteristics and geography. This report will be of interest to academics, policy makers, government and the general public. |
You may like...
RF / Microwave Circuit Design for… by Ulrich L. Rohde, Matthias Rudolph (Hardcover) R4,952
Wireless Public Safety Networks Volume 1… by Daniel Camara, Navid Nikaein (Hardcover) R2,674
Handbook of Research on Smart Technology… by J. Joshua Thomas, Ugo Fiore, … (Hardcover) R8,119
Wireless Sensor and Actuator Networks… by Roberto Verdone, Davide Dardari, … (Hardcover) R2,237
Purchasing And Supply Chain Management by Robert Handfield, Larry Giunipero, … (Hardcover)
The Former Yugoslavia's Diverse Peoples… by Matjaz Klemencic, Mitja Zagar (Hardcover) R2,296