Books > Business & Economics > Economics > Econometrics > Economic statistics
Most textbooks on regression focus on theory and the simplest of examples. Real statistical problems, however, are complex and subtle. This is not a book about the theory of regression. It is about using regression to solve real problems of comparison, estimation, prediction, and causal inference. Unlike other books, it focuses on practical issues such as sample size and missing data and on a wide range of goals and techniques. It jumps right into methods and computer code you can use immediately. Real examples and real stories from the authors' experience demonstrate what regression can do and what its limitations are, with practical advice for understanding assumptions and implementing methods for experiments and observational studies. The authors make a smooth transition to logistic regression and GLM. The emphasis is on computation in R and Stan rather than on derivations, with code available online. Graphics and presentation aid understanding of the models and model fitting.
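The regression workflow this blurb describes reduces, in its simplest form, to a least-squares fit. As a flavour of that, here is a minimal sketch in Python rather than the book's R and Stan, using the closed-form normal equations on made-up data (all names and numbers below are illustrative only, not from the book):

```python
# Ordinary least squares for y = a + b*x on a small made-up data set,
# using the closed-form solution rather than an optimizer.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope b = cov(x, y) / var(x); intercept a = mean(y) - b * mean(x).
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(round(a, 3), round(b, 3))  # intercept ~0.05, slope ~1.99
```

The same fit in R would be a one-liner, `lm(y ~ x)`; the point of the sketch is only to show what that call estimates.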
How the obsession with quantifying human performance threatens business, medicine, education, government, and the quality of our lives. Today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we've gone from measuring performance to fixating on measuring itself, and this tyranny of metrics now threatens the quality of our organizations and lives. In this brief, accessible, and powerful book, Jerry Muller uncovers the damage metrics are causing and shows how we can begin to fix the problem. Filled with examples from business, medicine, education, government, and other fields, the book explains why paying for measured performance doesn't work, why surgical scorecards may increase deaths, and much more. But Muller also shows that, when used as a complement to judgment based on personal experience, metrics can be beneficial, and he includes an invaluable checklist of when and how to use them. The result is an essential corrective to a harmful trend that increasingly affects us all.
A Hands-On Approach to Understanding and Using Actuarial Models Computational Actuarial Science with R provides an introduction to the computational aspects of actuarial science. Using simple R code, the book helps you understand the algorithms involved in actuarial computations. It also covers more advanced topics, such as parallel computing and C/C++ embedded codes. After an introduction to the R language, the book is divided into four parts. The first one addresses methodology and statistical modeling issues. The second part discusses the computational facets of life insurance, including life contingencies calculations and prospective life tables. Focusing on finance from an actuarial perspective, the next part presents techniques for modeling stock prices, nonlinear time series, yield curves, interest rates, and portfolio optimization. The last part explains how to use R to deal with computational issues of nonlife insurance. Taking a do-it-yourself approach to understanding algorithms, this book demystifies the computational aspects of actuarial science. It shows that even complex computations can usually be done without too much trouble. Datasets used in the text are available in an R package (CASdatasets).
Prepares readers to analyze data and interpret statistical results using the increasingly popular R more quickly than other texts, through lessR extensions that remove the need to program. By introducing R through lessR, readers learn how to organize data for analysis, read the data into R, and produce output without first mastering numerous functions and programming. Readers can select the necessary procedure and change the relevant variables without programming. Quick Starts introduce readers to the concepts and commands reviewed in the chapters. Margin notes define, illustrate, and cross-reference the key concepts; when readers encounter a term previously discussed, the margin notes identify the page number of its initial introduction. Scenarios highlight the use of a specific analysis, followed by the corresponding R/lessR input and an interpretation of the resulting output. Numerous examples of output from psychology, business, education, and other social sciences demonstrate how to interpret results, and worked problems help readers test their understanding. The www.lessRstats.com website features the lessR program; the book's two data sets, referenced in standard text and SPSS formats, so readers can practice using R/lessR by working through the text examples and worked problems; PDF slides for each chapter; solutions to the book's worked problems; links to R/lessR videos to help readers better understand the program; and more. New to this edition: upgraded functionality and data visualizations of the lessR package, which is now aesthetically equal to the ggplot2 R standard, and new features that replace and extend previous content, such as aggregating data with pivot tables via a simple lessR function call.
The Oxford Handbook of Panel Data examines new developments in the theory and applications of panel data. It includes basic topics like non-stationary panels, co-integration in panels, multifactor panel models, panel unit roots, measurement error in panels, incidental parameters and dynamic panels, spatial panels, nonparametric panel data, random coefficients, treatment effects, sample selection, count panel data, limited dependent variable panel models, unbalanced panel models with interactive effects, and influential observations in panel data. Contributors to the Handbook explore applications of panel data to a wide range of topics in economics, including health, labor, marketing, trade, productivity, and macro applications in panels. This Handbook is an informative and comprehensive guide both for those who are relatively new to the field and for those wishing to extend their knowledge to the frontier. It is a trusted and definitive source on panel data, having been edited by Professor Badi Baltagi, widely recognized as one of the foremost econometricians in the area of panel data econometrics. Professor Baltagi has successfully recruited an all-star cast of experts for each of the well-chosen topics in the Handbook.
This volume by Schnell covers techniques for the graphical display of data and of statistical quantities in the course of data analysis. Such "data-analysis graphics" are a useful tool for data analysts, particularly those in the social sciences.
A properly structured financial model can provide decision makers with a powerful planning tool that helps them identify the consequences of their decisions before they are put into practice. Introduction to Financial Models for Management and Planning, Second Edition enables professionals and students to learn how to develop and use computer-based models for financial planning. This volume provides critical tools for the financial toolbox, then shows how to use those tools to build successful models.
The process of transforming data into actionable knowledge is complex and requires the use of powerful machines and advanced analytics techniques. Analytics and Knowledge Management examines the role of analytics in knowledge management and the integration of big data theories, methods, and techniques into an organizational knowledge management framework. Its chapters, written by researchers and professionals, provide insight into theories, models, techniques, and applications, with case studies examining the use of analytics in organizations. Analytics is the examination, interpretation, and discovery of meaningful patterns, trends, and knowledge from data and textual information. It provides the basis for knowledge discovery and completes the cycle in which knowledge management and knowledge utilization happen. Organizations should develop knowledge with a focus on data quality, the application domain, the selection of analytics techniques, and how to take action based on the patterns and insights derived from analytics. Case studies in the book explore how to perform analytics on social networking and user-based data to develop knowledge. One case analyzes data from Twitter feeds; another examines the analysis of data obtained through user feedback. One chapter introduces the definitions and processes of social media analytics from different perspectives and focuses on the techniques and tools used for social media analytics. Data visualization has a critical role in the advancement of modern data analytics, particularly in the field of business intelligence and analytics, and can guide managers in understanding market trends and customer purchasing patterns over time.
The book illustrates various data visualization tools that can support answering different types of business questions to improve profits and customer relationships. This insightful reference concludes with a chapter on the critical issue of cybersecurity. It examines the process of collecting and organizing data, reviews various tools for text analysis and data analytics, and discusses dealing with large datasets and a great diversity of data types, from legacy systems to social network platforms.
Discover how statistical information impacts decisions in today's business world as Anderson/Sweeney/Williams/Camm/Cochran/Fry/Ohlmann's leading ESSENTIALS OF STATISTICS FOR BUSINESS AND ECONOMICS, 9E connects concepts in each chapter to real-world practice. This edition delivers sound statistical methodology, a proven problem-scenario approach and meaningful applications that reflect the latest developments in business and statistics today. More than 350 new and proven real business examples, a wealth of practical cases and meaningful hands-on exercises highlight statistics in action. You gain practice using leading professional statistical software with exercises and appendices that walk you through using JMP (R) Student Edition 14 and Excel (R) 2016. WebAssign's online course management system, available separately, further strengthens this business statistics approach and helps you maximize your course success.
Develop the analytical skills that are in high demand in businesses today with Camm/Cochran/Fry/Ohlmann's best-selling BUSINESS ANALYTICS, 4E. You master the full range of analytics as you strengthen descriptive, predictive and prescriptive analytic skills. Real examples and memorable visuals illustrate data and results for each topic. Step-by-step instructions guide you through using Microsoft (R) Excel, Tableau, R, and JMP Pro software to apply even advanced analytics concepts. Practical, relevant problems at all levels of difficulty further help you apply what you've learned. This edition assists you in becoming proficient in topics beyond the traditional quantitative concepts, such as data visualization and data mining, which are increasingly important in today's analytical problem solving. MindTap digital learning resources with an interactive eBook, algorithmic practice problems with solutions and Exploring Analytics visualizations strengthen your understanding of key concepts.
The book unifies quantum theory and the general theory of relativity. Since this problem has remained unsolved for about 100 years and influences so many fields, the result is probably of some importance to the scientific community. Examples like the Higgs field, the limit to the classical Dirac, Klein-Gordon, or Schroedinger cases, quantized Schwarzschild, Kerr, and Kerr-Newman objects, and the photon are considered for illustration. An interesting explanation for the asymmetry of matter and antimatter in the early universe was found while quantizing the Schwarzschild metric.
A variety of different social, natural, and technological systems can be described by the same mathematical framework. This holds from the Internet to food webs to boards of company directors. In all these situations a graph of the elements of the system and their interconnections displays a universal feature: there are only a few elements with many connections, and many elements with few connections. This book presents the experimental evidence for these 'scale-free networks' and provides students and researchers with a corpus of theoretical results and algorithms to analyse and understand these features. The content of this book and its exposition make it a clear textbook for beginners and a reference book for experts.
An introduction to how the mathematical tools from quantum field theory can be applied to economics and finance, providing a wide range of quantum mathematical techniques for designing financial instruments. The ideas of Lagrangians, Hamiltonians, state spaces, operators and Feynman path integrals are demonstrated to be the mathematical underpinning of quantum field theory, and are employed to formulate a comprehensive mathematical theory of asset pricing as well as of interest rates, validated by empirical evidence. Numerical algorithms and simulations are applied to the study of asset pricing models as well as of nonlinear interest rates. A range of economic and financial topics are shown to have quantum mechanical formulations, including options, coupon bonds, nonlinear interest rates, risky bonds and the microeconomic action functional. This is an invaluable resource for experts in quantitative finance and in mathematics who have no specialist knowledge of quantum field theory.
Actuaries have access to a wealth of individual data in pension and insurance portfolios, but rarely use its full potential. This book will pave the way, from methods using aggregate counts to modern developments in survival analysis. Based on the fundamental concept of the hazard rate, Part I shows how and why to build statistical models based on data at the level of individual persons in a pension scheme or life insurance portfolio. Extensive use is made of the R statistics package. Smooth models, including regression and spline models in one and two dimensions, are covered in depth in Part II. Finally, Part III uses multiple-state models to extend survival models beyond the simple life/death setting, and includes a brief introduction to the modern counting process approach. Practising actuaries will find this book indispensable, and students will find it helpful when preparing for their professional examinations.
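As a minimal illustration of the hazard-rate concept this book is built on (a sketch in Python, not code from the book): under a constant hazard rate mu, the probability of surviving past time t is exp(-mu * t), and the expected future lifetime is 1/mu. The hazard value below is invented, not taken from any real life table.

```python
import math

# Survival function under a constant hazard rate mu:
# S(t) = exp(-mu * t) is the probability of surviving past time t.
def survival(mu, t):
    return math.exp(-mu * t)

mu = 0.05  # illustrative annual hazard, made up for this sketch
print(survival(mu, 10))  # ~0.6065: chance of surviving ten more years
print(1 / mu)            # 20.0: expected future lifetime in years
```

Real mortality work replaces the constant mu with age-dependent hazards, which is where the smooth regression and spline models of Part II come in.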
Chris Albright's VBA FOR MODELERS, 4E, International Edition is an essential tool for helping students learn to use Visual Basic for Applications (VBA) as a means to automate common spreadsheet tasks, as well as to create sophisticated management science applications. VBA is the programming language for Microsoft (R) Office. VBA FOR MODELERS, 4E, International Edition contains two parts. The first part teaches students the essentials of VBA for Excel. The second part illustrates how a number of management science models can be automated with VBA. From a user's standpoint, these applications hide the details of the management science techniques and instead present a simple user interface for inputs and results.
What do we mean by inequality comparisons? If the rich just get richer and the poor get poorer, the answer might seem easy. But what if the income distribution changes in a complicated way? Can we use mathematical or statistical techniques to simplify the comparison problem in a way that has economic meaning? What does it mean to measure inequality? Is it similar to National Income? Or a price index? Is it enough just to work out the Gini coefficient? Measuring Inequality tackles these questions and examines the underlying principles of inequality measurement and its relation to welfare economics, distributional analysis, and information theory. The book covers modern theoretical developments in inequality analysis, as well as showing how the way we think about inequality today has been shaped by classic contributions in economics and related disciplines. Formal results and detailed literature discussion are provided in two appendices. The principal points are illustrated in the main text, using examples from US and UK data, as well as other data sources, and associated web materials provide hands-on learning. Measuring Inequality is designed to appeal to both undergraduate and post-graduate students, and academic economists. Its emphasis on practical application means that it will also be useful to policy analysts and advisors.
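The Gini coefficient mentioned above has a compact definition: half the mean absolute difference between all pairs of incomes, divided by the mean income. A minimal Python sketch on invented incomes (not from the book or its US/UK data sources):

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x)).
    0 means perfect equality; values near 1 mean extreme inequality."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes)
    return mad / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))  # 0.0: everyone earns the same
print(gini([0, 0, 0, 100]))    # 0.75: one person holds everything
```

Note that with one person holding all income the coefficient is (n - 1)/n, not 1; it only approaches 1 as the population grows, one of the measurement subtleties the book discusses.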
'A manual for the 21st-century citizen... accessible, refreshingly critical, relevant and urgent' - Financial Times 'Fascinating and deeply disturbing' - Yuval Noah Harari, Guardian Books of the Year In this New York Times bestseller, Cathy O'Neil, one of the first champions of algorithmic accountability, sounds an alarm on the mathematical models that pervade modern life -- and threaten to rip apart our social fabric. We live in the age of the algorithm. Increasingly, the decisions that affect our lives - where we go to school, whether we get a loan, how much we pay for insurance - are being made not by humans, but by mathematical models. In theory, this should lead to greater fairness: everyone is judged according to the same rules, and bias is eliminated. And yet, as Cathy O'Neil reveals in this urgent and necessary book, the opposite is true. The models being used today are opaque, unregulated, and incontestable, even when they're wrong. Most troubling, they reinforce discrimination. Tracing the arc of a person's life, O'Neil exposes the black box models that shape our future, both as individuals and as a society. These "weapons of math destruction" score teachers and students, sort CVs, grant or deny loans, evaluate workers, target voters, and monitor our health. O'Neil calls on modellers to take more responsibility for their algorithms and on policy makers to regulate their use. But in the end, it's up to us to become more savvy about the models that govern our lives. This important book empowers us to ask the tough questions, uncover the truth, and demand change.
This monograph presents a fundamental generalization of conventional probability theory. It allows the concept of probability to be applied even in those cases where the available information is insufficient to characterize every relevant event by a single number. The mathematically rigorous handling of such probability assessments requires a systematic extension of the canon of concepts and methods; the foundations for this are laid in the present volume. The possible applications of interval probability are considerably more extensive than those of the conventional concept of probability, e.g. in medicine, engineering, insurance, and artificial intelligence.
This book provides a comprehensive account of stochastic filtering as a modeling tool in finance and economics. It aims to present this very important tool with a view to making it more popular among researchers in the disciplines of finance and economics. It is not intended to give a complete mathematical treatment of different stochastic filtering approaches, but rather to describe them in simple terms and illustrate their application with real historical data for problems normally encountered in these disciplines. Beyond laying out the steps to be implemented, the steps are demonstrated in the context of different market segments. Although no prior knowledge in this area is required, the reader is expected to have knowledge of probability theory as well as a general mathematical aptitude. Its simple presentation of complex algorithms required to solve modeling problems in increasingly sophisticated financial markets makes this book particularly valuable as a reference for graduate students and researchers interested in the field. Furthermore, it analyses the model estimation results in the context of the market and contrasts these with contemporary research publications. It is also suitable for use as a text for graduate level courses on stochastic modeling.
The use of credit scoring - the quantitative and statistical techniques used to assess the credit risks involved in lending to consumers - has been one of the most successful, if unsung, applications of mathematics in business over the last fifty years. Now, with lenders changing their objectives from minimising defaults to maximising profits, the saturation of the consumer credit market allowing borrowers to be more discriminating in their choice of loans, mortgages and credit cards, and the Basel Accord banking regulations raising the profile of credit scoring within banks, there are a number of challenges that require new models that use credit scores as inputs and extend the ideas of credit scoring. This book reviews the current methodology and measures used in credit scoring and then looks at the models that can be used to address these new challenges. The first chapter describes what a credit score is, how a scorecard that produces credit scores is built, and how the score is used in the lending decision. The second chapter describes the different ways the quality of a scorecard can be measured and points out how some of these measure the discrimination of the score, some its probability predictions, and some the categorical predictions made using the score. The remaining three chapters address how to use risk and response scoring to model the new problems in consumer lending. Chapter three looks at models that assist in deciding how to vary the loan terms offered to different potential borrowers depending on their individual characteristics; risk-based pricing is the most common approach being introduced. Chapter four describes how one can use Markov chains and survival analysis to model the dynamics of a borrower's repayment and ordering behaviour.
These models allow one to make decisions that maximise the profitability of the borrower to the lender and can be considered part of a customer relationship management strategy. The last chapter looks at how the new banking regulations in the Basel Accord apply to consumer lending. It develops models that show how they will change the operating decisions used in consumer lending and how their requirement for stress testing calls for new models that assess the credit risk of portfolios of consumer loans rather than the credit risks of individual loans.
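The Markov-chain idea described in chapter four can be sketched in a few lines: borrowers occupy repayment states, a transition matrix gives the monthly probabilities of moving between them, and iterating the chain yields the state distribution over time. The states and probabilities below are invented for illustration, not taken from the book:

```python
# Toy Markov chain for borrower repayment behaviour, with a made-up
# monthly transition matrix (states: up-to-date, in arrears, default).
P = [
    [0.90, 0.08, 0.02],  # from up-to-date
    [0.50, 0.30, 0.20],  # from in arrears
    [0.00, 0.00, 1.00],  # default is absorbing
]

def step(dist, P):
    """Advance one month: new_j = sum_i dist_i * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(P)))
            for j in range(len(P))]

dist = [1.0, 0.0, 0.0]  # everyone starts up-to-date
for _ in range(12):
    dist = step(dist, P)
print(dist[2])  # probability of having defaulted within a year
```

Portfolio-level risk measures of the kind the Basel stress tests require can then be read off the long-run state distribution of chains like this one.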
The rapidly growing field of computational social choice, at the intersection of computer science and economics, deals with the computational aspects of collective decision making. This handbook, written by thirty-six prominent members of the computational social choice community, covers the field comprehensively. Chapters devoted to each of the field's major themes offer detailed introductions. Topics include voting theory (such as the computational complexity of winner determination and manipulation in elections), fair allocation (such as algorithms for dividing divisible and indivisible goods), coalition formation (such as matching and hedonic games), and many more. Graduate students, researchers, and professionals in computer science, economics, mathematics, political science, and philosophy will benefit from this accessible and self-contained book.
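To give a taste of the winner-determination questions the handbook studies, here is a minimal Python sketch (the ballots are invented for illustration) showing that plurality and Borda count can elect different candidates from the very same ballots:

```python
# Each ballot ranks the candidates best-to-worst (made-up example).
ballots = [
    ["A", "B", "C"], ["A", "B", "C"],
    ["B", "C", "A"], ["C", "B", "A"],
]

def plurality(ballots):
    """Winner = candidate with the most first-place votes."""
    tally = {}
    for b in ballots:
        tally[b[0]] = tally.get(b[0], 0) + 1
    return max(tally, key=tally.get)

def borda(ballots):
    """Winner = candidate with the highest Borda score
    (m-1 points for first place, m-2 for second, ..., 0 for last)."""
    m = len(ballots[0])
    score = {}
    for b in ballots:
        for pos, c in enumerate(b):
            score[c] = score.get(c, 0) + (m - 1 - pos)
    return max(score, key=score.get)

print(plurality(ballots))  # A: two first-place votes
print(borda(ballots))      # B: ranked highly on every ballot
```

For these toy rules winner determination is trivial; the computational questions the handbook treats arise for rules (such as Kemeny or Dodgson) where finding the winner is itself a hard optimization problem.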
How the obsession with quantifying human performance threatens our schools, medical care, businesses, and government. Today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we've gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions. In this timely and powerful book, Jerry Muller uncovers the damage our obsession with metrics is causing, and shows how we can begin to fix the problem. Filled with examples from education, medicine, business and finance, government, the police and military, and philanthropy and foreign aid, this brief and accessible book explains why the seemingly irresistible pressure to quantify performance distorts and distracts, whether by encouraging "gaming the stats" or "teaching to the test." That's because what can and does get measured is not always worth measuring, may not be what we really want to know, and may draw effort away from the things we care about. Along the way, we learn why paying for measured performance doesn't work, why surgical scorecards may increase deaths, and much more. But metrics can be good when used as a complement to, rather than a replacement for, judgment based on personal experience, and Muller also gives examples of when metrics have been beneficial. Complete with a checklist of when and how to use metrics, The Tyranny of Metrics is an essential corrective to a rarely questioned trend that increasingly affects us all.
This book provides a broad, mature, and systematic introduction to current financial econometric models and their applications to modeling and prediction of financial time series data. It utilizes real-world examples and real financial data throughout to apply the models and methods described. The author begins with basic characteristics of financial time series data before covering three main topics: analysis and application of univariate financial time series; the return series of multiple assets; and Bayesian inference in finance methods. Key features of the new edition include additional coverage of modern-day topics such as arbitrage, pair trading, realized volatility, and credit risk modeling; a smooth transition from S-Plus to R; and expanded empirical financial data sets. The overall objective of the book is to provide some knowledge of financial time series, introduce some statistical tools useful for analyzing these series, and help readers gain experience in financial applications of various econometric methods.
The substantially updated third edition of the popular Actuarial Mathematics for Life Contingent Risks is suitable for advanced undergraduate and graduate students of actuarial science, for trainee actuaries preparing for professional actuarial examinations, and for life insurance practitioners who wish to increase or update their technical knowledge. The authors provide intuitive explanations alongside mathematical theory, equipping readers to understand the material in sufficient depth to apply it in real-world situations and to adapt their results in a changing insurance environment. Topics include modern actuarial paradigms, such as multiple state models, cash-flow projection methods and option theory, all of which are required for managing the increasingly complex range of contemporary long-term insurance products. Numerous exam-style questions allow readers to prepare for traditional professional actuarial exams, and extensive use of Excel ensures that readers are ready for modern, Excel-based exams and for the actuarial work environment. The Solutions Manual (ISBN 9781108747615), available for separate purchase, provides detailed solutions to the text's exercises.
The majority of empirical research in economics ignores the potential benefits of nonparametric methods, while the majority of advances in nonparametric theory ignores the problems faced in applied econometrics. This book helps bridge this gap between applied economists and theoretical nonparametric econometricians. It discusses in depth, and in terms that someone with only one year of graduate econometrics can understand, basic to advanced nonparametric methods. The analysis starts with density estimation and motivates the procedures through methods that should be familiar to the reader. It then moves on to kernel regression, estimation with discrete data, and advanced methods such as estimation with panel data and instrumental variables models. The book pays close attention to the issues that arise with programming, computing speed, and application. In each chapter, the methods discussed are applied to actual data, paying attention to presentation of results and potential pitfalls.
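As a taste of the density estimation the book starts from, here is a minimal Gaussian kernel density estimator in Python (the data and bandwidth are invented for illustration; this is a sketch of the standard estimator, not code from the book):

```python
import math

# Gaussian kernel density estimate at a point x:
# f_hat(x) = (1 / (n*h)) * sum_i K((x - x_i) / h),
# where K is the standard normal density and h is the bandwidth.
def kde(x, data, h):
    n = len(data)
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - xi) / h) for xi in data) / (n * h)

data = [1.0, 1.2, 2.8, 3.0, 3.1]  # made-up sample with two clusters
print(kde(3.0, data, h=0.5) > kde(2.0, data, h=0.5))  # True: denser near 3
```

The bandwidth h controls the bias-variance trade-off that drives much of nonparametric theory; choosing it well (and extending the idea to kernel regression) is exactly the kind of practical issue the book treats in depth.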