Welcome to Loot.co.za!
Learn by doing with this user-friendly introduction to time series data analysis in R. This book explores the intricacies of managing and cleaning time series data of different sizes, scales and granularity, data preparation for analysis and visualization, and different approaches to classical and machine learning time series modeling and forecasting. A range of pedagogical features support students, including end-of-chapter exercises, problems, quizzes and case studies. The case studies are designed to stretch the learner, introducing larger data sets, enhanced data management skills, and R packages and functions appropriate for real-world data analysis. On top of providing commented R programs and data sets, the book's companion website offers extra case studies, lecture slides, videos and exercise solutions. Accessible to those with a basic background in statistics and probability, this is an ideal hands-on text for undergraduate and graduate students, as well as researchers in data-rich disciplines.
Meaningful use of advanced Bayesian methods requires a good understanding of the fundamentals. This engaging book explains the ideas that underpin the construction and analysis of Bayesian models, with particular focus on computational methods and schemes. The unique features of the text are the extensive discussion of available software packages combined with a brief but complete and mathematically rigorous introduction to Bayesian inference. The text introduces Monte Carlo methods, Markov chain Monte Carlo methods, and Bayesian software, with additional material on model validation and comparison, transdimensional MCMC, and conditionally Gaussian models. The inclusion of problems makes the book suitable as a textbook for a first graduate-level course in Bayesian computation with a focus on Monte Carlo methods. The extensive discussion of Bayesian software - R/R-INLA, OpenBUGS, JAGS, STAN, and BayesX - makes it useful also for researchers and graduate students from beyond statistics.
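The Monte Carlo methods the blurb describes can be illustrated with a minimal sketch (not taken from the book; the function name and setup are invented for illustration): estimating an expectation by averaging simulated draws, here E[X^2] for X ~ Uniform(0, 1), whose exact value is 1/3.

```python
import random

def mc_mean_of_square(n=100_000, seed=0):
    # Plain Monte Carlo: estimate E[X^2] for X ~ Uniform(0, 1)
    # by averaging n simulated draws; the exact value is 1/3.
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n)) / n
```

The estimate's standard error shrinks at rate 1/sqrt(n); Markov chain Monte Carlo extends this same averaging idea to draws from distributions that cannot be sampled directly.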
Mathematical models in the social sciences have become increasingly sophisticated and widespread in the last decade. This period has also seen many critiques, most lamenting the sacrifices incurred in pursuit of mathematical rigor. If, as critics argue, our ability to understand the world has not improved during the mathematization of the social sciences, we might want to adopt a different paradigm. This book examines the three main fields of mathematical modeling - game theory, statistics, and computational methods - and proposes a new framework for modeling. Unlike previous treatments which view each field separately, the treatment provides a framework that spans and incorporates the different methodological approaches. The goal is to arrive at a new vision of modeling that allows researchers to solve more complex problems in the social sciences. Additionally, a special emphasis is placed upon the role of computational modeling in the social sciences.
This book is intended for use in a rigorous introductory PhD level course in econometrics, or in a field course in econometric theory. It covers the measure-theoretical foundation of probability theory, the multivariate normal distribution with its application to classical linear regression analysis, various laws of large numbers, central limit theorems and related results for independent random variables as well as for stationary time series, with applications to asymptotic inference of M-estimators, and maximum likelihood theory. Some chapters have their own appendices containing the more advanced topics and/or difficult proofs. Moreover, there are three appendices with material that is supposed to be known. Appendix I contains a comprehensive review of linear algebra, including all the proofs. Appendix II reviews a variety of mathematical topics and concepts that are used throughout the main text, and Appendix III reviews complex analysis. Therefore, this book is uniquely self-contained.
"Family Spending" provides analysis of household expenditure broken down by age and income, household composition, socio-economic characteristics and geography. This report will be of interest to academics, policy makers, government and the general public.
This book is based on two Sir Richard Stone lectures at the Bank of England and the National Institute for Economic and Social Research. Largely non-technical, the first part of the book covers some of the broader issues involved in Stone's and others' work in statistics. It explores the more philosophical issues attached to statistics, econometrics and forecasting and describes the paradigm shift back to the Bayesian approach to scientific inference. The first part concludes with simple examples from the different worlds of educational management and golf clubs. The second, more technical part covers in detail the structural econometric time series analysis (SEMTSA) approach to statistical and econometric modeling.
NOW WITH NEW PROLOGUE ABOUT DEMYSTIFYING CORONAVIRUS NUMBERS, DONALD TRUMP AND WHY STATISTICS MATTER MORE THAN EVER 'The Number Bias combines vivid storytelling with authoritative analysis to deliver a warning about the way numbers can lead us astray - if we let them.' TIM HARFORD Even if you don't consider yourself a numbers person, you are a numbers person. The time has come to put numbers in their place. Not high up on a pedestal, or out on the curb, but right where they belong: beside words. It is not an overstatement to say that numbers dictate the way we live our lives. They tell us how we're doing at school, how much we weigh, who might win an election and whether the economy is booming. But numbers aren't as objective as they may seem; behind every number is a story. Yet politicians, businesses and the media often forget this - or use it for their own gain. Sanne Blauw travels the world to unpick our relationship with numbers and demystify our misguided allegiance, from Florence Nightingale using statistics to petition for better conditions during the Crimean War to the manipulation of numbers by the American tobacco industry and the ambiguous figures peddled during the EU referendum. Taking us from the everyday numbers that govern our health and wellbeing to the statistics used to wield enormous power and influence, The Number Bias counsels us to think more wisely. 'A beautifully accessible exploration of how numbers shape our lives, and the importance of accurately interpreting the statistics we are fed.' ANGELA SAINI, author of Superior
Originating in economics but now used in a variety of disciplines, including medicine, epidemiology and the social sciences, this book provides accessible coverage of the theoretical foundations of the Logit model as well as its applications to concrete problems. It is written not only for economists but for researchers working in disciplines where it is necessary to model qualitative random variables. J.S. Cramer has also provided data sets on which to practice Logit analysis.
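The Logit model described above posits P(y=1|x) = 1/(1+e^-(a+bx)) for a binary outcome y. As a minimal sketch (the toy data and function names are invented for illustration, not drawn from Cramer's text), the coefficients can be fit by gradient ascent on the Bernoulli log-likelihood:

```python
import math

def fit_logit(xs, ys, lr=0.1, steps=5000):
    # Fit P(y=1|x) = 1/(1 + exp(-(a + b*x))) by gradient ascent
    # on the log-likelihood; a is the intercept, b the slope.
    a = b = 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p        # dlogL/da
            gb += (y - p) * x  # dlogL/db
        a += lr * ga / len(xs)
        b += lr * gb / len(xs)
    return a, b

# Toy data: higher x makes y=1 more likely, so the fitted slope b is positive.
a, b = fit_logit([0, 1, 2, 3, 4, 5], [0, 0, 1, 0, 1, 1])
```

In practice one would use a statistics package rather than hand-rolled gradient ascent, but the sketch shows what such a package maximizes.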
Social media has made charts, infographics and diagrams ubiquitous, and easier to share than ever. While such visualisations can better inform us, they can also deceive by displaying incomplete or inaccurate data, suggesting misleading patterns, or misinform by being poorly designed. Many of us are ill equipped to interpret the visuals that politicians, journalists, advertisers and even employers present each day, enabling bad actors to easily manipulate visuals to promote their own agendas. Public conversations are increasingly driven by numbers, and to make sense of them, we must be able to decode and use visual information. By examining contemporary examples ranging from election-result infographics to global GDP maps and box-office record charts, How Charts Lie teaches us how to do just that.
"The level is appropriate for an upper-level undergraduate or graduate-level statistics major. Sampling: Design and Analysis (SDA) will also benefit a non-statistics major with a desire to understand the concepts of sampling from a finite population. A student with patience to delve into the rigor of survey statistics will gain even more from the content that SDA offers. The updates to SDA have potential to enrich traditional survey sampling classes at both the undergraduate and graduate levels. The new discussions of low response rates, non-probability surveys, and internet as a data collection mode hold particular value, as these statistical issues have become increasingly important in survey practice in recent years… I would eagerly adopt the new edition of SDA as the required textbook." (Emily Berg, Iowa State University)
The idea that simplicity matters in science is as old as science itself, with the much cited example of Ockham's Razor. A problem with Ockham's Razor is that nearly everybody seems to accept it, but few are able to define its exact meaning and to make it operational in a non-arbitrary way. Using a multidisciplinary perspective including philosophers, mathematicians, econometricians and economists, this monograph examines simplicity by asking six questions: What is meant by simplicity? How is simplicity measured? Is there an optimum trade-off between simplicity and goodness-of-fit? What is the relation between simplicity and empirical modelling? What is the relation between simplicity and prediction? What is the connection between simplicity and convenience?
Economic and financial time series feature important seasonal fluctuations. Despite their regular and predictable patterns over the year, month or week, they pose many challenges to economists and econometricians. This book provides a thorough review of the recent developments in the econometric analysis of seasonal time series. It is designed for an audience of specialists in economic time series analysis and advanced graduate students. It is the most comprehensive and balanced treatment of the subject since the mid-1980s.
Experimental methods in economics respond to circumstances that are not completely dictated by accepted theory or outstanding problems. While the field of economics makes sharp distinctions and produces precise theory, the work of experimental economics sometimes appears blurred and may produce results that vary from strong support to little or partial support of the relevant theory.
The advent of "Big Data" has brought with it a rapid diversification of data sources, requiring analysis that accounts for the fact that these data have often been generated and recorded for different reasons. Data integration involves combining data residing in different sources to enable statistical inference, or to generate new statistical data for purposes that cannot be served by each source on its own. This can yield significant gains for scientific as well as commercial investigations. However, valid analysis of such data should allow for the additional uncertainty due to entity ambiguity, whenever it is not possible to state with certainty that the integrated source is the target population of interest. Analysis of Integrated Data aims to provide a solid theoretical basis for this statistical analysis in three generic settings of entity ambiguity: statistical analysis of linked datasets that may contain linkage errors; datasets created by a data fusion process, where joint statistical information is simulated using the information in marginal data from non-overlapping sources; and estimation of target population size when target units are either partially or erroneously covered in each source. Covers a range of topics under an overarching perspective of data integration. Focuses on statistical uncertainty and inference issues arising from entity ambiguity. Features state of the art methods for analysis of integrated data. Identifies the important themes that will define future research and teaching in the statistical analysis of integrated data. Analysis of Integrated Data is aimed primarily at researchers and methodologists interested in statistical methods for data from multiple sources, with a focus on data analysts in the social sciences, and in the public and private sectors.
'A statistical national treasure' Jeremy Vine, BBC Radio 2 'Required reading for all politicians, journalists, medics and anyone who tries to influence people (or is influenced) by statistics. A tour de force' Popular Science Do busier hospitals have higher survival rates? How many trees are there on the planet? Why do old men have big ears? David Spiegelhalter reveals the answers to these and many other questions - questions that can only be addressed using statistical science. Statistics has played a leading role in our scientific understanding of the world for centuries, yet we are all familiar with the way statistical claims can be sensationalised, particularly in the media. In the age of big data, as data science becomes established as a discipline, a basic grasp of statistical literacy is more important than ever. In The Art of Statistics, David Spiegelhalter guides the reader through the essential principles we need in order to derive knowledge from data. Drawing on real world problems to introduce conceptual issues, he shows us how statistics can help us determine the luckiest passenger on the Titanic, whether serial killer Harold Shipman could have been caught earlier, and if screening for ovarian cancer is beneficial. 'Shines a light on how we can use the ever-growing deluge of data to improve our understanding of the world' Nature
Introduction to Financial Mathematics: Option Valuation, Second Edition is a well-rounded primer to the mathematics and models used in the valuation of financial derivatives. The book consists of fifteen chapters, the first ten of which develop option valuation techniques in discrete time, the last five describing the theory in continuous time. The first half of the textbook develops basic finance and probability. The author then treats the binomial model as the primary example of discrete-time option valuation. The final part of the textbook examines the Black-Scholes model. The book is written to provide a straightforward account of the principles of option pricing and examines these principles in detail using standard discrete and stochastic calculus models. Additionally, the second edition has new exercises and examples, and includes many tables and graphs generated by over 30 MS Excel VBA modules available on the author's webpage https://home.gwu.edu/~hdj/.
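The binomial model the book treats as its primary example of discrete-time option valuation can be sketched in a few lines (a generic one-period illustration, not the author's code): the call price is the discounted expected payoff under the risk-neutral probability q.

```python
def binomial_call(S, K, u, d, r):
    # One-period binomial model: the stock moves from S to S*u or S*d,
    # and r is the gross risk-free rate per period (d < r < u).
    # q is the risk-neutral probability that makes the discounted
    # stock price a martingale: q*u + (1-q)*d = r.
    q = (r - d) / (u - d)
    up = max(S * u - K, 0.0)
    down = max(S * d - K, 0.0)
    return (q * up + (1 - q) * down) / r

# Example: S=100, K=100, u=1.2, d=0.8, r=1.05 gives q=0.625
# and a call price of 12.5/1.05, roughly 11.90.
price = binomial_call(100, 100, 1.2, 0.8, 1.05)
```

Valuing backward through a recombining tree of such one-period steps gives the multi-period binomial price, which converges to the Black-Scholes value as the number of steps grows.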
This substantial volume has two principal objectives. First, it provides an overview of the statistical foundations of simulation-based inference (SBI). This includes the summary and synthesis of the many concepts and results extant in the theoretical literature, the different classes of problems and estimators, the asymptotic properties of these estimators, as well as descriptions of the different simulators in use. Second, the volume provides empirical and operational examples of SBI methods. Often what is missing, even in existing applied papers, is a treatment of operational issues: which simulator works best for which problem, and why? This volume explicitly addresses the important numerical and computational issues in SBI which are not covered comprehensively in the existing literature. Examples of such issues are: comparisons with existing tractable methods, the number of replications needed for robust results, the choice of instruments, simulation noise and bias, as well as efficiency loss in practice.
If you are a manager who receives the results of any data analyst's work to help with your decision-making, this book is for you. Anyone playing a role in the field of analytics can benefit from this book as well. In the two decades the editors of this book spent teaching and consulting in the field of analytics, they noticed a critical shortcoming in the communication abilities of many analytics professionals. Specifically, analysts have difficulty articulating in business terms what their analyses showed and what actionable recommendations they made. When analysts made presentations, they tended to lapse into the technicalities of mathematical procedures, rather than focusing on the strategic and tactical impact and meaning of their work. As analytics has become more mainstream and widespread in organizations, this problem has grown more acute. Data Analytics: Effective Methods for Presenting Results tackles this issue. The editors have drawn on their experience as presenters, and as audience members who have become lost during presentations. Over the years, they experimented with different ways of presenting analytics work to make a more compelling case to top managers. They have discovered tried and true methods for improving presentations, which they share. The book also presents insights from other analysts and managers who share their own experiences. It is truly a collection of experiences and insight from academics and professionals involved with analytics. The book is not a primer on how to draw the most beautiful charts and graphs or about how to perform any specific kind of analysis. Rather, it shares the experiences of professionals in various industries about how they present their analytics results effectively. They tell their stories on how to win over audiences. The book spans multiple functional areas within a business, and in some cases, it discusses how to adapt presentations to the needs of audiences at different levels of management.
This compendium contains and explains essential statistical formulas within an economic context. A broad range of aids and supportive examples will help readers to understand the formulas and their practical applications. This statistical formulary is presented in a practice-oriented, clear, and understandable manner, as it is needed for meaningful and relevant application in global business, as well as in the academic setting and economic practice. The topics presented include, but are not limited to: statistical signs and symbols, descriptive statistics, empirical distributions, ratios and index figures, correlation analysis, regression analysis, inferential statistics, probability calculation, probability distributions, theoretical distributions, statistical estimation methods, confidence intervals, statistical testing methods, the Peren-Clement index, and the usual statistical tables. Given its scope, the book offers an indispensable reference guide and is a must-read for undergraduate and graduate students, as well as managers, scholars, and lecturers in business, politics, and economics.
The process of transforming data into actionable knowledge is complex and requires powerful machines and advanced analytics techniques. Analytics and Knowledge Management examines the role of analytics in knowledge management and the integration of big data theories, methods, and techniques into an organizational knowledge management framework. Its chapters, written by researchers and professionals, provide insight into theories, models, techniques, and applications, with case studies examining the use of analytics in organizations. Analytics is the examination, interpretation, and discovery of meaningful patterns, trends, and knowledge from data and textual information. It provides the basis for knowledge discovery and completes the cycle in which knowledge management and knowledge utilization happen. To develop knowledge, organizations should focus on data quality, the application domain, the selection of analytics techniques, and how to take action based on the patterns and insights derived from analytics. Case studies in the book explore how to perform analytics on social networking and user-based data to develop knowledge. One case explores the analysis of data from Twitter feeds; another examines the analysis of data obtained through user feedback. One chapter introduces the definitions and processes of social media analytics from different perspectives and focuses on the techniques and tools used for social media analytics. Data visualization has a critical role in the advancement of modern data analytics, particularly in the field of business intelligence and analytics, and can guide managers in understanding market trends and customer purchasing patterns over time.
The book illustrates various data visualization tools that can support answering different types of business questions to improve profits and customer relationships. This insightful reference concludes with a chapter on the critical issue of cybersecurity. It examines the process of collecting and organizing data, reviews various tools for text analysis and data analytics, and discusses dealing with collections of large datasets and a great deal of diverse data types, from legacy systems to social network platforms.
Microeconometrics Using Stata, Second Edition is an invaluable reference for researchers and students interested in applied microeconometric methods. Like previous editions, this text covers all the classic microeconometric techniques ranging from linear models to instrumental-variables regression to panel-data estimation to nonlinear models such as probit, tobit, Poisson, and choice models. Each of these discussions has been updated to show the most modern implementation in Stata, and many include additional explanation of the underlying methods. In addition, the authors introduce readers to performing simulations in Stata and then use simulations to illustrate methods in other parts of the book. They even teach you how to code your own estimators in Stata. The second edition is greatly expanded; the new material is so extensive that the text now comprises two volumes. In addition to the classics, the book now teaches recently developed econometric methods and the methods newly added to Stata. Specifically, the book includes entirely new chapters on duration models; randomized control trials and exogenous treatment effects; endogenous treatment effects; models for endogeneity and heterogeneity, including finite mixture models, structural equation models, and nonlinear mixed-effects models; spatial autoregressive models; semiparametric regression; lasso for prediction and inference; and Bayesian analysis. Anyone interested in learning classic and modern econometric methods will find this the perfect companion. And those who apply these methods to their own data will return to this reference over and over as they need to implement the various techniques described in this book.
For one-semester courses in Introduction to Business Statistics. The gold standard in learning Microsoft Excel for business statistics. Statistics for Managers Using Microsoft (R) Excel (R), 9th Edition, Global Edition helps students develop the knowledge of Excel needed in future careers. The authors present statistics in the context of specific business fields, and now include a full chapter on business analytics. Guided by principles set forth by the ASA's Guidelines for Assessment and Instruction in Statistics Education (GAISE) reports and the authors' diverse teaching experiences, the text continues to innovate and improve the way this course is taught to students. Current data throughout gives students valuable practice analysing the types of data they will see in their professions, and the authors' friendly writing style includes tips and learning aids throughout.
This book includes many of the papers presented at the 6th International Workshop on Model-Oriented Data Analysis (MODA), held in June 2001. This series began in March 1987 with a meeting on the Wartburg near Eisenach (at that time in the GDR). The next four meetings were in 1990 (St Kyrik monastery, Bulgaria), 1992 (Petrodvorets, St Petersburg, Russia), 1995 (Spetses, Greece) and 1998 (Marseilles, France). Initially the main purpose of these workshops was to bring together leading scientists from 'Eastern' and 'Western' Europe for the exchange of ideas in theoretical and applied statistics, with special emphasis on experimental design. Now that the separation between East and West is much less rigid, this exchange has, in principle, become much easier. However, it is still important to provide opportunities for this interaction. MODA meetings are celebrated for their friendly atmosphere. Indeed, discussions between young and senior scientists at these meetings have resulted in several fruitful long-term collaborations. This intellectually stimulating atmosphere is achieved by limiting the number of participants to around eighty, by the choice of a location in which communal living is encouraged and, of course, through the careful scientific direction provided by the Programme Committee. It is a tradition of these meetings to provide low-cost accommodation, low fees and financial support for the travel of young and Eastern participants. This is only possible through the help of sponsors, and outside financial support was again important for the success of the meeting.
Given the huge amount of information on the internet and in practically every domain of knowledge that we are facing today, knowledge discovery calls for automation. The book deals with methods from classification and data analysis that respond effectively to this rapidly growing challenge. The interested reader will find new methodological insights as well as applications in economics, management science, finance, and marketing, and in pattern recognition, biology, health, and archaeology.