Discover the benefits of risk parity investing. Despite recent progress in the theoretical analysis and practical application of risk parity, many important fundamental questions remain to be answered. Risk Parity Fundamentals uses fundamental, quantitative, and historical analysis to address questions such as: What are the macroeconomic dimensions of risk in risk parity portfolios? What are the appropriate risk premiums in a risk parity portfolio? In what market environments might risk parity thrive or struggle? What is the role of leverage in a risk parity portfolio? An experienced researcher and portfolio manager who coined the term "risk parity," the author provides investors with a practical understanding of the risk parity investment approach. Investors will gain insight into the merit of risk parity as well as the practical and underlying aspects of risk parity investing.
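For orientation (a sketch, not drawn from the book's text): risk parity is commonly formalized as the portfolio in which every asset contributes equally to total risk. With weights $w$, covariance matrix $\Sigma$, and portfolio volatility $\sigma(w) = \sqrt{w^\top \Sigma w}$, the risk contribution of asset $i$ and the parity condition are

$$\mathrm{RC}_i = \frac{w_i\,(\Sigma w)_i}{\sigma(w)}, \qquad \sum_{i=1}^{N} \mathrm{RC}_i = \sigma(w), \qquad \mathrm{RC}_i = \frac{\sigma(w)}{N} \ \text{ for all } i,$$

where the middle identity follows from Euler's theorem applied to the homogeneous function $\sigma(w)$.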
This book is intended to provide the reader with a firm conceptual and empirical understanding of basic information-theoretic econometric models and methods. Because most data are observational, practitioners work with indirect noisy observations and ill-posed econometric models in the form of stochastic inverse problems. Consequently, traditional econometric methods in many cases are not applicable for answering many of the quantitative questions that analysts wish to ask. After initial chapters deal with parametric and semiparametric linear probability models, the focus turns to solving nonparametric stochastic inverse problems. In succeeding chapters, a family of power divergence measure likelihood functions is introduced for a range of traditional and nontraditional econometric-model problems. Finally, within either an empirical maximum likelihood or loss context, Ron C. Mittelhammer and George G. Judge suggest a basis for choosing a member of the divergence family.
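As background (the blurb does not spell out the family, but the Cressie-Read power divergence is the standard example of such a family in this literature): for probability vectors $p$ and $q$ and index $\gamma$, up to a normalizing convention,

$$I(p, q, \gamma) = \frac{1}{\gamma(\gamma+1)} \sum_{i=1}^{n} p_i \left[ \left( \frac{p_i}{q_i} \right)^{\gamma} - 1 \right],$$

with familiar estimation criteria such as empirical likelihood and exponential tilting arising as limiting members of the family.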
The process of transforming data into actionable knowledge is complex and requires powerful machines and advanced analytics techniques. Analytics and Knowledge Management examines the role of analytics in knowledge management and the integration of big data theories, methods, and techniques into an organizational knowledge management framework. Its chapters, written by researchers and professionals, provide insight into theories, models, techniques, and applications, with case studies examining the use of analytics in organizations. Analytics is the examination, interpretation, and discovery of meaningful patterns, trends, and knowledge from data and textual information. It provides the basis for knowledge discovery and completes the cycle in which knowledge management and knowledge utilization happen. Organizations developing knowledge should focus on data quality, the application domain, the selection of analytics techniques, and how to act on the patterns and insights derived from analytics. Case studies in the book explore how to perform analytics on social networking and user-based data to develop knowledge. One case explores how to analyze data from Twitter feeds; another examines the analysis of data obtained through user feedback. One chapter introduces the definitions and processes of social media analytics from different perspectives and focuses on the techniques and tools used for social media analytics. Data visualization plays a critical role in the advancement of modern data analytics, particularly in the field of business intelligence and analytics, and can guide managers in understanding market trends and customer purchasing patterns over time. The book illustrates various data visualization tools that can support answering different types of business questions to improve profits and customer relationships. This insightful reference concludes with a chapter on the critical issue of cybersecurity, examining the process of collecting and organizing data, reviewing various tools for text analysis and data analytics, and discussing how to deal with large collections of diverse data types drawn from sources ranging from legacy systems to social network platforms.
Published in 1932, this is the third edition of an original 1922 volume. The 1922 volume was, in turn, created as the replacement for the Institute of Actuaries Textbook, Part Three, which was the foremost source of knowledge on the subject of life contingencies for over 35 years. Assuming a high level of mathematical knowledge on the part of the reader, it was aimed chiefly at actuarial students and those with a professional interest in the relationship between statistics and mortality. Highly organised and containing numerous mathematical formulae, this book will remain of value to anyone with an interest in risk calculation and the development of the insurance industry.
Logistic models are widely used in economics and other disciplines and are easily available as part of many statistical software packages. This text for graduates, practitioners and researchers in economics, medicine and statistics, which was originally published in 2003, explains the theory underlying logit analysis and gives a thorough explanation of the technique of estimation. The author has provided many empirical applications as illustrations and worked examples. A large data set - drawn from Dutch car ownership statistics - is provided online for readers to practise the techniques they have learned. Several varieties of logit model have been developed independently in various branches of biology, medicine and other disciplines. This book takes its inspiration from logit analysis as it is practised in economics, but it also pays due attention to developments in these other fields.
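For reference, the binary logit model at the heart of the book takes the familiar form

$$\Pr(y_i = 1 \mid x_i) = \Lambda(x_i^\top \beta) = \frac{e^{x_i^\top \beta}}{1 + e^{x_i^\top \beta}},$$

with $\beta$ estimated by maximizing the log-likelihood $\sum_i \big[ y_i \ln \Lambda(x_i^\top\beta) + (1-y_i)\ln\big(1-\Lambda(x_i^\top\beta)\big) \big]$.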
This book is a collection of essays written in honor of Professor Peter C. B. Phillips of Yale University by some of his former students. The essays analyze a number of important issues in econometrics, all of which Professor Phillips has directly influenced through his seminal scholarly contribution as well as through his remarkable achievements as a teacher. The essays are organized to cover topics in higher-order asymptotics, deficient instruments, nonstationarity, LAD and quantile regression, and nonstationary panels. These topics span both theoretical and applied approaches and are intended for use by professionals and advanced graduate students.
Econophysics applies the methodology of physics to the study of economics. However, whilst physicists have a good understanding of statistical physics, they may be unfamiliar with recent advances in statistical inference, including Bayesian and predictive methods. Equally, economists with a knowledge of probability may lack a background in statistical physics and agent-based models. Proposing a unified view for a dynamic probabilistic approach, this book is useful for advanced undergraduate and graduate students as well as researchers in physics, economics and finance. The book takes a finitary approach to the subject, discussing the essentials of applied probability, and covering finite Markov chain theory and its applications to real systems. Each chapter ends with a summary, suggestions for further reading, and exercises with solutions at the end of the book.
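As a taste of the finite Markov chain material, here is a minimal sketch of computing a stationary distribution; the Python code and the two-state transition matrix are illustrative assumptions, not examples from the book.

```python
# Minimal sketch: stationary distribution of a finite Markov chain.
# The 2-state transition matrix is a hypothetical example.
import numpy as np

P = np.array([[0.9, 0.1],     # row-stochastic transition matrix
              [0.5, 0.5]])

# The stationary distribution pi satisfies pi @ P = pi, i.e. it is the
# left eigenvector of P (eigenvector of P.T) for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                # normalize to a probability distribution
print(pi)                     # approximately [0.833, 0.167]
```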
This 2005 volume contains the papers presented in honor of the lifelong achievements of Thomas J. Rothenberg on the occasion of his retirement. The authors of the chapters include many of the leading econometricians of our day, and the chapters address topics of current research significance in econometric theory. The chapters cover four themes: identification and efficient estimation in econometrics; asymptotic approximations to the distributions of econometric estimators and tests; inference involving potentially nonstationary time series, such as processes that might have a unit autoregressive root; and nonparametric and semiparametric inference. Several of the chapters provide overviews and treatments of basic conceptual issues, while others advance our understanding of the properties of existing econometric procedures and/or propose new ones. Specific topics include identification in nonlinear models, inference with weak instruments, tests for nonstationarity in time series and panel data, generalized empirical likelihood estimation, and the bootstrap.
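For readers new to the unit-root theme: the canonical test regression (the Dickey-Fuller form, which such chapters build on rather than introduce) is

$$\Delta y_t = \alpha + (\rho - 1)\, y_{t-1} + \varepsilon_t,$$

where the null of nonstationarity is $H_0: \rho = 1$, i.e. a zero coefficient on $y_{t-1}$; under this null the $t$-statistic follows a nonstandard (Dickey-Fuller) distribution rather than the usual normal limit.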
Originally published in 2005, Weather Derivative Valuation covers all the meteorological, statistical, financial and mathematical issues that arise in the pricing and risk management of weather derivatives. There are chapters on meteorological data and data cleaning, the modelling and pricing of single weather derivatives, the modelling and valuation of portfolios, the use of weather and seasonal forecasts in the pricing of weather derivatives, arbitrage pricing for weather derivatives, risk management, and the modelling of temperature, wind and precipitation. Specific issues covered in detail include the analysis of uncertainty in weather derivative pricing, time-series modelling of daily temperatures, the creation and use of probabilistic meteorological forecasts and the derivation of the weather derivative version of the Black-Scholes equation of mathematical finance. Written by consultants who work within the weather derivative industry, this book is packed with practical information and theoretical insight into the world of weather derivative pricing.
This is the first textbook designed to teach statistics to students in aviation courses. All examples and exercises are grounded in an aviation context, including flight instruction, air traffic control, airport management, and human factors. Structured in six parts, the book covers the key foundational topics in descriptive and inferential statistics, including hypothesis testing, confidence intervals, z and t tests, correlation, regression, ANOVA, and chi-square. In addition, this book promotes both procedural knowledge and conceptual understanding. Detailed, guided examples are presented from the perspective of conducting a research study. Each analysis technique is clearly explained, enabling readers to understand, carry out, and report results correctly. Students are further supported by a range of pedagogical features in each chapter, including objectives, a summary, and a vocabulary check. Digital supplements comprise downloadable data sets and short video lectures explaining key concepts. Instructors also have access to PPT slides and an instructor’s manual that consists of a test bank with multiple choice exams, exercises with data sets, and solutions. This is the ideal statistics textbook for aviation courses globally, especially in aviation statistics, research methods in aviation, human factors, and related areas.
How could Finance benefit from AI? How can AI techniques provide an edge? Moving well beyond simply speeding up computation, this book tackles AI for Finance from a range of perspectives, including those of business, technology, research, and students. Covering aspects such as algorithms, big data, and machine learning, it answers these and many other questions.
This third edition of Braun and Murdoch's bestselling textbook now includes discussion of the use and design principles of the tidyverse packages in R, including expanded coverage of ggplot2 and R Markdown. The expanded simulation chapter introduces the Box-Muller and Metropolis-Hastings algorithms. New examples and exercises have been added throughout. This is the only introduction you'll need to start programming in R, the computing standard for analyzing data. This book comes with real R code that teaches the standards of the language. Unlike other introductory books on the R system, this book emphasizes portable programming skills that apply to most computing languages, as well as techniques used to develop more complex projects. Solutions, datasets, and any errata are available from www.statprogr.science. Worked examples from real applications, hundreds of exercises, and downloadable code, datasets, and solutions make a complete package for anyone working in or learning practical data science.
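The blurb names two specific simulation algorithms; as a quick illustration of the first, here is a minimal Box-Muller sketch. It is not code from the book (which works in R); Python is used here purely for illustration.

```python
# Minimal Box-Muller sketch: transforms pairs of uniform draws
# into pairs of independent standard normal draws.
import numpy as np

rng = np.random.default_rng(42)
u1 = rng.uniform(size=10_000)
u2 = rng.uniform(size=10_000)

r = np.sqrt(-2.0 * np.log(u1))      # radius from the first uniform
z0 = r * np.cos(2.0 * np.pi * u2)   # two independent N(0, 1) variates
z1 = r * np.sin(2.0 * np.pi * u2)

print(z0.mean(), z0.std())          # should be close to 0 and 1
```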
The idea that simplicity matters in science is as old as science itself, with the much-cited example of Ockham's Razor, 'entia non sunt multiplicanda praeter necessitatem': entities are not to be multiplied beyond necessity. A problem with Ockham's razor is that nearly everybody seems to accept it, yet few are able to define its exact meaning or to make it operational in a non-arbitrary way. Drawing on a multidisciplinary group of philosophers, mathematicians, econometricians and economists, this 2002 monograph examines simplicity by asking six questions: What is meant by simplicity? How is simplicity measured? Is there an optimum trade-off between simplicity and goodness-of-fit? What is the relation between simplicity and empirical modelling? What is the relation between simplicity and prediction? What is the connection between simplicity and convenience? The book concludes with reflections on simplicity by Nobel Laureates in Economics.
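One widely used way to make the simplicity/goodness-of-fit trade-off operational (an illustration, not necessarily the book's own resolution) is an information criterion that penalizes the maximized likelihood $\hat{L}$ by the number of parameters $k$:

$$\mathrm{AIC} = -2 \ln \hat{L} + 2k, \qquad \mathrm{BIC} = -2 \ln \hat{L} + k \ln n,$$

so that, between models fitting the data equally well, the simpler model (smaller $k$) is preferred.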
- Accessible to a general audience with some background in statistics and computing
- Many examples and extended case studies
- Illustrations using R and RStudio
- A true blend of statistics and computer science, not just a grab bag of topics from each
This book describes the classical axiomatic theories of decision under uncertainty, as well as critiques thereof and alternative theories. It focuses on the meaning of probability, discussing some definitions and surveying their scope of applicability. The behavioral definition of subjective probability serves as a way to present the classical theories, culminating in Savage's theorem. The limitations of this result as a definition of probability lead to two directions - first, similar behavioral definitions of more general theories, such as non-additive probabilities and multiple priors, and second, cognitive derivations based on case-based techniques.
Price and quantity indices are important, much-used measuring instruments, and it is therefore necessary to have a good understanding of their properties. When it was published, this book was the first comprehensive text on index number theory since Irving Fisher's 1922 The Making of Index Numbers. The book covers intertemporal and interspatial comparisons; ratio- and difference-type measures; discrete and continuous time environments; and upper- and lower-level indices. Guided by economic insights, this book develops the instrumental or axiomatic approach. There is no role for behavioural assumptions. In addition to subject matter chapters, two entire chapters are devoted to the rich history of the subject.
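For orientation, the basic upper-level index formulas the theory is built around: with prices $p^0, p^1$ and quantities $q^0, q^1$ in the base and comparison periods,

$$P_L = \frac{\sum p^1 q^0}{\sum p^0 q^0}, \qquad P_P = \frac{\sum p^1 q^1}{\sum p^0 q^1}, \qquad P_F = \sqrt{P_L \, P_P},$$

the Laspeyres, Paasche, and Fisher price indices respectively.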
Do economics and statistics succeed in explaining human social behaviour? To answer this question, Leland Gerson Neuberg studies some pioneering controlled social experiments. Starting in the late 1960s, economists and statisticians sought to improve social policy formation with random assignment experiments such as those that provided income guarantees in the form of a negative income tax. This book explores anomalies in the conceptual basis of such experiments and in the foundations of statistics and economics more generally. Scientific inquiry always faces certain philosophical problems. Controlled experiments of human social behaviour, however, cannot avoid some methodological difficulties not evident in physical science experiments. Drawing upon several examples, the author argues that methodological anomalies prevent microeconomics and statistics from explaining human social behaviour as coherently as the physical sciences explain nature. He concludes that controlled social experiments are a frequently overrated tool for social policy improvement.
"It's the economy, stupid," as Democratic strategist James Carville
would say. After many years of study, Ray C. Fair has found that
the state of the economy has a dominant influence on national
elections. Just in time for the 2012 presidential election, this
new edition of his classic text, "Predicting Presidential Elections
and Other Things," provides us with a look into the likely future
of our nation's political landscape--but Fair doesn't stop there.
Meta-Regression Analysis in Economics and Business is the first text devoted to the meta-regression analysis (MRA) of economics and business research. The book provides a comprehensive guide to conducting systematic reviews of empirical economics and business research, identifying and explaining the best practices of MRA, and highlighting its problems and pitfalls. These statistical techniques are illustrated using actual data from four published meta-analyses of business and economic research: the effects of unions on productivity, the employment effects of the minimum wage, the value of a statistical life, and residential water demand elasticities. While meta-analysis in economics and business shares some features with meta-analysis in other disciplines, it faces its own particular challenges and types of research data. This volume guides new researchers from beginning to end, from the collection of research to the publication of their findings. This book will be of great interest to students and researchers in business, economics, marketing, management, and political science, as well as to policy makers.
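In its simplest form (a sketch of the generic setup, not the book's exact notation), a meta-regression explains the variation in reported effect estimates $b_i$ across studies using moderator variables $Z_{ik}$:

$$b_i = \beta_0 + \sum_{k} \beta_k Z_{ik} + \varepsilon_i,$$

typically estimated by weighted least squares with weights reflecting each estimate's precision (e.g. $1/\mathrm{SE}_i^2$).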
Social media has made charts, infographics and diagrams ubiquitous, and easier to share than ever. While such visualisations can better inform us, they can also deceive by displaying incomplete or inaccurate data, suggesting misleading patterns, or misinform simply by being poorly designed. Many of us are ill equipped to interpret the visuals that politicians, journalists, advertisers and even employers present each day, enabling bad actors to easily manipulate visuals to promote their own agendas. Public conversations are increasingly driven by numbers, and to make sense of them we must be able to decode and use visual information. By examining contemporary examples ranging from election-result infographics to global GDP maps and box-office record charts, How Charts Lie teaches us how to do just that.
In the future, as our society becomes older and older, an increasing number of people will be confronted with Alzheimer's disease. Some will suffer from the illness themselves; others will see parents, relatives, their spouse or a close friend afflicted by it. Even now, the psychological and financial burden caused by Alzheimer's disease is substantial, most of it borne by the patient and her family. Improving the situation for patients and their caregivers presents a challenge for societies and decision makers. Our work contributes to improving decision making concerning Alzheimer's disease. At a fundamental level, it addresses methodological aspects of the contingent valuation method and gives a holistic view of applying the contingent valuation method for use in policy. We show all stages of a contingent valuation study, beginning with the design, the choice of elicitation techniques and estimation methods for willingness-to-pay, the use of the results in a cost-benefit analysis, and finally, the policy implications resulting from our findings. We do this by evaluating three possible programs dealing with Alzheimer's disease. The intended audience of this book is health economists interested in methodological problems of contingent valuation studies, people involved in health care decision making, planning, and priority setting, as well as people interested in Alzheimer's disease. We would like to thank the many people and institutions who have provided their help with this project.
This book contains an accessible discussion of computationally intensive techniques and bootstrap methods, providing ways to improve the finite-sample performance of well-known asymptotic tests for regression models. It uses the linear regression model as a framework for introducing simulation-based tests to help perform econometric analyses.
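To make the idea concrete, here is a minimal sketch of one such simulation-based test: a pairs-bootstrap p-value for a regression coefficient. The code is illustrative only (Python rather than anything from the book), and the simulated data are hypothetical.

```python
# Minimal sketch: pairs-bootstrap p-value for H0: beta1 = 0 in y = b0 + b1*x + e.
# Illustrative only; the data-generating process is a hypothetical example.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 1.0 + 0.0 * x + rng.standard_t(df=3, size=n)   # H0 true, fat-tailed errors

def fit(x, y):
    """OLS slope and its classical standard error."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], se

b1, se1 = fit(x, y)
t_obs = b1 / se1                        # classical t statistic for H0: beta1 = 0

B = 999
t_boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)    # resample (x, y) pairs with replacement
    b1s, ses = fit(x[idx], y[idx])
    t_boot[b] = (b1s - b1) / ses        # studentized, centred at the sample estimate

p_boot = np.mean(np.abs(t_boot) >= np.abs(t_obs))   # symmetric bootstrap p-value
print(f"t = {t_obs:.2f}, bootstrap p = {p_boot:.3f}")
```

Using the bootstrap distribution of the studentized statistic, rather than the normal or t approximation, is what can deliver the finite-sample refinements the blurb alludes to.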
Learn by doing with this user-friendly introduction to time series data analysis in R. This book explores the intricacies of managing and cleaning time series data of different sizes, scales and granularity, data preparation for analysis and visualization, and different approaches to classical and machine learning time series modeling and forecasting. A range of pedagogical features support students, including end-of-chapter exercises, problems, quizzes and case studies. The case studies are designed to stretch the learner, introducing larger data sets, enhanced data management skills, and R packages and functions appropriate for real-world data analysis. On top of providing commented R programs and data sets, the book's companion website offers extra case studies, lecture slides, videos and exercise solutions. Accessible to those with a basic background in statistics and probability, this is an ideal hands-on text for undergraduate and graduate students, as well as researchers in data-rich disciplines.
Meaningful use of advanced Bayesian methods requires a good understanding of the fundamentals. This engaging book explains the ideas that underpin the construction and analysis of Bayesian models, with particular focus on computational methods and schemes. The unique features of the text are the extensive discussion of available software packages combined with a brief but complete and mathematically rigorous introduction to Bayesian inference. The text introduces Monte Carlo methods, Markov chain Monte Carlo methods, and Bayesian software, with additional material on model validation and comparison, transdimensional MCMC, and conditionally Gaussian models. The inclusion of problems makes the book suitable as a textbook for a first graduate-level course in Bayesian computation with a focus on Monte Carlo methods. The extensive discussion of Bayesian software - R/R-INLA, OpenBUGS, JAGS, STAN, and BayesX - makes it useful also for researchers and graduate students from beyond statistics.
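As a flavour of the Markov chain Monte Carlo material such a course covers, here is a minimal random-walk Metropolis sketch. It is illustrative only, not code from the book; the Python implementation and the standard normal target are assumptions made for the example.

```python
# Minimal random-walk Metropolis sketch for a standard normal target.
import numpy as np

rng = np.random.default_rng(1)

def log_target(theta):
    return -0.5 * theta**2              # log density of N(0, 1), up to a constant

n_draws, step = 10_000, 1.0
draws = np.empty(n_draws)
theta = 0.0
for i in range(n_draws):
    prop = theta + step * rng.normal()  # random-walk proposal
    # Accept with probability min(1, target(prop) / target(theta))
    if np.log(rng.uniform()) < log_target(prop) - log_target(theta):
        theta = prop
    draws[i] = theta                    # keep the current value if rejected

print(draws[1000:].mean(), draws[1000:].std())   # near 0 and 1 after burn-in
```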
"Family Spending" provides analysis of household expenditure broken down by age and income, household composition, socio-economic characteristics and geography. This report will be of interest to academics, policy makers, government and the general public. |
You may like...
- Strategic Performance Management and… by Ibrahim Osman, Abdel Latef Anouze, … (Hardcover): R8,730 / Discovery Miles 87 300
- Contemporary Perspectives in Data Mining… by Kenneth D. Lawrence, Ronald K. Klimberg (Hardcover): R2,544 / Discovery Miles 25 440
- Operations and Supply Chain Management by James Evans, David Collier (Hardcover)
- Quantitative statistical techniques by Swanepoel, Vivier, … (Paperback) (2): R718 / Discovery Miles 7 180