Explores the Origin of the Recent Banking Crisis and How to Preclude Future Crises
Shedding new light on the recent worldwide banking debacle, The Banking Crisis Handbook presents possible remedies and examines what should have been done before, during, and after the crisis. With contributions from well-known academics and professionals, the book contains exclusive new research that will help bank executives, risk management departments, and other financial professionals attain a clear picture of the banking crisis and prevent future banking collapses. The first part of the book explains how the crisis originated, discussing the role of subprime mortgages, shadow banks, ineffective risk management, poor financial regulation, and hedge funds in the collapse of financial systems. The second section examines how the crisis affected the global market as well as individual countries and regions, such as Asia and Greece. In the final part, the book explores short- and long-term solutions, including government intervention, financial regulation, efficient approaches to bank default risk, and methods for evaluating credit risk. It also considers when government intervention in financial markets can be ethically justified.
Economic history is the most quantitative branch of history, reflecting the interests and profiting from the techniques and concepts of economics. This essay, first published in 1977, provides an extensive contribution to quantitative historiography by delivering a critical guide to the sources of the numerical data of the period 1700 to 1850. This title will be of interest to students of history, finance and economics.
A Handbook of Statistical Analyses Using SPSS clearly describes how to conduct a range of univariate and multivariate statistical analyses using the latest version of the Statistical Package for the Social Sciences, SPSS 11. Each chapter addresses a different type of analytical procedure applied to one or more data sets, primarily from the social and behavioral sciences areas. Each chapter also contains exercises relating to the data sets introduced, providing readers with a means to develop both their SPSS and statistical skills. Model answers to the exercises are also provided. Readers can download all of the data sets from a companion Web site furnished by the authors.
This book focuses on the application of the partial hedging approach from modern mathematical finance to equity-linked life insurance contracts. It provides an accessible, up-to-date introduction to quantifying financial and insurance risks. The book also explains how to price innovative financial and insurance products from a partial hedging perspective. Each chapter presents the problem, the mathematical formulation, theoretical results, derivation details, numerical illustrations, and references to further reading.
Thijs ten Raa, author of the acclaimed text The Economics of Input-Output Analysis, now takes the reader to the forefront of the field. This volume collects and unifies his and his co-authors' research papers on national accounting, input-output coefficients, economic theory, dynamic models, stochastic analysis, and performance analysis. The research is driven by the task to analyze national economies. The final part of the book scrutinizes the emerging Asian economies in the light of international competition.
'A statistical national treasure' Jeremy Vine, BBC Radio 2 'Required reading for all politicians, journalists, medics and anyone who tries to influence people (or is influenced) by statistics. A tour de force' Popular Science Do busier hospitals have higher survival rates? How many trees are there on the planet? Why do old men have big ears? David Spiegelhalter reveals the answers to these and many other questions - questions that can only be addressed using statistical science. Statistics has played a leading role in our scientific understanding of the world for centuries, yet we are all familiar with the way statistical claims can be sensationalised, particularly in the media. In the age of big data, as data science becomes established as a discipline, a basic grasp of statistical literacy is more important than ever. In The Art of Statistics, David Spiegelhalter guides the reader through the essential principles we need in order to derive knowledge from data. Drawing on real world problems to introduce conceptual issues, he shows us how statistics can help us determine the luckiest passenger on the Titanic, whether serial killer Harold Shipman could have been caught earlier, and if screening for ovarian cancer is beneficial. 'Shines a light on how we can use the ever-growing deluge of data to improve our understanding of the world' Nature
Introduction to Statistics with SPSS offers an introduction to statistics that can be used before, during or after a course on statistics. Covering a wide range of terms and techniques, including simple and multiple regressions, this book guides the student to enter data from a simple research project into a computer, provide an adequate analysis of the data and present a report on the findings.
This short book introduces the main ideas of statistical inference in a way that is both user friendly and mathematically sound. Particular emphasis is placed on the common foundation of many models used in practice. In addition, the book focuses on the formulation of appropriate statistical models to study problems in business, economics, and the social sciences, as well as on how to interpret the results from statistical analyses. The book will be useful to students who are interested in rigorous applications of statistics to problems in business, economics and the social sciences, as well as students who have studied statistics in the past, but need a more solid grounding in statistical techniques to further their careers. Jacco Thijssen is professor of finance at the University of York, UK. He holds a PhD in mathematical economics from Tilburg University, Netherlands. His main research interests are in applications of optimal stopping theory, stochastic calculus, and game theory to problems in economics and finance. Professor Thijssen has earned several awards for his statistics teaching.
The purpose of this book is to introduce novice researchers to the tools of meta-analysis and meta-regression analysis and to summarize the state of the art for existing practitioners. Meta-regression analysis addresses the rising "Tower of Babel" that current economics and business research has become. Meta-analysis is the statistical analysis of previously published, or reported, research findings on a given hypothesis, empirical effect, phenomenon, or policy intervention. It is a systematic review of all the relevant scientific knowledge on a specific subject and is an essential part of the evidence-based practice movement in medicine, education and the social sciences. However, research in economics and business is often fundamentally different from what is found in the sciences and thereby requires a different method for its synthesis: meta-regression analysis. This book develops, summarizes, and applies these meta-analytic methods.
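The fixed-effect version of the meta-analysis described above reduces to an inverse-variance weighted average of the reported effect sizes. Here is a minimal sketch in Python using hypothetical effect sizes and standard errors (the book itself develops meta-regression models, which generalize this basic estimator):

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Inverse-variance weighted pooled effect (fixed-effect model)."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical effect sizes and standard errors from three studies
effects = [0.30, 0.10, 0.25]
std_errors = [0.10, 0.05, 0.20]
pooled, pooled_se = fixed_effect_meta(effects, std_errors)
```

Note how the second study, with the smallest standard error, dominates the pooled estimate: precision, not publication order, determines each study's weight.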
Prepares readers to analyze data and interpret statistical results using the increasingly popular R, more quickly than other texts, through the lessR extensions that remove the need to program. By introducing R through lessR, readers learn how to organize data for analysis, read the data into R, and produce output without first mastering programming. Readers can select the necessary procedure and change the relevant variables without programming. Quick Starts introduce readers to the concepts and commands reviewed in the chapters. Margin notes define, illustrate, and cross-reference the key concepts; when readers encounter a term previously discussed, the margin notes identify the page number of its initial introduction. Scenarios highlight the use of a specific analysis, followed by the corresponding R/lessR input and an interpretation of the resulting output. Numerous examples of output from psychology, business, education, and other social sciences demonstrate how to interpret results, and worked problems help readers test their understanding. The www.lessRstats.com website features the lessR program, the book's data sets in standard text and SPSS formats so readers can practice using R/lessR by working through the text examples and worked problems, PDF slides for each chapter, solutions to the book's worked problems, links to R/lessR videos to help readers better understand the program, and more. New to this edition:
- upgraded functionality and data visualizations of the lessR package, which is now aesthetically equal to the ggplot2 R standard
- new features that replace and extend previous content, such as aggregating data with pivot tables via a simple lessR function call
Professor Cheng-Few Lee ranks #1 based on his publications in the 26 core finance journals, and #163 based on publications in the 7 leading finance journals (source: "Most Prolific Authors in the Finance Literature: 1959-2008" by Jean L. Heck and Philip L. Cooley, Saint Joseph's University and Trinity University). This is an extensively revised edition of a popular statistics textbook for business and economics students. The first edition has been adopted by universities and colleges worldwide, including New York University, Carnegie Mellon University and UCLA. Designed for upper-level undergraduates, MBA and other graduate students, this book closely integrates various statistical techniques with concepts from business, economics and finance, and clearly demonstrates the power of statistical methods in the real world of business. While maintaining the essence of the first edition, the new edition places more emphasis on finance, economics and accounting concepts, with updated sample data. Students will find this book very accessible with its straightforward language, ample cases, examples, illustrations and real-life applications. The book is also useful for financial analysts and portfolio managers.
Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of stochastic processes with continuous and discontinuous paths. It also covers a wide selection of popular models in finance and insurance, from Black-Scholes to stochastic volatility to interest rate to dynamic mortality. Through its many numerical and graphical illustrations and simple, insightful examples, this book provides a deep understanding of the scope of Monte Carlo methods and their use in various financial situations. The intuitive presentation encourages readers to implement and further develop the simulation methods.
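The basic Monte Carlo pricing idea underlying the methods described above can be sketched briefly: simulate terminal asset prices under the Black-Scholes model, average the discounted payoffs, and compare against the closed-form price. This is a plain single-level estimator in Python with illustrative parameters, not the multilevel or statistical Romberg methods the book covers:

```python
import math
import random

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

def mc_call(S0, K, r, sigma, T, n_paths, seed=42):
    """Plain Monte Carlo estimate: average discounted payoffs over
    simulated terminal prices of geometric Brownian motion."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n_paths

exact = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
estimate = mc_call(100.0, 100.0, 0.05, 0.2, 1.0, n_paths=200_000)
```

The gap between `estimate` and `exact` shrinks at the familiar O(1/sqrt(n)) Monte Carlo rate, which is exactly the inefficiency that multilevel methods are designed to overcome.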
- Up-to-date with cutting-edge topics
- Suitable for professional quants and as a library reference for students of finance and financial mathematics
Factor Analysis and Dimension Reduction in R provides coverage, with worked examples, of a large number of dimension reduction procedures along with model performance metrics to compare them. Factor analysis in the form of principal components analysis (PCA) or principal factor analysis (PFA) is familiar to most social scientists. However, what is less familiar is understanding that factor analysis is a subset of the more general statistical family of dimension reduction methods. The social scientist's toolkit for factor analysis problems can be expanded to include the range of solutions this book presents. In addition to covering FA and PCA with orthogonal and oblique rotation, this book's coverage includes higher-order factor models, bifactor models, models based on binary and ordinal data, models based on mixed data, generalized low-rank models, cluster analysis with GLRM, models involving supplemental variables or observations, Bayesian factor analysis, regularized factor analysis, testing for unidimensionality, and prediction with factor scores. The second half of the book deals with other procedures for dimension reduction. These include coverage of kernel PCA, factor analysis with multidimensional scaling, locally linear embedding models, Laplacian eigenmaps, diffusion maps, force directed methods, t-distributed stochastic neighbor embedding, independent component analysis (ICA), dimensionality reduction via regression (DRR), non-negative matrix factorization (NNMF), Isomap, Autoencoder, uniform manifold approximation and projection (UMAP) models, neural network models, and longitudinal factor analysis models. In addition, a special chapter covers metrics for comparing model performance. 
Features of this book include:
- Numerous worked examples with replicable R code
- Explicit, comprehensive coverage of data assumptions
- Adaptation of factor methods to binary, ordinal, and categorical data
- Residual and outlier analysis
- Visualization of factor results
- Final chapters that treat the integration of factor analysis with neural network and time series methods
Presented in color, with R code and an introduction to R and RStudio, this book is suitable for graduate-level courses and optional modules for social scientists, and for courses on quantitative methods and multivariate statistics.
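To illustrate the dimension-reduction idea at the heart of PCA (the book itself works in R), here is a from-scratch sketch in Python for the two-dimensional case, using the closed-form eigen-decomposition of the 2x2 sample covariance matrix; the data are synthetic:

```python
import math
import random

def pca_2d(points):
    """First principal component of 2-D data via the closed-form
    eigen-decomposition of the 2x2 sample covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / (n - 1)            # var(x)
    c = sum((p[1] - my) ** 2 for p in points) / (n - 1)            # var(y)
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)   # cov(x, y)
    root = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    lam1 = (a + c) / 2.0 + root               # largest eigenvalue
    lam2 = (a + c) / 2.0 - root               # smallest eigenvalue
    theta = 0.5 * math.atan2(2.0 * b, a - c)  # angle of the first component
    axis = (math.cos(theta), math.sin(theta))
    explained = lam1 / (lam1 + lam2)          # variance share on the first axis
    return axis, explained

# Synthetic data concentrated along the line y = 2x
rng = random.Random(1)
points = []
for _ in range(500):
    x = rng.gauss(0.0, 1.0)
    points.append((x, 2.0 * x + rng.gauss(0.0, 0.1)))
axis, explained = pca_2d(points)
```

With nearly collinear data like this, the first component points along the line y = 2x and captures almost all the variance, which is why a single factor score can stand in for both variables.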
Econophysics applies the methodology of physics to the study of economics. However, whilst physicists have a good understanding of statistical physics, they may be unfamiliar with recent advances in statistical inference, including Bayesian and predictive methods. Equally, economists with a knowledge of probability may lack a background in statistical physics and agent-based models. Proposing a unified view for a dynamic probabilistic approach, this book is useful for advanced undergraduate and graduate students as well as researchers in physics, economics and finance. The book takes a finitary approach to the subject, discussing the essentials of applied probability and covering finite Markov chain theory and its applications to real systems. Each chapter ends with a summary, suggestions for further reading, and exercises with solutions at the end of the book.
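The finite Markov chain machinery mentioned above can be illustrated with a minimal sketch: approximating a chain's stationary distribution by power iteration. The two-state transition matrix below is a hypothetical example, not one from the book:

```python
def stationary(P, iters=200):
    """Approximate the stationary distribution of a finite Markov chain
    by repeatedly applying the transition matrix to a uniform start."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical two-state chain: row i gives transition probabilities from state i
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)   # long-run share of time spent in each state
```

For this chain the balance equation 0.1*pi[0] = 0.5*pi[1] gives the exact answer (5/6, 1/6), so the iteration can be checked by hand.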
The state-space approach provides a formal framework where any result or procedure developed for a basic model can be seamlessly applied to a standard formulation written in state-space form. Moreover, it can accommodate, with reasonable effort, nonstandard situations such as observation errors, aggregation constraints, or missing in-sample values. Exploring the advantages of this approach, State-Space Methods for Time Series Analysis: Theory, Applications and Software presents many computational procedures that can be applied to a previously specified linear model in state-space form. After discussing the formulation of the state-space model, the book illustrates the flexibility of the state-space representation and covers the main state estimation algorithms: filtering and smoothing. It then shows how to compute the Gaussian likelihood for unknown coefficients in the state-space matrices of a given model before introducing subspace methods and their application. It also discusses signal extraction, describes two algorithms to obtain the VARMAX matrices corresponding to any linear state-space model, and addresses several issues relating to the aggregation and disaggregation of time series. The book concludes with a cross-sectional extension to the classical state-space formulation in order to accommodate longitudinal or panel data. Missing data is a common occurrence here, and the book explains the imputation procedures necessary to treat missingness in both exogenous and endogenous variables.
Web Resource: The authors' E4 MATLAB (R) toolbox offers all the computational procedures, administrative and analytical functions, and related materials for time series analysis. This flexible, powerful, and free software tool enables readers to replicate the practical examples in the text and apply the procedures to their own work.
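The filtering step described above can be sketched for the simplest state-space form, the local level model. This is an illustrative Python version, not the authors' E4 MATLAB toolbox:

```python
def kalman_local_level(ys, q, r, m0=0.0, p0=1e6):
    """Kalman filter for the local level model:
       state: mu_t = mu_{t-1} + w_t,  w_t ~ N(0, q)
       obs:   y_t  = mu_t + v_t,      v_t ~ N(0, r)"""
    m, p = m0, p0
    filtered = []
    for y in ys:
        p = p + q              # predict: state variance grows by q
        k = p / (p + r)        # Kalman gain
        m = m + k * (y - m)    # update mean toward the observation
        p = (1.0 - k) * p      # update variance
        filtered.append(m)
    return filtered

# A constant signal: the filter should settle on its level
estimates = kalman_local_level([5.0] * 50, q=0.01, r=1.0)
```

The diffuse prior variance `p0=1e6` makes the first update trust the data almost completely, which is the usual way to initialize a filter when the starting level is unknown.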
A unique and comprehensive source of information, the International Yearbook of Industrial Statistics is the only international publication providing economists, planners, policy makers and business people with worldwide statistics on current performance and trends in the manufacturing sector. This is the first issue of the annual publication, which succeeds UNIDO's Handbook of Industrial Statistics and, at the same time, replaces the United Nations' Industrial Statistics Yearbook, volume I (General Industrial Statistics). Covering more than 120 countries/areas, the new version contains data which is internationally comparable and much more detailed than that supplied in previous publications. Information has been collected directly from national statistical sources and supplemented with estimates by UNIDO. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial performance. It provides data which can be used to analyse patterns of growth, structural change and industrial performance in individual industries. Data on employment trends, wages and other key indicators are also presented. Finally, the detailed information presented here enables the user to study aspects of industry that could not be examined with the aggregate data previously available.
Valuable software, realistic examples, clear writing, and fascinating topics help you master key spreadsheet and business analytics skills with SPREADSHEET MODELING AND DECISION ANALYSIS, 8E. You'll find everything you need to become proficient in today's most widely used business analytics techniques using Microsoft (R) Office Excel (R) 2016. Author Cliff Ragsdale, a respected innovator in business analytics, guides you through the skills you need, using the latest Excel (R) for Windows. You gain the confidence to apply what you learn to real business situations with step-by-step instructions and annotated screen images that make examples easy to follow. The World of Management Science sections further demonstrate how each topic applies to a real company. Each new edition includes extended trial licenses for Analytic Solver Platform and XLMiner, with powerful simulation and optimization tools for descriptive and prescriptive analytics and a full suite of tools for data mining in Excel.
Model a Wide Range of Count Time Series
Handbook of Discrete-Valued Time Series presents state-of-the-art methods for modeling time series of counts and incorporates frequentist and Bayesian approaches for discrete-valued spatio-temporal data and multivariate data. While the book focuses on time series of counts, some of the techniques discussed can be applied to other types of discrete-valued time series, such as binary-valued or categorical time series.
Explore a Balanced Treatment of Frequentist and Bayesian Perspectives
Accessible to graduate-level students who have taken an elementary class in statistical time series analysis, the book begins with the history and current methods for modeling and analyzing univariate count series. It next discusses diagnostics and applications before proceeding to binary and categorical time series. The book then provides a guide to modern methods for discrete-valued spatio-temporal data, illustrating how far modern applications have evolved from their roots. The book ends with a focus on multivariate and long-memory count series.
Get Guidance from Masters in the Field
Written by a cohesive group of distinguished contributors, this handbook provides a unified account of the diverse techniques available for observation- and parameter-driven models. It covers likelihood and approximate likelihood methods, estimating equations, simulation methods, and a Bayesian approach for model fitting.
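A concrete example of the count time series such a handbook treats is the Poisson INAR(1) model, in which each count is a binomially thinned copy of the previous count plus a Poisson innovation. A minimal simulation sketch in Python, with illustrative parameters:

```python
import math
import random

def simulate_inar1(alpha, lam, n, seed=7):
    """Simulate a Poisson INAR(1) count series:
       X_t = (binomial thinning of X_{t-1}, survival prob alpha)
             + Poisson(lam) innovation."""
    rng = random.Random(seed)

    def poisson(mean):
        # Knuth's algorithm for a Poisson draw
        limit, k, prod = math.exp(-mean), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= limit:
                return k
            k += 1

    x = poisson(lam / (1.0 - alpha))  # start near the stationary mean
    series = []
    for _ in range(n):
        survivors = sum(rng.random() < alpha for _ in range(x))  # thinning
        x = survivors + poisson(lam)
        series.append(x)
    return series

# Illustrative parameters: persistence 0.5, innovation mean 2.0
counts = simulate_inar1(0.5, 2.0, 2000)
```

Unlike a Gaussian AR(1), every value stays a nonnegative integer, and the stationary mean is lam / (1 - alpha), here 4; that integer-valued structure is exactly what the discrete-valued methods in the handbook are built to respect.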
Ranking of Multivariate Populations: A Permutation Approach with Applications presents a novel permutation-based nonparametric approach for ranking several multivariate populations. Using data collected from both experimental and observational studies, it covers some of the most useful designs widely applied in research and industry investigations, such as multivariate analysis of variance (MANOVA) and multivariate randomized complete block (MRCB) designs. The first section of the book introduces the topic of ranking multivariate populations by presenting the main theoretical ideas and an in-depth literature review. The second section discusses a large number of real case studies from four specific research areas: new product development in industry, perceived quality of the indoor environment, customer satisfaction, and cytological and histological analysis by image processing. A web-based nonparametric combination global ranking software is also described. Designed for practitioners and postgraduate students in statistics and the applied sciences, this application-oriented book offers a practical guide to the reliable global ranking of multivariate items, such as products, processes, and services, in terms of the performance of all investigated products/prototypes.
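The building block of such permutation methods is the basic permutation test. As a minimal sketch, here is a two-sample permutation test for a difference in means on hypothetical measurements in Python (the book's nonparametric combination ranking machinery builds on tests of this kind):

```python
import random

def permutation_test(x, y, n_perm=5000, seed=0):
    """Two-sample permutation test for a difference in means.
    Returns an approximate p-value based on random relabelings."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabeling of groups
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)             # add-one correction

# Hypothetical measurements from two prototypes
p_value = permutation_test([5.1, 5.3, 5.2, 5.4, 5.0, 5.2],
                           [6.1, 6.0, 6.2, 6.3, 5.9, 6.1])
```

Because the two hypothetical samples do not overlap at all, almost no relabeling reproduces the observed gap, so the p-value comes out very small; no distributional assumption is needed, which is the appeal of the permutation approach.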
This book is designed to introduce graduate students and researchers to the primary methods useful for approximating integrals. The emphasis is on those methods that have been found to be of practical use, and although the focus is on approximating higher-dimensional integrals, the lower-dimensional case is also covered. The book covers all the most useful approximation techniques so far discovered - the first time that all such techniques have been included in a single book at a level accessible to students. In particular, it includes a complete development of the material needed to construct the highly popular Markov chain Monte Carlo (MCMC) methods.
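The MCMC methods mentioned above can be illustrated with the simplest case: a random-walk Metropolis sampler targeting a standard normal density. A minimal Python sketch:

```python
import math
import random

def metropolis_normal(n_samples, proposal_sd=1.0, seed=3):
    """Random-walk Metropolis sampler targeting a standard normal density."""
    rng = random.Random(seed)
    log_target = lambda x: -0.5 * x * x   # log N(0,1) density, up to a constant
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, proposal_sd)
        # Accept with probability min(1, target(proposal) / target(x))
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

draws = metropolis_normal(50_000)
```

The sampler only ever evaluates the target density up to a normalizing constant, which is precisely why MCMC is so useful for the high-dimensional integrals this book addresses: the intractable constant cancels in the acceptance ratio.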
A thrilling behind-the-scenes exploration of how governments past and present have been led astray by bad data - and why it is so hard to measure things and to do it well. Our politicians make vital decisions and declarations every day that rely on official data. But should all statistics be trusted? In BAD DATA, House of Commons Library statistician Georgina Sturge draws back the curtain on how governments of the past and present have been led astray by figures littered with inconsistency, guesswork and uncertainty. Discover how a Hungarian businessman's bright idea caused half a million people to go missing from UK migration statistics. Find out why it's possible for two politicians to disagree over whether poverty has gone up or down, using the same official numbers, and for both to be right at the same time. And hear about how policies like ID cards, super-casinos and stopping ex-convicts from reoffending failed to live up to their promise because they were based on shaky data. With stories that range from the troubling to the empowering to the downright absurd, BAD DATA reveals secrets from the usually closed-off world of policy-making. It also suggests how - once we understand the human story behind the numbers - we can make more informed choices about who to trust, and when.
The series Contemporary Perspectives on Data Mining is composed of blind-refereed scholarly research on the methods and applications of data mining. The series is targeted both at the academic community and at business practitioners. Data mining seeks to discover knowledge from vast amounts of data with the use of statistical and mathematical techniques. The knowledge is extracted from this data by examining its patterns, whether they be associations of groups or things, predictions, sequential relationships between time-ordered events, or natural groupings. Data mining applications are seen in finance (banking, brokerage, insurance), marketing (customer relationships, retailing, logistics, travel), as well as in manufacturing, health care, fraud detection, homeland security, and law enforcement.