This handbook presents a systematic overview of the approaches to, the diversity of, and the problems involved in rating methodologies across disciplines. Historically, the purpose of ratings has been to achieve information transparency regarding a given body's activities, whether in finance, banking, or sports, for example. This book focuses on commonly used rating methods in three important fields: finance, sports, and the social sector. In the world of finance, investment decisions are largely shaped by how positively or negatively economies or financial instruments are rated. Ratings have thus become a basis of trust for investors. Similarly, sports evaluation and funding are largely based on core ratings. From local communities to groups of nations, public investment and funding likewise depend on how these bodies are continuously rated against expected performance targets. As such, ratings need to reflect the consensus of all stakeholders on selected aspects of the work and on how to evaluate their success. The public should also have the opportunity to participate in this process. The authors examine current rating approaches, selecting from a variety of proposals those closest to the public consensus, analyzing the rating models and summarizing the methods of their construction. This handbook offers a valuable reference guide for managers, analysts, economists, business informatics specialists, and researchers alike.
This textbook introduces readers to practical statistical issues by presenting them within the context of real-life economics and business situations. It presents the subject in a non-threatening manner, with an emphasis on concise, easily understandable explanations. It has been designed to be accessible and student-friendly and, as an added learning feature, provides all the relevant data required to complete the accompanying exercises and computing problems, which are presented at the end of each chapter. It also discusses index numbers and inequality indices in detail, since these are of particular importance to students and commonly omitted in textbooks. Throughout the text it is assumed that the student has no prior knowledge of statistics. It is aimed primarily at business and economics undergraduates, providing them with the basic statistical skills necessary for further study of their subject. However, students of other disciplines will also find it relevant.
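Since the blurb singles out inequality indices as a topic the textbook covers in detail, a minimal sketch of the most common one, the Gini coefficient, may be useful; the function name and sample data below are our own illustration, not taken from the textbook.

```python
# Gini coefficient from a list of incomes, via the standard formula
# G = (2 * sum_i i*x_i) / (n * sum x) - (n + 1) / n, with x sorted and i = 1..n.

def gini(incomes):
    """Gini inequality index: 0 for perfect equality, approaching 1 for extreme concentration."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))   # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))     # one person holds everything -> 0.75
```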
This volume presents selected peer-reviewed contributions from The International Work-Conference on Time Series, ITISE 2015, held in Granada, Spain, July 1-3, 2015. It discusses topics in time series analysis and forecasting, advanced methods and online learning in time series, high-dimensional and complex/big data time series as well as forecasting in real problems. The International Work-Conferences on Time Series (ITISE) provide a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of computer science, mathematics, statistics and econometrics.
This contributed volume applies spatial and space-time econometric methods to spatial interaction modeling. The first part of the book addresses general cutting-edge methodological questions in spatial econometric interaction modeling, which concern aspects such as coefficient interpretation, constrained estimation, and scale effects. The second part deals with technical solutions to particular estimation issues, such as intraregional flows, Bayesian PPML and VAR estimation. The final part presents a number of empirical applications, ranging from interregional tourism competition and domestic trade to space-time migration modeling and residential relocation.
This book focuses on the application of revenue management in the manufacturing industry. Though previous books have extensively studied the application of revenue management in the service industry, little attention has been paid to its application in manufacturing, despite the fact that applying it in this context can be highly profitable and instrumental to corporate success. With this work, the author demonstrates that the manufacturing industry also fulfills the prerequisites for the application of revenue management. The book includes a summary of empirical studies that effectively illustrate how revenue management is currently being applied across Europe and North America, and what the profit potential is.
This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematic treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportation, and consumers in general to decide on appropriate action. This book appeals to practitioners in government institutions, finance and business, macroeconomists, and other professionals who use economic data as well as academic researchers in time series analysis, seasonal adjustment methods, filtering and signal extraction. It is also useful for graduate and final-year undergraduate courses in econometrics and time series, assuming a good understanding of linear regression and matrix algebra, as well as ARIMA modelling.
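Of the trend-cycle techniques mentioned above, the moving average is the simplest; here is a minimal sketch of a centered moving-average trend estimate (the window length and toy data are our own illustration, not drawn from the book).

```python
# Centered moving average: the simplest linear filter for trend estimation.
# Returns None at the ends where the full window does not fit.

def centered_ma(series, window):
    half = window // 2
    out = []
    for t in range(len(series)):
        if t < half or t + half >= len(series):
            out.append(None)
        else:
            out.append(sum(series[t - half : t + half + 1]) / window)
    return out

# A linear trend plus an alternating component; the filter smooths the oscillation.
data = [t + (1 if t % 2 == 0 else -1) for t in range(10)]
print(centered_ma(data, 3))
```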
This book presents a comprehensive study of the adoption and diffusion of technology in developing countries from a historical perspective. Tracing the growth trajectories of the Indian economy in general and its manufacturing industry in particular, the book shows how marrying qualitative and quantitative methods helps to understand and explain many hidden dynamics of adoption and diffusion in the manufacturing industry. The various econometric methods used are intended to equip readers to judge the current diffusion pattern of new technologies in India and to simulate a desirable future pattern in view of the various pro-industrial growth policies.
This text presents modern developments in time series analysis and focuses on their application to economic problems. The book first introduces the fundamental concept of a stationary time series and the basic properties of covariance, investigating the structure and estimation of autoregressive-moving average (ARMA) models and their relations to the covariance structure. The book then moves on to non-stationary time series, highlighting its consequences for modeling and forecasting and presenting standard statistical tests and regressions. Next, the text discusses volatility models and their applications in the analysis of financial market data, focusing on generalized autoregressive conditional heteroskedastic (GARCH) models. The second part of the text is devoted to multivariate processes, such as vector autoregressive (VAR) models and structural vector autoregressive (SVAR) models, which have become the main tools in empirical macroeconomics. The text concludes with a discussion of co-integrated models and the Kalman Filter, which is being used with increasing frequency. Mathematically rigorous, yet application-oriented, this self-contained text will help students develop a deeper understanding of theory and better command of the models that are vital to the field. Assuming a basic knowledge of statistics and/or econometrics, this text is best suited for advanced undergraduate and beginning graduate students.
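To make the GARCH idea concrete, here is a minimal simulation sketch of a GARCH(1,1) process, in which today's conditional variance is omega + alpha * r^2 (yesterday's squared return) + beta * sigma^2 (yesterday's variance); the parameter values and seed are our own illustrative choices, not taken from the text.

```python
import math
import random

def simulate_garch(n, omega=0.1, alpha=0.1, beta=0.8, seed=42):
    """Simulate n returns from a GARCH(1,1) with Gaussian innovations."""
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)  # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = math.sqrt(sigma2) * rng.gauss(0, 1)
        returns.append(r)
        sigma2 = omega + alpha * r * r + beta * sigma2  # variance recursion
    return returns

rets = simulate_garch(5000)
# The sample variance should hover near the unconditional variance
# omega / (1 - alpha - beta) = 1.0, despite volatility clustering.
var = sum(r * r for r in rets) / len(rets)
print(round(var, 2))
```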
The primary objective of this book is to study some of the research topics in the area of analysis of complex surveys which have not yet been covered in any book. It discusses the analysis of categorical data using three models: a full model, a log-linear model and a logistic regression model. It is a valuable resource for survey statisticians and practitioners in the fields of sociology, biology, economics, psychology and other areas who have to use these procedures in their day-to-day work. It is also useful for courses on sampling and complex surveys at the upper-undergraduate and graduate levels. The importance of sample surveys today cannot be overstated. From voters' behaviour to fields such as industry, agriculture, economics, sociology, and psychology, investigators generally resort to survey sampling to obtain an assessment of the behaviour of the population they are interested in. Many large-scale sample surveys collect data using complex survey designs like multistage stratified cluster designs. The observations using these complex designs are not independently and identically distributed - an assumption on which the classical procedures of inference are based. This means that if classical tests are used for the analysis of such data, the inferences obtained will be inconsistent and often invalid. For this reason, many modified test procedures have been developed for this purpose over the last few decades.
This book presents a comprehensive study of multivariate time series with linear state space structure. The emphasis is put on both the clarity of the theoretical concepts and on efficient algorithms for implementing the theory. In particular, it investigates the relationship between VARMA and state space models, including canonical forms. It also highlights the relationship between Wiener-Kolmogorov and Kalman filtering both with an infinite and a finite sample. The strength of the book also lies in the numerous algorithms included for state space models that take advantage of the recursive nature of the models. Many of these algorithms can be made robust, fast, reliable and efficient. The book is accompanied by a MATLAB package called SSMMATLAB and a webpage presenting implemented algorithms with many examples and case studies. Though it lays a solid theoretical foundation, the book also focuses on practical application, and includes exercises in each chapter. It is intended for researchers and students working with linear state space models, and who are familiar with linear algebra and possess some knowledge of statistics.
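The Kalman filtering the book covers can be illustrated in its simplest scalar case, the local-level model x_t = x_{t-1} + w_t observed through y_t = x_t + v_t; the noise variances q and r and the toy observations below are our own illustrative choices (the book treats the general multivariate, state space case).

```python
# Scalar Kalman filter for the local-level model.
def kalman_filter(observations, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Return the sequence of filtered state estimates."""
    x, p = x0, p0
    estimates = []
    for y in observations:
        # Predict: the state variance grows by the process noise q.
        p = p + q
        # Update: blend prediction and observation via the Kalman gain.
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

ys = [1.2, 0.8, 1.1, 0.9, 1.0]
print(kalman_filter(ys)[-1])  # climbs from the prior 0.0 toward the level ~1.0
```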
Interest in the skew-normal and related families of distributions has grown enormously over recent years, as theory has advanced, challenges of data have grown, and computational tools have made substantial progress. This comprehensive treatment, blending theory and practice, will be the standard resource for statisticians and applied researchers. Assuming only basic knowledge of (non-measure-theoretic) probability and statistical inference, the book is accessible to the wide range of researchers who use statistical modelling techniques. Guiding readers through the main concepts and results, it covers both the probability and the statistics sides of the subject, in the univariate and multivariate settings. The theoretical development is complemented by numerous illustrations and applications to a range of fields including quantitative finance, medical statistics, environmental risk studies, and industrial and business efficiency. The author's freely available R package sn, available from CRAN, equips readers to put the methods into action with their own data.
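In its simplest standardized form, the skew-normal density is f(x) = 2 * phi(x) * Phi(alpha * x), where phi and Phi are the standard normal density and CDF; the sketch below uses only the Python standard library (the author's R package sn provides the full, production-quality implementation, and the function name here is our own).

```python
import math

def skew_normal_pdf(x, alpha):
    """Standard skew-normal density: 2 * phi(x) * Phi(alpha * x)."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(alpha * x / math.sqrt(2)))
    return 2 * phi * Phi

# alpha = 0 recovers the standard normal density at 0, ~0.3989;
# positive alpha shifts mass to the right.
print(skew_normal_pdf(0.0, 0.0))
print(skew_normal_pdf(1.0, 3.0), skew_normal_pdf(-1.0, 3.0))
```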
This volume presents some of the most influential papers published by Rabi N. Bhattacharya, along with commentaries from international experts, demonstrating his knowledge, insight, and influence in the field of probability and its applications. For more than three decades, Bhattacharya has made significant contributions in areas ranging from theoretical statistics via analytical probability theory, Markov processes, and random dynamics to applied topics in statistics, economics, and geophysics. Selected reprints of Bhattacharya's papers are divided into three sections: Modes of Approximation, Large Times for Markov Processes, and Stochastic Foundations in Applied Sciences. The accompanying articles by the contributing authors not only help to position his work in the context of other achievements, but also provide a unique assessment of the state of their individual fields, both historically and for the next generation of researchers. Rabi N. Bhattacharya: Selected Papers will be a valuable resource for young researchers entering the diverse areas of study to which Bhattacharya has contributed. Established researchers will also appreciate this work as an account of both past and present developments and challenges for the future.
This book highlights the latest research findings from the 46th International Meeting of the Italian Statistical Society (SIS) in Rome, during which both methodological and applied statistical research was discussed. This selection of fully peer-reviewed papers, originally presented at the meeting, addresses a broad range of topics, including the theory of statistical inference; data mining and multivariate statistical analysis; survey methodologies; analysis of social, demographic and health data; and economic statistics and econometrics.
In this monograph the authors give a systematic approach to the probabilistic properties of the fixed point equation X=AX+B. A probabilistic study of the stochastic recurrence equation X_t=A_tX_{t-1}+B_t for real- and matrix-valued random variables A_t, where (A_t,B_t) constitute an iid sequence, is provided. The classical theory for these equations, including the existence and uniqueness of a stationary solution, the tail behavior with special emphasis on power law behavior, moments and support, is presented. The authors collect recent asymptotic results on extremes, point processes, partial sums (central limit theory with special emphasis on infinite variance stable limit theory), large deviations, in the univariate and multivariate cases, and they further touch on the related topics of smoothing transforms, regularly varying sequences and random iterative systems. The text gives an introduction to the Kesten-Goldie theory for stochastic recurrence equations of the type X_t=A_tX_{t-1}+B_t. It provides the classical results of Kesten, Goldie, Guivarc'h, and others, and gives an overview of recent results on the topic. It presents the state-of-the-art results in the field of affine stochastic recurrence equations and shows relations with non-affine recursions and multivariate regular variation.
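A toy simulation makes the recurrence X_t = A_t X_{t-1} + B_t tangible: with the illustrative uniform distributions chosen below (our own, not from the monograph), E[log A] < 0, so a stationary solution exists, while P(A > 1) > 0, so Kesten-Goldie theory predicts a power-law (heavy) tail.

```python
import random

def simulate_recurrence(n, seed=0):
    """Iterate X_t = A_t * X_{t-1} + B_t with iid (A_t, B_t)."""
    rng = random.Random(seed)
    x = 0.0
    path = []
    for _ in range(n):
        a = rng.uniform(0.0, 1.5)   # E[log A] = log(1.5) - 1 < 0, but A can exceed 1
        b = rng.uniform(0.0, 1.0)
        x = a * x + b
        path.append(x)
    return path

path = simulate_recurrence(100_000)
tail = sorted(path)
# Heavy tail: the sample maximum dwarfs the median of the stationary path.
print(tail[len(tail) // 2], tail[-1])
```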
The revised Fourth Edition of this popular textbook is redesigned with Excel 2016 to encourage business students to develop competitive advantages for use in their future careers as decision makers. Students learn to build models using logic and experience, produce statistics using Excel 2016 with shortcuts, and translate results into implications for decision makers. The textbook features new examples and assignments on global markets, including cases featuring Chipotle and Costco. A number of examples focus on business in emerging global markets with particular emphasis on emerging markets in Latin America, China, and India. Results are linked to implications for decision making with sensitivity analyses to illustrate how alternate scenarios can be compared. The author emphasises communicating results effectively in plain English and with screenshots and compelling graphics in the form of memos and PowerPoints. Chapters include screenshots to make it easy to conduct analyses in Excel 2016. PivotTables and PivotCharts, used frequently in business, are introduced from the start. The Fourth Edition features Monte Carlo simulation in four chapters, as a tool to illustrate the range of possible outcomes from decision makers' assumptions and underlying uncertainties. Model building with regression is presented as a process, adding levels of sophistication, with chapters on multicollinearity and remedies, forecasting and model validation, auto-correlation and remedies, indicator variables to represent segment differences, and seasonality, structural shifts or shocks in time series models. Special applications in market segmentation and portfolio analysis are offered, and an introduction to conjoint analysis is included. Nonlinear models are motivated with arguments of diminishing or increasing marginal response.
This book covers all the topics found in introductory descriptive statistics courses, including simple linear regression and time series analysis, the fundamentals of inferential statistics (probability theory, random sampling and estimation theory), and inferential statistics itself (confidence intervals, testing). Each chapter starts with the necessary theoretical background, which is followed by a variety of examples. The core examples are based on the content of the respective chapter, while the advanced examples, designed to deepen students' knowledge, also draw on information and material from previous chapters. The enhanced online version helps students grasp the complexity and the practical relevance of statistical analysis through interactive examples and is suitable for undergraduate and graduate students taking their first statistics courses, as well as for undergraduate students in non-mathematical fields, e.g. economics, the social sciences etc.
This book presents a systematic overview of cutting-edge research in the field of parametric modeling of personal income and wealth distribution, which allows one to represent how income/wealth is distributed within a given population. The estimated parameters may be used to gain insights into the causes of the evolution of income/wealth distribution over time, or to interpret the differences between distributions across countries. Moreover, once a given parametric model has been fitted to a data set, one can straightforwardly compute inequality and poverty measures. Finally, estimated parameters may be used in empirical modeling of the impact of macroeconomic conditions on the evolution of personal income/wealth distribution. In reviewing the state of the art in the field, the authors provide a thorough discussion of parametric models belonging to the "κ-generalized" family, a new and fruitful set of statistical models for the size distribution of income and wealth that they have developed over several years of collaborative and multidisciplinary research. This book will be of interest to all who share the belief that problems of income and wealth distribution merit detailed conceptual and methodological attention.
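The workflow the blurb describes, fit a parametric model, then read inequality measures off the parameters, can be sketched with the classical lognormal benchmark (simpler than the book's κ-generalized family), whose Gini index has the closed form 2 * Phi(sigma / sqrt(2)) - 1; the sample incomes below are made up for illustration.

```python
import math

def fit_lognormal(incomes):
    """Maximum-likelihood fit of a lognormal: mean and std of log income."""
    logs = [math.log(x) for x in incomes]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / len(logs))
    return mu, sigma

def lognormal_gini(sigma):
    """Closed-form Gini of a lognormal: 2 * Phi(sigma / sqrt(2)) - 1."""
    Phi = 0.5 * (1 + math.erf((sigma / math.sqrt(2)) / math.sqrt(2)))
    return 2 * Phi - 1

incomes = [800, 1200, 1500, 2000, 3500, 5000, 12000]  # made-up sample
mu, sigma = fit_lognormal(incomes)
print(round(lognormal_gini(sigma), 3))
```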
This book is the outcome of the CIMPA School on Statistical Methods and Applications in Insurance and Finance, held in Marrakech and Kelaat M'gouna (Morocco) in April 2013. It presents two lectures and seven refereed papers from the school, offering the reader important insights into key topics. The first of the lectures, by Frederic Viens, addresses risk management via hedging in discrete and continuous time, while the second, by Boualem Djehiche, reviews statistical estimation methods applied to life and disability insurance. The refereed papers offer diverse perspectives and extensive discussions on subjects including optimal control, financial modeling using stochastic differential equations, pricing and hedging of financial derivatives, and sensitivity analysis. Each chapter of the volume includes a comprehensive bibliography to promote further research.
An Introduction to Machine Learning in Finance, With Mathematical Background, Data Visualization, and R

Nonparametric function estimation is an important part of machine learning, which is becoming increasingly important in quantitative finance. Nonparametric Finance provides graduate students and finance professionals with a foundation in nonparametric function estimation and the underlying mathematics. Combining practical applications, mathematically rigorous presentation, and statistical data analysis into a single volume, this book presents detailed instruction in discrete chapters that allow readers to dip in as needed without reading from beginning to end. Coverage includes statistical finance, risk management, portfolio management, and securities pricing to provide a practical knowledge base, and the introductory chapter introduces basic finance concepts for readers with a strictly mathematical background. Economic significance is emphasized over statistical significance throughout, and R code is provided to help readers reproduce the research, computations, and figures being discussed. Strong graphical content clarifies the methods and demonstrates essential visualization techniques, while deep mathematical and statistical insight backs up practical applications.
Written for the leading edge of finance, Nonparametric Finance:
- Introduces basic statistical finance concepts, including univariate and multivariate data analysis, time series analysis, and prediction
- Provides risk management guidance through volatility prediction, quantiles, and value-at-risk
- Examines portfolio theory, performance measurement, Markowitz portfolios, dynamic portfolio selection, and more
- Discusses fundamental theorems of asset pricing, Black-Scholes pricing and hedging, quadratic pricing and hedging, option portfolios, interest rate derivatives, and other asset pricing principles
- Provides supplementary R code and numerous graphics to reinforce complex content

Nonparametric function estimation has received little attention in the context of risk management and option pricing, despite its useful applications and benefits. This book provides the essential background and practical knowledge needed to take full advantage of these little-used methods, and turn them into real-world advantage. Jussi Klemela, PhD, is Adjunct Professor at the University of Oulu. His research interests include nonparametric function estimation, density estimation, and data visualization. He is the author of Smoothing of Multivariate Data: Density Estimation and Visualization and Multivariate Nonparametric Regression and Visualization: With R and Applications to Finance.
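The prototypical nonparametric function estimator is Nadaraya-Watson kernel regression, a locally weighted average of the responses; the book works in R, so this Python sketch with a Gaussian kernel and an illustrative bandwidth is our own.

```python
import math

def nadaraya_watson(x0, xs, ys, bandwidth):
    """Kernel-weighted average of ys around x0 (Gaussian kernel)."""
    weights = [math.exp(-(((x0 - x) / bandwidth) ** 2) / 2) for x in xs]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, ys)) / total

# Noisy observations of roughly y = x; the estimator smooths them locally.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.1, 2.9, 4.2]
print(nadaraya_watson(2.0, xs, ys, 0.5))
```

A small bandwidth tracks the data closely; a large one averages over more neighbors and flattens the fit, the usual bias-variance trade-off in nonparametric estimation.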
Institutions are the formal or informal 'rules of the game' that facilitate economic, social, and political interactions. These include such things as legal rules, property rights, constitutions, political structures, and norms and customs. The main theoretical insights from Austrian economics regarding private property rights and prices, entrepreneurship, and spontaneous order mechanisms play a key role in advancing institutional economics. The Austrian economics framework provides an understanding for which institutions matter for growth, how they matter, and how they emerge and can change over time. Specifically, Austrians have contributed significantly to the areas of institutional stickiness and informal institutions, self-governance and self-enforcing contracts, institutional entrepreneurship, and the political infrastructure for development.
One of the major problems of macroeconomic theory is the way in which people exchange goods in decentralized market economies. There are major disagreements among macroeconomists regarding the tools available to influence desired outcomes. Since mainstream efficient market theory fails to provide an internally coherent framework, there is a need for an alternative theory. The book provides an innovative approach to the analysis of agent-based models populated by heterogeneous, interacting agents in the field of financial fragility. The text is divided into two parts; the first presents analytical developments in stochastic aggregation and macro-dynamic inference methods. The second part introduces macroeconomic models of financial fragility for complex systems populated by heterogeneous, interacting agents. The concepts of financial fragility and macroeconomic dynamics are explained in detail in separate chapters. The statistical physics approach is applied to explain theories of macroeconomic modelling and inference.
This book:
- Offers a practical introduction to regression modeling with spatial and spatial-temporal data, relevant to research and teaching in the social and economic sciences
- Focuses on a few key datasets and on data analysis using the open source software WinBUGS, R, and GeoDa
- Provides data and programming code to allow users to undertake their own analyses
- Ends each chapter with a set of short exercises and questions for further study
Originally published in 1954, on behalf of the National Institute of Economic and Social Research, this book presents a general review of British economic statistics in relation to the uses made of them for policy purposes. The text begins with an examination, in general terms, of the ways in which statistics can help in guiding or assessing policy, covering housing, coal, the development areas, agricultural price-fixing, the balance of external payments and the balance of the economy. The problems of statistical application are then separately discussed under the headings of quality, presentation and availability, and organization. A full bibliography and reference table of principal British economic statistics are also included. This book will be of value to anyone with an interest in British economic history and statistics.