Welcome to Loot.co.za!
This book presents a critical review of the empirical literature that studies the efficiency of the forward and futures markets for foreign exchange. It provides a useful foundation for research in developing quantitative measures of risk and expected return in international finance.
This book contains the most complete set of estimates of Chinese national income and its components based on the System of National Accounts. It points out some fundamental issues concerning the estimation of China's national income and is intended for students in the field of China studies around the world.
Originally published in 1984. This book brings together a reasonably complete set of results regarding the use of Constraint Item estimation procedures under the assumption of accurate specification. The analysis covers the case of all explanatory variables being non-stochastic as well as the case of identified simultaneous equations, with error terms known and unknown. Particular emphasis is given to the derivation of criteria for choosing the Constraint Item. Part 1 looks at the best CI estimators and Part 2 examines equation by equation estimation, considering forecasting accuracy.
There is no book currently available that gives a comprehensive treatment of the design, construction, and use of index numbers, yet there is a pressing need for one in view of the increasing and more sophisticated employment of index numbers across the whole range of applied economics, and specifically in discussions of macroeconomic policy. In this book, R. G. D. Allen meets this need in simple and consistent terms and with comprehensive coverage. The text begins with an elementary survey of the index-number problem before turning to more detailed treatments of the theory and practice of index numbers. The binary case, in which one time period is compared with another, is developed first and illustrated with numerous examples. This prepares the ground for the central part of the text on runs of index numbers. Particular attention is paid both to fixed-weighted and to chain forms, as used in a wide range of published index numbers taken mainly from British official sources. The book then deals with some further problems in the construction of index numbers, problems which are both troublesome and largely unresolved; these include the use of sampling techniques in index-number design and the theoretical and practical treatment of quality changes. It also devotes attention to a number of detailed and specific applications of index-number techniques, ranging from national-income accounting, through the measurement of inequality of incomes and international comparisons of real incomes, to the use of index numbers of stock-market prices. Aimed primarily at students of economics, whatever their age and range of interests, this work will also be of use to those who handle index numbers professionally. R. G. D. Allen (1906-1983) was Professor Emeritus at the University of London. He also served as President of the Royal Statistical Society and as Treasurer of the British Academy, where he was a Fellow.
He is the author of "Basic Mathematics," "Mathematical Analysis for Economists," "Mathematical Economics" and "Macroeconomic Theory."
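The binary comparison at the heart of the index-number problem described above can be sketched in a few lines. This is a minimal Python illustration of the fixed-weight (Laspeyres) and current-weight (Paasche) price-index forms; all prices and quantities below are invented purely for illustration and do not come from the book.

```python
# Toy two-good basket observed in a base period (0) and a current period (1).
# All numbers are invented for illustration.
p0 = [2.0, 5.0]   # base-period prices
q0 = [10, 4]      # base-period quantities
p1 = [3.0, 6.0]   # current-period prices
q1 = [12, 3]      # current-period quantities

def laspeyres(p0, p1, q0):
    # Fixed-weight index: current prices valued at base-period quantities.
    return sum(p * q for p, q in zip(p1, q0)) / sum(p * q for p, q in zip(p0, q0))

def paasche(p0, p1, q1):
    # Current-weight index: both periods valued at current-period quantities.
    return sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p0, q1))

print(round(laspeyres(p0, p1, q0), 4))  # 54/40 = 1.35
print(round(paasche(p0, p1, q1), 4))    # 54/39 ~ 1.3846
```

The two forms bracket the "true" price change: the Laspeyres form holds the basket fixed at the base period, the Paasche form at the current period, which is why runs of published index numbers must choose between fixed-weighted and chain constructions.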
Accessible to a general audience with some background in statistics and computing; many examples and extended case studies; illustrations using R and RStudio; a true blend of statistics and computer science, not just a grab bag of topics from each.
The first part of this book discusses institutions and mechanisms of algorithmic trading, market microstructure, high-frequency data and stylized facts, time and event aggregation, order book dynamics, trading strategies and algorithms, transaction costs, market impact and execution strategies, and risk analysis and management. The second part covers market impact models, network models, multi-asset trading, machine learning techniques, and nonlinear filtering. The third part discusses electronic market making, liquidity, systemic risk, and recent developments and debates on the subject.
First published in 1995. In the current, increasingly global economy, investors require quick access to a wide range of financial and investment-related statistics to assist them in better understanding the macroeconomic environment in which their investments will operate. The International Financial Statistics Locator eliminates the need to search through a number of sources to identify those that contain much of this statistical information. It is intended for use by librarians, students, individual investors, and the business community, and provides access to twenty-two resources, print and electronic, that contain the current and historical financial and economic statistics investors need to appreciate and profit from evolving and established international markets.
Factor Analysis and Dimension Reduction in R provides coverage, with worked examples, of a large number of dimension reduction procedures along with model performance metrics to compare them. Factor analysis in the form of principal components analysis (PCA) or principal factor analysis (PFA) is familiar to most social scientists. However, what is less familiar is understanding that factor analysis is a subset of the more general statistical family of dimension reduction methods. The social scientist's toolkit for factor analysis problems can be expanded to include the range of solutions this book presents. In addition to covering FA and PCA with orthogonal and oblique rotation, this book's coverage includes higher-order factor models, bifactor models, models based on binary and ordinal data, models based on mixed data, generalized low-rank models, cluster analysis with GLRM, models involving supplemental variables or observations, Bayesian factor analysis, regularized factor analysis, testing for unidimensionality, and prediction with factor scores. The second half of the book deals with other procedures for dimension reduction. These include coverage of kernel PCA, factor analysis with multidimensional scaling, locally linear embedding models, Laplacian eigenmaps, diffusion maps, force directed methods, t-distributed stochastic neighbor embedding, independent component analysis (ICA), dimensionality reduction via regression (DRR), non-negative matrix factorization (NNMF), Isomap, Autoencoder, uniform manifold approximation and projection (UMAP) models, neural network models, and longitudinal factor analysis models. In addition, a special chapter covers metrics for comparing model performance. 
Features of this book include: numerous worked examples with replicable R code; explicit, comprehensive coverage of data assumptions; adaptation of factor methods to binary, ordinal, and categorical data; residual and outlier analysis; visualization of factor results; and final chapters that treat the integration of factor analysis with neural network and time series methods. Presented in color, with R code and an introduction to R and RStudio, this book will be suitable for graduate-level and optional-module courses for social scientists, and for courses on quantitative methods and multivariate statistics.
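The principal components analysis that anchors the dimension-reduction family described above can be sketched very compactly. The book itself works in R; this is a minimal Python equivalent, with invented data, that computes PCA via the singular value decomposition of a centered data matrix.

```python
import numpy as np

# Minimal PCA sketch: invented data, 100 observations of 5 variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

Xc = X - X.mean(axis=0)                      # center each column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                       # project onto first two components
explained = (s ** 2) / (s ** 2).sum()        # variance share per component

print(scores.shape)                          # (100, 2)
# explained is sorted in decreasing order and sums to 1.0
```

The same scores-and-loadings structure underlies the fancier methods the book covers (kernel PCA, UMAP, autoencoders); they differ mainly in how the low-dimensional coordinates are obtained.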
The most authoritative and up-to-date core econometrics textbook available. Econometrics is the quantitative language of economic theory, analysis, and empirical work, and it has become a cornerstone of graduate economics programs. Econometrics provides graduate and PhD students with an essential introduction to this foundational subject in economics and serves as an invaluable reference for researchers and practitioners. This comprehensive textbook teaches fundamental concepts, emphasizes modern, real-world applications, and gives students an intuitive understanding of econometrics. It covers the full breadth of econometric theory and methods with mathematical rigor while emphasizing intuitive explanations accessible to students of all backgrounds; draws on integrated, research-level datasets, provided on an accompanying website; discusses linear econometrics, time series, panel data, nonparametric methods, nonlinear econometric models, and modern machine learning; features hundreds of exercises that enable students to learn by doing; and includes in-depth appendices on matrix algebra and useful inequalities, along with a wealth of real-world examples. It can serve as a core textbook for a first-year PhD course in econometrics and as a follow-up to Bruce E. Hansen's Probability and Statistics for Economists.
Business students need the ability to think statistically about how to deal with uncertainty and its effect on decision-making in business and management. Traditional statistics courses and textbooks tend to focus on probability, mathematical detail, and heavy computation, and thus fail to meet the needs of future managers. Statistical Thinking in Business, Second Edition responds to the growing recognition that we must change the way business statistics is taught. It shows how statistics is important in all aspects of business and equips students with the skills they need to make sensible use of data and other information. The authors take an interactive, scenario-based approach and use almost no mathematical formulas, opting to use Excel for the technical work. This allows them to focus on using statistics to aid decision-making rather than on how to perform routine calculations. New in the second edition: a completely revised chapter on forecasting; rearrangement of the material on data presentation, with the inclusion of histograms and cumulative line plots; a more thorough discussion of the analysis of attribute data; coverage of variable selection and model building in multiple regression; end-of-chapter summaries; more end-of-chapter problems; and a variety of case studies throughout the book. The second edition also comes with a wealth of ancillary materials provided on downloadable resources packaged with the book. These include automatically marked multiple-choice questions, answers to questions in the text, data sets, Excel experiments and demonstrations, an introduction to Excel, and the StiBstat add-in for stem-and-leaf plots, box plots, distribution plots, control charts, and summary statistics.
Key Topics in Clinical Research aims to provide a short, clear, highlighted reference to guide trainees and trainers through research and audit projects, from first idea, through to data collection and statistical analysis, to presentation and publication. This book is also designed to assist trainees in preparing for their specialty examinations by providing comprehensive, concise, easily accessible and easily understandable information on all aspects of clinical research and audit.
Since most datasets contain a number of variables, multivariate methods are helpful in answering a variety of research questions. Accessible to students and researchers without a substantial background in statistics or mathematics, Essentials of Multivariate Data Analysis explains the usefulness of multivariate methods in applied research. Unlike most books on multivariate methods, this one makes straightforward analyses easy to perform for those who are unfamiliar with advanced mathematical formulae. An easily understood dataset is used throughout to illustrate the techniques. The accompanying add-in for Microsoft Excel can be used to carry out the analyses in the text. The dataset and Excel add-in are available for download on the book's CRC Press web page. Providing a firm foundation in the most commonly used multivariate techniques, this text helps readers choose the appropriate method, learn how to apply it, and understand how to interpret the results. It prepares them for more complex analyses using software such as Minitab, R, SAS, SPSS, and Stata.
Informed decision-making rests on three important elements: intuition, trust, and analytics. Intuition is based on experiential learning, and recent research has shown that those who rely on their "gut feelings" may do better than those who don't. Analytics, however, are important in a data-driven environment to further inform decision-making. The third element, trust, is critical for knowledge sharing to take place. These three elements (intuition, analytics, and trust) make a perfect combination for decision-making. This book gathers leading researchers who explore the role of these three elements in the process of decision-making.
This book focuses on the application of the partial hedging approach from modern mathematical finance to equity-linked life insurance contracts. It provides an accessible, up-to-date introduction to quantifying financial and insurance risks. The book also explains how to price innovative financial and insurance products from partial hedging perspectives. Each chapter presents the problem, the mathematical formulation, theoretical results, derivation details, numerical illustrations, and references to further reading.
Customer and Business Analytics: Applied Data Mining for Business Decision Making Using R explains and demonstrates, via the accompanying open-source software, how advanced analytical tools can address various business problems. It also gives insight into some of the challenges faced when deploying these tools. Extensively classroom-tested, the text is ideal for students in customer and business analytics or applied data mining as well as professionals in small- to medium-sized organizations. The book offers an intuitive understanding of how different analytics algorithms work. Where necessary, the authors explain the underlying mathematics in an accessible manner. Each technique presented includes a detailed tutorial that enables hands-on experience with real data. The authors also discuss issues often encountered in applied data mining projects and present the CRISP-DM process model as a practical framework for organizing these projects. Showing how data mining can improve the performance of organizations, this book and its R-based software provide the skills and tools needed to successfully develop advanced analytics capabilities.
Written in a highly accessible style, A Factor Model Approach to Derivative Pricing lays a clear and structured foundation for the pricing of derivative securities based upon simple factor-model-related absence-of-arbitrage ideas. This unique and unifying approach provides for a broad treatment of topics and models, including equity, interest-rate, and credit derivatives, as well as hedging and tree-based computational methods, but without reliance on the heavy prerequisites that often accompany such topics. Key features: a single fundamental absence-of-arbitrage relationship based on factor models is used to motivate all the results in the book; a structured three-step procedure guides the derivation of absence-of-arbitrage equations and illuminates core underlying concepts; Brownian motion and Poisson process driven models are treated together, allowing for a broad and cohesive presentation of topics; and the final chapter provides a new approach to risk-neutral pricing that introduces the topic as a seamless and natural extension of the factor model approach. Whether the book is used as a text for an intermediate-level course in derivatives, or by researchers and practitioners seeking a better understanding of the fundamental ideas that underlie derivative pricing, readers will appreciate its ability to unify many disparate topics and models under a single conceptual theme. James A. Primbs is an Associate Professor of Finance at the Mihaylo College of Business and Economics at California State University, Fullerton.
Originally published in 1987. This collection of original papers deals with various issues of specification in the context of the linear statistical model. The volume honours the early econometric work of Donald Cochrane, late Dean of Economics and Politics at Monash University in Australia. The chapters focus on problems associated with autocorrelation of the error term in the linear regression model and include appraisals of early work on this topic by Cochrane and Orcutt. The book includes an extensive survey of autocorrelation tests; some exact finite-sample tests; and some issues in preliminary test estimation. A wide range of other specification issues is discussed, including the implications of random regressors for Bayesian prediction; modelling with joint conditional probability functions; and results from duality theory. There is a major survey chapter dealing with specification tests for non-nested models, and some of the applications discussed by the contributors deal with the British National Accounts and with Australian financial and housing markets.
Have configurations of labour-management practices become embedded in the British economy? Did the dramatic decline in trade union representation in the 1980s continue throughout the 1990s, leaving more employees without a voice? Were the vestiges of union organization at the workplace a hollow shell? These and other contemporary issues of employee relations are addressed in this report. The book reports the results from the series of workplace surveys conducted by the Department of Trade and Industry, the Economic and Social Research Council, the Advisory, Conciliation and Arbitration Service, and the Policy Studies Institute. Its focus is on change, captured by gathering together the enormous bank of data from all four of the large-scale and highly respected surveys, and plotting trends from 1980 to 1999. In addition, a special panel of workplaces, surveyed in both 1990 and 1998, reveals the complex processes of change. Comprehensive in scope, the results are statistically reliable and reveal the nature and extent of change in all bar the smallest British workplaces.
This book provides a broad, mature, and systematic introduction to current financial econometric models and their applications to the modeling and prediction of financial time series data. It utilizes real-world examples and real financial data throughout to apply the models and methods described. The author begins with the basic characteristics of financial time series data before covering three main topics: analysis and application of univariate financial time series; the return series of multiple assets; and Bayesian inference in finance methods. Key features of the new edition include additional coverage of modern-day topics such as arbitrage, pair trading, realized volatility, and credit risk modeling; a smooth transition from S-Plus to R; and expanded empirical financial data sets. The overall objective of the book is to provide some knowledge of financial time series, introduce some statistical tools useful for analyzing these series, and give readers experience in the financial applications of various econometric methods.
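Two of the basic building blocks named above, return series and realized volatility, can be sketched briefly. This minimal Python example (the book itself works in R and S-Plus) computes log returns and a rolling volatility estimate on a simulated price path; the path, window length, and annualization factor are illustrative assumptions, not taken from the book.

```python
import numpy as np

# Simulate a toy daily price path (invented, for illustration only).
rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=250)))

# Log returns: r_t = ln(P_t / P_{t-1})
log_returns = np.diff(np.log(prices))

# Rolling sample volatility over a ~one-month window, annualized
# with the common sqrt(252) trading-day convention.
window = 21
rolling_vol = np.array([log_returns[i - window:i].std(ddof=1)
                        for i in range(window, len(log_returns) + 1)])
annualized = rolling_vol * np.sqrt(252)

print(log_returns.shape)   # (249,)
print(annualized.shape)    # (229,)
```

Realized-volatility estimators in the literature refine this idea by summing squared intraday returns rather than using a rolling daily sample standard deviation.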
Human behavior often violates the predictions of rational choice theory. This realization has caused many social psychologists and experimental economists to attempt to develop an experimentally based variant of game theory as an alternative descriptive model. The impetus for this book is the interest in the development of such a theory that combines elements from both disciplines and appeals to both.
This short book introduces the main ideas of statistical inference in a way that is both user friendly and mathematically sound. Particular emphasis is placed on the common foundation of many models used in practice. In addition, the book focuses on the formulation of appropriate statistical models to study problems in business, economics, and the social sciences, as well as on how to interpret the results from statistical analyses. The book will be useful to students who are interested in rigorous applications of statistics to problems in business, economics and the social sciences, as well as students who have studied statistics in the past, but need a more solid grounding in statistical techniques to further their careers. Jacco Thijssen is professor of finance at the University of York, UK. He holds a PhD in mathematical economics from Tilburg University, Netherlands. His main research interests are in applications of optimal stopping theory, stochastic calculus, and game theory to problems in economics and finance. Professor Thijssen has earned several awards for his statistics teaching.
This work examines theoretical issues, as well as practical developments, in statistical inference related to econometric models and analysis. It offers discussions of such areas as the function of statistics in aggregation, income inequality, poverty, health, spatial econometrics, panel and survey data, bootstrapping, and time series.
A Handbook of Statistical Analyses Using SPSS clearly describes how to conduct a range of univariate and multivariate statistical analyses using the latest version of the Statistical Package for the Social Sciences, SPSS 11. Each chapter addresses a different type of analytical procedure applied to one or more data sets, primarily from the social and behavioral sciences areas. Each chapter also contains exercises relating to the data sets introduced, providing readers with a means to develop both their SPSS and statistical skills. Model answers to the exercises are also provided. Readers can download all of the data sets from a companion Web site furnished by the authors.