Factor Analysis and Dimension Reduction in R provides coverage, with worked examples, of a large number of dimension reduction procedures, along with model performance metrics to compare them. Factor analysis in the form of principal components analysis (PCA) or principal factor analysis (PFA) is familiar to most social scientists. Less familiar is the fact that factor analysis is a subset of the more general statistical family of dimension reduction methods. The social scientist's toolkit for factor analysis problems can be expanded to include the range of solutions this book presents. In addition to covering FA and PCA with orthogonal and oblique rotation, this book's coverage includes higher-order factor models, bifactor models, models based on binary and ordinal data, models based on mixed data, generalized low-rank models, cluster analysis with GLRM, models involving supplemental variables or observations, Bayesian factor analysis, regularized factor analysis, testing for unidimensionality, and prediction with factor scores. The second half of the book deals with other procedures for dimension reduction. These include coverage of kernel PCA, factor analysis with multidimensional scaling, locally linear embedding models, Laplacian eigenmaps, diffusion maps, force directed methods, t-distributed stochastic neighbor embedding, independent component analysis (ICA), dimensionality reduction via regression (DRR), non-negative matrix factorization (NNMF), Isomap, Autoencoder, uniform manifold approximation and projection (UMAP) models, neural network models, and longitudinal factor analysis models. In addition, a special chapter covers metrics for comparing model performance.
Features of this book include:
- Numerous worked examples with replicable R code
- Explicit, comprehensive coverage of data assumptions
- Adaptation of factor methods to binary, ordinal, and categorical data
- Residual and outlier analysis
- Visualization of factor results
- Final chapters that treat the integration of factor analysis with neural network and time series methods
Presented in color with R code and an introduction to R and RStudio, this book is suitable for graduate-level courses and optional modules for social scientists, and for courses on quantitative methods and multivariate statistics.
Delivering cutting-edge coverage that includes the latest thinking and practices from the field, QUALITY AND PERFORMANCE EXCELLENCE, 8e presents the basic principles and tools associated with quality and performance excellence. Packed with relevant, real-world examples, the text thoroughly illustrates how these principles and methods have been put into effect in a variety of organizations. It also highlights the relationship between basic principles and the popular theories and models studied in management courses. The eighth edition reflects the 2015-16 Baldrige criteria and includes new boxed features, experiential exercises, and up-to-date case studies that give you practical experience working with real-world issues. Many cases focus on large and small companies in manufacturing and service industries in North and South America, Europe, and Asia-Pacific. In addition, chapters now open with a "Performance Excellence Profile" highlighting a recent Baldrige recipient.
Statistical Programming in SAS, Second Edition provides a foundation for programming to implement statistical solutions using SAS, a system that has been used to solve data-analytic problems for more than 40 years. The author includes motivating examples to inspire readers to generate programming solutions. Upper-level undergraduates, beginning graduate students, and professionals involved in generating programming solutions for data-analytic problems will benefit from this book. The ideal reader has some experience with regression modeling and introductory computer programming. The coverage of statistical programming in the second edition includes:
- Getting data into the SAS system, engineering new features, and formatting variables
- Writing readable and well-documented code
- Structuring, implementing, and debugging programs
- Creating solutions to novel problems
- Combining data sources, extracting parts of data sets, and reshaping data sets as needed for other analyses
- Generating general solutions using macros
- Customizing output
- Producing insight-inspiring data visualizations
- Parsing, processing, and analyzing text
- Programming with matrices and connecting SAS with R
The book covers topics that are part of both the base and certification exams.
The book focuses on problem solving for practitioners and model building for academicians in multivariate situations. It helps readers with issues such as understanding variability, extracting patterns, building relationships, and making objective decisions. A large number of multivariate statistical models are covered in the book. Readers will learn how a practical problem can be converted to a statistical problem and how the statistical solution can be interpreted as a practical solution. Key features:
- Links the data generation process with statistical distributions in the multivariate domain
- Provides a step-by-step procedure for estimating the parameters of developed models
- Provides a blueprint for data-driven decision making
- Includes practical examples and case studies relevant for the intended audiences
The book will help everyone involved in data-driven problem solving, modeling, and decision making.
Formal Models of Domestic Politics offers a unified and accessible approach to canonical and important new models of politics. Intended for political science and economics students who have already taken a course in game theory, this new edition retains the widely appreciated pedagogic approach of the first edition. Coverage has been expanded to include a new chapter on nondemocracy; new material on valence and issue ownership, dynamic veto and legislative bargaining, delegation to leaders by imperfectly informed politicians, and voter competence; and numerous additional exercises. Political economists, comparativists, and Americanists will all find models in the text central to their research interests. This leading graduate textbook assumes no mathematical knowledge beyond basic calculus, with an emphasis placed on clarity of presentation. Political scientists will appreciate the simplification of economic environments to focus on the political logic of models; economists will discover many important models published outside of their discipline; and both instructors and students will value the classroom-tested exercises. This is a vital update to a classic text.
Time Series: A First Course with Bootstrap Starter provides an introductory course on time series analysis that satisfies the triptych of (i) mathematical completeness, (ii) computational illustration and implementation, and (iii) conciseness and accessibility to upper-level undergraduate and M.S. students. Basic theoretical results are presented in a mathematically convincing way, and the methods of data analysis are developed through examples and exercises parsed in R. A student with a basic course in mathematical statistics will learn both how to analyze time series and how to interpret the results. The book provides the foundation of time series methods, including linear filters and a geometric approach to prediction. The important paradigm of ARMA models is studied in-depth, as well as frequency domain methods. Entropy and other information theoretic notions are introduced, with applications to time series modeling. The second half of the book focuses on statistical inference, the fitting of time series models, as well as computational facets of forecasting. Many time series of interest are nonlinear in which case classical inference methods can fail, but bootstrap methods may come to the rescue. Distinctive features of the book are the emphasis on geometric notions and the frequency domain, the discussion of entropy maximization, and a thorough treatment of recent computer-intensive methods for time series such as subsampling and the bootstrap. There are more than 600 exercises, half of which involve R coding and/or data analysis. Supplements include a website with 12 key data sets and all R code for the book's examples, as well as the solutions to exercises.
This textbook provides future data analysts with the tools, methods, and skills needed to answer data-focused, real-life questions; to carry out data analysis; and to visualize and interpret results to support better decisions in business, economics, and public policy. Data wrangling and exploration, regression analysis, machine learning, and causal analysis are comprehensively covered, as well as when, why, and how the methods work, and how they relate to each other. As the most effective way to communicate data analysis, running case studies play a central role in this textbook. Each case starts with an industry-relevant question and answers it by using real-world data and applying the tools and methods covered in the textbook. Learning is then consolidated by 360 practice questions and 120 data exercises. Extensive online resources, including raw and cleaned data and code for all analyses in Stata, R, and Python, can be found at www.gabors-data-analysis.com.
Doing Statistical Analysis looks at three kinds of statistical research questions - descriptive, associational, and inferential - and shows students how to conduct statistical analyses and interpret the results. Keeping equations to a minimum, it uses a conversational style and relatable examples such as football, COVID-19, and tourism, to aid understanding. Each chapter contains practice exercises, and a section showing students how to reproduce the statistical results in the book using Stata and SPSS. Digital supplements consist of data sets in Stata, SPSS, and Excel, and a test bank for instructors. Its accessible approach means this is the ideal textbook for undergraduate students across the social and behavioral sciences needing to build their confidence with statistical analysis.
Who decides how official statistics are produced? Do politicians have control or are key decisions left to statisticians in independent statistical agencies? Interviews with statisticians in Australia, Canada, Sweden, the UK and the USA were conducted to get insider perspectives on the nature of decision making in government statistical administration. While the popular adage suggests there are 'lies, damned lies and statistics', this research shows that official statistics in liberal democracies are far from mistruths; they are consistently insulated from direct political interference. Yet, a range of subtle pressures and tensions exist that governments and statisticians must manage. The power over statistics is distributed differently in different countries, and this book explains why. Differences in decision-making powers across countries are the result of shifting pressures politicians and statisticians face to be credible, and the different national contexts that provide distinctive institutional settings for the production of government numbers.
A unique and comprehensive source of information, this book is the only international publication providing economists, planners, policymakers and business people with worldwide statistics on current performance and trends in the manufacturing sector. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial development and performance. It provides data which can be used to analyse patterns of growth and related long term trends, structural change and industrial performance in individual industries. Statistics on employment patterns, wages, consumption and gross output and other key indicators are also presented.
As one of the first texts to take a behavioral approach to macroeconomic expectations, this book introduces a new way of doing economics. Roetheli uses cognitive psychology in a bottom-up method of modeling macroeconomic expectations. His research is based on laboratory experiments and historical data, which he extends to real-world situations. Pattern extrapolation is shown to be the key to understanding expectations of inflation and income. The quantitative model of expectations is used to analyze the course of inflation and nominal interest rates in a range of countries and historical periods. The model of expected income is applied to the analysis of business cycle phenomena such as the great recession in the United States. Data and spreadsheets are provided for readers to do their own computations of macroeconomic expectations. This book offers new perspectives in many areas of macro and financial economics.
* A useful guide to financial product modeling and to minimizing business risk and uncertainty
* Looks at a wide range of financial assets and markets and correlates them with enterprises' profitability
* Introduces advanced and novel machine learning techniques in finance, such as Support Vector Machines, Neural Networks, Random Forests, K-Nearest Neighbors, Extreme Learning Machines, and Deep Learning approaches, and applies them to analyze finance data sets
* Real-world applicable examples to further understanding
If you know a little bit about financial mathematics but don't yet know a lot about programming, then C++ for Financial Mathematics is for you. C++ is an essential skill for many jobs in quantitative finance, but learning it can be a daunting prospect. This book gathers together everything you need to know to price derivatives in C++ without unnecessary complexities or technicalities. It leads the reader step-by-step from programming novice to writing a sophisticated and flexible financial mathematics library. At every step, each new idea is motivated and illustrated with concrete financial examples. As employers understand, there is more to programming than knowing a computer language. As well as covering the core language features of C++, this book teaches the skills needed to write truly high quality software. These include topics such as unit tests, debugging, design patterns and data structures. The book teaches everything you need to know to solve realistic financial problems in C++. It can be used for self-study or as a textbook for an advanced undergraduate or master's level course.
How to Divide When There Isn't Enough develops a rigorous yet accessible presentation of the state-of-the-art for the adjudication of conflicting claims and the theory of taxation. It covers all aspects one may wish to know about claims problems: the most important rules, the most important axioms, and how these two sets are related. More generally, it also serves as an introduction to the modern theory of economic design, which in the last twenty years has revolutionized many areas of economics, generating a wide range of applicable allocation rules that have improved people's lives in many ways. In developing the theory, the book employs a variety of techniques that will appeal to both experts and non-experts. Compiling decades of research into a single framework, William Thomson provides numerous applications that will open a large number of avenues for future research.
In recent years, interest in rigorous impact evaluation has grown tremendously in policy-making, economics, public health, social sciences and international relations. Evidence-based policy-making has become a recurring theme in public policy, alongside greater demands for accountability in public policies and public spending, and requests for independent and rigorous impact evaluations for policy evidence. Froelich and Sperlich offer a comprehensive and up-to-date approach to quantitative impact evaluation analysis, also known as causal inference or treatment effect analysis, illustrating the main approaches for identification and estimation: experimental studies, randomization inference and randomized control trials (RCTs), matching and propensity score matching and weighting, instrumental variable estimation, difference-in-differences, regression discontinuity designs, quantile treatment effects, and evaluation of dynamic treatments. The book is designed for economics graduate courses but can also serve as a manual for professionals in research institutes, governments, and international organizations, evaluating the impact of a wide range of public policies in health, environment, transport and economic development.
Focusing on Bayesian approaches and computations using analytic and simulation-based methods for inference, Time Series: Modeling, Computation, and Inference, Second Edition integrates mainstream approaches for time series modeling with significant recent developments in methodology and applications of time series analysis. It encompasses a graduate-level account of Bayesian time series modeling, analysis, and forecasting; a broad range of references to state-of-the-art approaches to univariate and multivariate time series analysis; and connections to research frontiers in multivariate time series modeling and forecasting. It presents overviews of several classes of models and related methodology for inference, statistical computation for model fitting and assessment, and forecasting. It explores the connections between time- and frequency-domain approaches and develops various models and analyses using Bayesian formulations and computation, including computations based on Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods. It illustrates the models and methods with examples and case studies from a variety of fields, including signal processing, biomedicine, environmental science, and finance. Along with core models and methods, the book represents state-of-the-art approaches to analysis and forecasting in challenging time series problems, demonstrates the growth of time series analysis into new application areas in recent years, and engages recent and relevant modeling developments and research challenges. New in the second edition:
- Expanded coverage of core model theory and methodology
- Multiple new examples and exercises
- Detailed development of dynamic factor models
- Updated discussion of, and connections with, recent and current research frontiers
Modern economies are full of uncertainty and risk. Economics studies resource allocation in an uncertain market environment. As a generally applicable quantitative tool for analyzing uncertain events, probability and statistics play an important role in economic research. Econometrics is the statistical analysis of economic and financial data. In the past four decades or so, economics has witnessed a so-called 'empirical revolution' in its research paradigm, and econometrics, as the main methodology of empirical studies in economics, has been central to that revolution. It has become an indispensable part of training in modern economics, business, and management. This book develops a coherent set of econometric theory, methods, and tools for economic models. It is written as a textbook for graduate students in economics, business, management, statistics, applied mathematics, and related fields. It can also be used as a reference book on econometric theory by scholars interested in both theoretical and applied econometrics.
Packed with insights, Lorenzo Bergomi's Stochastic Volatility Modeling explains how stochastic volatility is used to address issues arising in the modeling of derivatives, including: Which trading issues do we tackle with stochastic volatility? How do we design models and assess their relevance? How do we tell which models are usable and when does calibration make sense? This manual covers the practicalities of modeling local volatility, stochastic volatility, local-stochastic volatility, and multi-asset stochastic volatility. In the course of this exploration, the author, Risk's 2009 Quant of the Year and a leading contributor to volatility modeling, draws on his experience as head quant in Societe Generale's equity derivatives division. Clear and straightforward, the book takes readers through various modeling challenges, all originating in actual trading/hedging issues, with a focus on the practical consequences of modeling choices.
Quantile regression constitutes an ensemble of statistical techniques intended to estimate and draw inferences about conditional quantile functions. Median regression, as introduced in the 18th century by Boscovich and Laplace, is a special case. In contrast to conventional mean regression, which minimizes sums of squared residuals, median regression minimizes sums of absolute residuals; quantile regression simply replaces symmetric absolute loss by asymmetric linear loss. Since its introduction in the 1970s by Koenker and Bassett, quantile regression has been gradually extended to a wide variety of data analytic settings, including time series, survival analysis, and longitudinal data. By focusing attention on local slices of the conditional distribution of response variables, it is capable of providing a more complete, more nuanced view of heterogeneous covariate effects. Applications of quantile regression can now be found throughout the sciences, including astrophysics, chemistry, ecology, economics, finance, genomics, medicine, and meteorology. Software for quantile regression is now widely available in all the major statistical computing environments. The objective of this volume is to provide a comprehensive review of recent developments in quantile regression methodology, illustrating its applicability in a wide range of scientific settings. The intended audience of the volume is researchers and graduate students across a diverse set of disciplines.
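The asymmetric linear ("pinball") loss that this blurb describes can be sketched in a few lines of Python; this is an illustrative example, not code from the volume, and the function name `pinball_loss` is our own:

```python
import numpy as np

def pinball_loss(residuals, tau):
    """Asymmetric linear ("pinball") loss used in quantile regression.

    Positive residuals are weighted by tau, negative ones by (1 - tau);
    tau = 0.5 gives a symmetric loss, recovering median regression.
    """
    r = np.asarray(residuals, dtype=float)
    return np.where(r >= 0, tau * r, (tau - 1.0) * r)

# Minimizing total pinball loss over a constant fit recovers the
# empirical tau-quantile of the data:
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
grid = np.linspace(-3.0, 3.0, 601)
fit = grid[np.argmin([pinball_loss(y - c, 0.9).sum() for c in grid])]
# fit lies close to np.quantile(y, 0.9)
```

In R, this is the loss minimized by the `rq()` function in Koenker's quantreg package; setting `tau = 0.5` there reproduces median regression.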
Algorithmic Trading and Quantitative Strategies provides an in-depth overview of this growing field with a unique mix of quantitative rigor and practitioner's hands-on experience. The focus on empirical modeling and practical know-how makes this book a valuable resource for students and professionals. The book starts with the often overlooked context of why and how we trade via a detailed introduction to market structure and quantitative microstructure models. The authors then present the necessary quantitative toolbox including more advanced machine learning models needed to successfully operate in the field. They next discuss the subject of quantitative trading, alpha generation, active portfolio management and more recent topics like news and sentiment analytics. The last main topic of execution algorithms is covered in detail with emphasis on the state of the field and critical topics including the elusive concept of market impact. The book concludes with a discussion of the technology infrastructure necessary to implement algorithmic strategies in large-scale production settings. A GitHub repository includes data sets and explanatory/exercise Jupyter notebooks. The exercises involve adding the correct code to solve the particular analysis/problem.
Computational finance is increasingly important in the financial industry, as a necessary instrument for applying theoretical models to real-world challenges. Indeed, many models used in practice involve complex mathematical problems, for which an exact or a closed-form solution is not available. Consequently, we need to rely on computational techniques and specific numerical algorithms. This book combines theoretical concepts with practical implementation. Furthermore, the numerical solution of models is exploited both to enhance the understanding of some mathematical and statistical notions and to acquire sound programming skills in MATLAB, which are also useful for several other programming languages. The material assumes the reader has a relatively limited knowledge of mathematics, probability, and statistics. Hence, the book contains a short description of the fundamental tools needed to address the two main fields of quantitative finance: portfolio selection and derivatives pricing. Both fields are developed here, with a particular emphasis on portfolio selection, where the author includes an overview of recent approaches. The book gradually takes the reader from a basic to a medium level of expertise, using examples and exercises to simplify the understanding of complex models in finance and giving readers the ability to place financial models in a computational setting. The book is ideal for courses focusing on quantitative finance, asset management, mathematical methods for economics and finance, investment banking, and corporate finance.