Companion website materials: https://tzkeith.com/
Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, the book introduces the material more clearly and in a less threatening way. In addition to illuminating content necessary for coursework, this accessible approach makes students more likely to be able to conduct research using MR or SEM, and more likely to use the methods wisely.
This book:
* Covers both MR and SEM, while explaining their relevance to one another
* Includes path analysis, confirmatory factor analysis, and latent growth modeling
* Makes extensive use of real-world research examples in the chapters and in the end-of-chapter exercises
* Uses figures and tables extensively to provide examples and illustrate key concepts and techniques
New to this edition:
* New chapter on mediation, moderation, and common cause
* New chapter on the analysis of interactions with latent variables and multilevel SEM
* Expanded coverage of advanced SEM techniques in chapters 18 through 22
* International case studies and examples
* Updated instructor and student online resources
World Statistics on Mining and Utilities 2018 provides a unique biennial overview of the role of mining and utility activities in the world economy. This extensive resource from UNIDO provides detailed time series data on the level, structure and growth of international mining and utility activities by country and sector. Country-level data are clearly presented on the number of establishments, employment and output of activities such as coal, iron ore and crude petroleum mining, as well as the production and supply of electricity, natural gas and water. This unique and comprehensive source of information meets the growing demand of data users who require detailed and reliable statistical information on the primary industry and energy-producing sectors. The publication provides internationally comparable data to economic researchers, development strategists and business communities who influence the policy of industrial development and its environmental sustainability.
Advanced Statistics for Kinesiology and Exercise Science is the first textbook to cover advanced statistical methods in the context of the study of human performance. Divided into three distinct sections, the book introduces and explores in depth both analysis of variance (ANOVA) and regression analyses, including chapters on:
* preparing data for analysis
* one-way, factorial, and repeated-measures ANOVA
* analysis of covariance and multiple analyses of variance and covariance
* diagnostic tests
* regression models for quantitative and qualitative data
* model selection and validation
* logistic regression
Drawing clear lines between the use of IBM SPSS Statistics software and the interpretation and analysis of results, and illustrated throughout with sport and exercise science-specific sample data and results sections, the book offers an unparalleled level of detail in explaining advanced statistical techniques to kinesiology students. It is an essential text for any student studying advanced statistics or research methods as part of an undergraduate or postgraduate degree programme in kinesiology, sport and exercise science, or health science.
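One-way ANOVA, the first technique in the list above, compares group means via the ratio of between-group to within-group variance. A minimal sketch in pure Python; note the book itself works in IBM SPSS Statistics, and the jump-height data here are invented for illustration:

```python
import statistics

def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: mean square between groups
    divided by mean square within groups."""
    k = len(groups)                         # number of groups
    n = sum(len(g) for g in groups)         # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.fmean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical jump heights (cm) for three training groups
F = one_way_anova_F([30, 32, 29, 31], [35, 36, 34, 37], [30, 31, 29, 32])
print(round(F, 2))  # -> 20.0
```

A large F (here 20.0 on 2 and 9 degrees of freedom) indicates the group means differ by more than within-group noise would suggest; SPSS reports the same statistic alongside its p-value.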
Prepares readers to analyze data and interpret statistical results using the increasingly popular R more quickly than other texts, through lessR extensions which remove the need to program. By introducing R through lessR, readers learn how to organize data for analysis, read the data into R, and produce output without first mastering numerous functions and programming. Readers can select the necessary procedure and change the relevant variables without programming. Quick Starts introduce readers to the concepts and commands reviewed in the chapters. Margin notes define, illustrate, and cross-reference the key concepts; when readers encounter a term previously discussed, the margin notes identify the page of its initial introduction. Scenarios highlight the use of a specific analysis, followed by the corresponding R/lessR input and an interpretation of the resulting output. Numerous examples of output from psychology, business, education, and other social sciences demonstrate how to interpret results, and worked problems help readers test their understanding. The www.lessRstats.com website features the lessR program; the book's two data sets, in standard text and SPSS formats, so readers can practice using R/lessR by working through the text examples and worked problems; PDF slides for each chapter; solutions to the book's worked problems; links to R/lessR videos to help readers better understand the program; and more.
New to this edition:
* Upgraded functionality and data visualizations of the lessR package, which is now aesthetically equal to the ggplot2 R standard
* New features that replace and extend previous content, such as aggregating data with pivot tables via a simple lessR function call
Modern marketing managers need intuitive and effective tools not just for designing strategies but also for general management. This hands-on book introduces a range of contemporary management and marketing tools and concepts with a focus on forecasting, creating stimulating processes, and implementation. Topics addressed range from creating a clear vision, setting goals, and developing strategies, to implementing strategic analysis tools, consumer value models, budgeting, strategic and operational marketing plans. Special attention is paid to change management and digital transformation in the marketing landscape. Given its approach and content, the book offers a valuable asset for all professionals and advanced MBA students looking for 'real-life' tools and applications.
The Logit model originated in economics but is now used in a variety of disciplines, including medicine, epidemiology and the social sciences. This book provides accessible coverage of the theoretical foundations of the Logit model as well as its applications to concrete problems. It is written not only for economists but for researchers working in any discipline where it is necessary to model qualitative random variables. J.S. Cramer has also provided data sets on which to practice Logit analysis.
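The Logit model specifies P(y = 1 | x) = 1 / (1 + exp(-(a + b*x))) and is usually fitted by maximum likelihood. A minimal sketch in Python using Newton-Raphson on simulated data; the data and the true parameter values (-0.5 and 2.0) are invented for illustration, not taken from Cramer's data sets:

```python
import math
import random

def fit_logit(x, y, iters=25):
    """Maximum-likelihood fit of P(y=1|x) = 1/(1+exp(-(a + b*x)))
    by Newton-Raphson (the log-likelihood is concave, so this converges)."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        # Gradient and (negative) Hessian of the log-likelihood
        ga = gb = haa = hab = hbb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            w = p * (1.0 - p)
            ga += yi - p
            gb += (yi - p) * xi
            haa += w
            hab += w * xi
            hbb += w * xi * xi
        det = haa * hbb - hab * hab
        a += (hbb * ga - hab * gb) / det    # Newton step: H^{-1} g
        b += (haa * gb - hab * ga) / det
    return a, b

random.seed(1)
x = [random.gauss(0, 1) for _ in range(500)]
# Simulate outcomes from a true model with a = -0.5, b = 2.0
y = [1 if random.random() < 1 / (1 + math.exp(-(-0.5 + 2.0 * xi))) else 0
     for xi in x]
a, b = fit_logit(x, y)
print(a, b)  # estimates near the true values -0.5 and 2.0
```

The fitted coefficient b is interpreted on the log-odds scale: a unit increase in x multiplies the odds of y = 1 by exp(b).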
This book is based on two Sir Richard Stone lectures at the Bank of England and the National Institute for Economic and Social Research. Largely non-technical, the first part of the book covers some of the broader issues involved in Stone's and others' work in statistics. It explores the more philosophical issues attached to statistics, econometrics and forecasting and describes the paradigm shift back to the Bayesian approach to scientific inference. The first part concludes with simple examples from the different worlds of educational management and golf clubs. The second, more technical part covers in detail the structural econometric time series analysis (SEMTSA) approach to statistical and econometric modeling.
Introduction to Financial Mathematics: Option Valuation, Second Edition is a well-rounded primer to the mathematics and models used in the valuation of financial derivatives. The book consists of fifteen chapters, the first ten of which develop option valuation techniques in discrete time, the last five describing the theory in continuous time. The first half of the textbook develops basic finance and probability. The author then treats the binomial model as the primary example of discrete-time option valuation. The final part of the textbook examines the Black-Scholes model. The book is written to provide a straightforward account of the principles of option pricing and examines these principles in detail using standard discrete and stochastic calculus models. Additionally, the second edition has new exercises and examples, and includes many tables and graphs generated by over 30 MS Excel VBA modules available on the author's webpage https://home.gwu.edu/~hdj/.
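The binomial model referred to above values an option by backward induction on a recombining price tree, and its price converges to the Black-Scholes value as the number of steps grows. A rough Python sketch of the Cox-Ross-Rubinstein tree for a European call; the book's own computational examples use MS Excel VBA, and the parameter values here are invented:

```python
import math

def crr_call(S, K, r, sigma, T, n):
    """European call via the Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))     # up factor
    d = 1.0 / u                             # down factor
    q = (math.exp(r * dt) - d) / (u - d)    # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # Terminal payoffs, then discount backward through the tree
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

def bs_call(S, K, r, sigma, T):
    """Black-Scholes closed form, for comparison."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

tree = crr_call(100, 100, 0.05, 0.2, 1.0, 500)
exact = bs_call(100, 100, 0.05, 0.2, 1.0)   # about 10.45
print(tree, exact)
```

With 500 steps the tree price agrees with the continuous-time Black-Scholes price to within about a cent, illustrating the discrete-to-continuous bridge the book builds.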
Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of stochastic processes with continuous and discontinuous paths. It also covers a wide selection of popular models in finance and insurance, from Black-Scholes to stochastic volatility to interest rate to dynamic mortality. Through its many numerical and graphical illustrations and simple, insightful examples, this book provides a deep understanding of the scope of Monte Carlo methods and their use in various financial situations. The intuitive presentation encourages readers to implement and further develop the simulation methods.
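At its simplest, Monte Carlo pricing simulates the terminal asset value under the risk-neutral measure and discounts the average payoff. A minimal Python sketch for a European call under Black-Scholes dynamics, far more basic than the multilevel and variance-reduction methods the book covers; the parameters are invented for illustration:

```python
import math
import random

def mc_call(S, K, r, sigma, T, n_paths, seed=0):
    """Plain Monte Carlo: draw S_T from risk-neutral geometric Brownian
    motion, average the call payoffs, and discount."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        ST = S * math.exp(drift + vol * rng.gauss(0, 1))
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n_paths

price = mc_call(100, 100, 0.05, 0.2, 1.0, 200_000)
print(price)  # close to the Black-Scholes value of about 10.45
```

The estimator's standard error shrinks like 1/sqrt(n_paths); techniques such as the multilevel Monte Carlo method discussed in the book exist precisely to improve on this slow convergence.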
Originally published in 1978. This book is designed to enable students on main courses in economics to comprehend literature which employs econometric techniques as a method of analysis, to use econometric techniques themselves to test hypotheses about economic relationships and to understand some of the difficulties involved in interpreting results. While the book is mainly aimed at second-year undergraduates undertaking courses in applied economics, its scope is sufficiently wide to take in students at postgraduate level who have no background in econometrics - it integrates fully the mathematical and statistical techniques used in econometrics with micro- and macroeconomic case studies.
There is no shortage of incentives to study and reduce poverty in our societies. Poverty is studied in economics and political sciences, and population surveys are an important source of information about it. The design and analysis of such surveys is principally a statistical subject matter and the computer is essential for their data compilation and processing. Focusing on The European Union Statistics on Income and Living Conditions (EU-SILC), a program of annual national surveys which collect data related to poverty and social exclusion, Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R presents a set of statistical analyses pertinent to the general goals of EU-SILC. The contents of the volume are biased toward computing and statistics, with reduced attention to economics, political and other social sciences. The emphasis is on methods and procedures as opposed to results, because the data from annual surveys made available since publication and in the near future will degrade the novelty of the data used and the results derived in this volume. The aim of this volume is not to propose specific methods of analysis, but to open up the analytical agenda and address the aspects of the key definitions in the subject of poverty assessment that entail nontrivial elements of arbitrariness. The presented methods do not exhaust the range of analyses suitable for EU-SILC, but will stimulate the search for new methods and adaptation of established methods that cater to the identified purposes.
Originally published in 1970, with a second edition in 1989. Empirical Bayes methods use some of the apparatus of the pure Bayes approach, but an actual prior distribution is assumed to generate the data sequence. This prior can be estimated from the data, producing empirical Bayes estimates or decision rules. In this second edition, details are provided of the derivation and the performance of empirical Bayes rules for a variety of special models. Attention is given to the problem of assessing the goodness of an empirical Bayes estimator for a given set of prior data. Chapters also focus on alternatives to the empirical Bayes approach and on actual applications of empirical Bayes methods.
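The core empirical Bayes idea, estimating the prior from the data sequence itself and then using it for shrinkage, fits in a few lines. A sketch in Python for the normal-normal model with a method-of-moments prior estimate; the data are simulated here, and this is an illustration of the general idea rather than any specific rule derived in the book:

```python
import random
import statistics

def eb_normal_shrinkage(y, s2=1.0):
    """Empirical Bayes for y_i ~ N(theta_i, s2) with theta_i ~ N(mu, tau2):
    estimate mu and tau2 from the data, then shrink each y_i toward mu."""
    mu = statistics.fmean(y)
    # Var(y) = tau2 + s2, so tau2 is estimated by moments (floored at 0)
    tau2 = max(statistics.pvariance(y) - s2, 0.0)
    shrink = tau2 / (tau2 + s2)     # posterior weight on the observation
    return [mu + shrink * (yi - mu) for yi in y]

random.seed(3)
thetas = [random.gauss(5.0, 1.0) for _ in range(200)]   # unknown true means
y = [random.gauss(t, 1.0) for t in thetas]              # noisy observations
est = eb_normal_shrinkage(y)

# Shrinkage reduces total squared error relative to the raw observations
mse_raw = statistics.fmean((yi - t) ** 2 for yi, t in zip(y, thetas))
mse_eb = statistics.fmean((e - t) ** 2 for e, t in zip(est, thetas))
print(mse_raw, mse_eb)
```

No genuine subjective prior is supplied anywhere; the "prior" is recovered from the ensemble of observations, which is exactly what distinguishes the empirical Bayes approach from the pure Bayes approach.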
Originally published in 1929. This balanced combination of fieldwork, statistical measurement, and realistic applications shows a synthesis of economics and political science in a conception of an organic relationship between the two sciences that involves functional analysis, institutional interpretation, and a more workmanlike approach to questions of organization such as the division of labour and the control of industry. The treatise applies the test of fact through statistical analysis to economic and political theories for the quantitative and institutional approach in solving social and industrial problems. It constructs a framework of concepts, combining both economic and political theory, to systematically produce an original statement in general terms of the principles and methods for statistical fieldwork. The separation into Parts allows selective reading on the methods of statistical measurement; the principles and fallacies of applying these measures to economic and political fields; and the resultant construction of a statistical economics and politics. Basic statistical concepts are described for application, with each method of statistical measurement illustrated with instances relevant to the economic and political theory discussed, and a statistical glossary is included.
Informed decision making rests on three important elements: intuition, trust, and analytics. Intuition is based on experiential learning, and recent research has shown that those who rely on their "gut feelings" may do better than those who don't. Analytics, however, are also important for informing decisions in a data-driven environment. The third element, trust, is critical for knowledge sharing to take place. Together, intuition, analytics, and trust make a perfect combination for decision making. This book gathers leading researchers who explore the role of these three elements in the process of decision making.
Originally published in 1985. Mathematical methods and models to facilitate the understanding of the processes of economic dynamics and prediction were refined considerably over the period before this book was written. The field had grown, and many of the techniques involved had become extremely complicated. Areas of particular interest include optimal control, non-linear models, game-theoretic approaches, demand analysis and time-series forecasting. This book presents a critical appraisal of developments and identifies potentially productive new directions for research. It synthesises work from mathematics, statistics and economics and includes a thorough analysis of the relationship between system understanding and predictability.
Originally published in 1960 and 1966. This is an elementary introduction to the sources of economic statistics and their uses in answering economic questions. No mathematical knowledge is assumed, and no mathematical symbols are used. The book shows - by asking and answering a number of typical questions of applied economics - what the most useful statistics are, where they are found, and how they are to be interpreted and presented. The reader is introduced to the major British, European and American official sources, to the social accounts, to index numbers and averaging, and to elementary aids to inspection such as moving averages and scatter diagrams.
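A moving average, one of the aids to inspection mentioned above, replaces each observation with the mean of a window of recent observations so that short-term fluctuations are smoothed away and the underlying trend shows through. A minimal Python sketch; the quarterly figures are invented for illustration:

```python
def moving_average(series, window):
    """Trailing moving average: the mean of the last `window` observations,
    reported from the first point where a full window is available."""
    out = []
    for i in range(window - 1, len(series)):
        out.append(sum(series[i - window + 1:i + 1]) / window)
    return out

# Hypothetical quarterly sales figures with seasonal swings
quarterly_sales = [10, 12, 9, 11, 14, 16, 13, 15]
smoothed = moving_average(quarterly_sales, 4)
print(smoothed)  # -> [10.5, 11.5, 12.5, 13.5, 14.5]
```

A four-period window is the natural choice for quarterly data because it averages over exactly one seasonal cycle, leaving the steady upward trend visible.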
The rapidly increasing importance of China, India, Indonesia, Japan, South Korea and Taiwan both in Asia and in the world economy, represents a trend that is set to continue into the 21st century. This book provides an authoritative assessment of the 20th century performance of these countries, and in particular the factors contributing to the acceleration of Asian growth in the latter part of the century. The contributors look at Asia within a global perspective and detailed comparisons are drawn with Australia and the USA. Contributions from leading experts offer a comprehensive review of the procedures necessary to establish valid international comparisons for countries with very different economic histories and levels of development. These include methods of growth performance measurement and techniques of growth accounting. The Asian Economies in the Twentieth Century will be an indispensable new tool for policy analysts, international agencies and academic researchers.
* Up-to-date with cutting-edge topics
* Suitable for professional quants and as a library reference for students of finance and financial mathematics
This important three-volume set collects Edgeworth's published writings in the areas of statistics and probability. There is a newly emerging interest in probability theory as a basis for economic thought, and this collection makes the writings of Edgeworth more accessible. A new introduction written by the editor covers biographical details and provides a brief abstract of each article and the basis of its selection.
Factor Analysis and Dimension Reduction in R provides coverage, with worked examples, of a large number of dimension reduction procedures along with model performance metrics to compare them. Factor analysis in the form of principal components analysis (PCA) or principal factor analysis (PFA) is familiar to most social scientists. However, what is less familiar is understanding that factor analysis is a subset of the more general statistical family of dimension reduction methods. The social scientist's toolkit for factor analysis problems can be expanded to include the range of solutions this book presents. In addition to covering FA and PCA with orthogonal and oblique rotation, this book's coverage includes higher-order factor models, bifactor models, models based on binary and ordinal data, models based on mixed data, generalized low-rank models, cluster analysis with GLRM, models involving supplemental variables or observations, Bayesian factor analysis, regularized factor analysis, testing for unidimensionality, and prediction with factor scores. The second half of the book deals with other procedures for dimension reduction. These include coverage of kernel PCA, factor analysis with multidimensional scaling, locally linear embedding models, Laplacian eigenmaps, diffusion maps, force directed methods, t-distributed stochastic neighbor embedding, independent component analysis (ICA), dimensionality reduction via regression (DRR), non-negative matrix factorization (NNMF), Isomap, Autoencoder, uniform manifold approximation and projection (UMAP) models, neural network models, and longitudinal factor analysis models. In addition, a special chapter covers metrics for comparing model performance. 
Features of this book include:
* Numerous worked examples with replicable R code
* Explicit, comprehensive coverage of data assumptions
* Adaptation of factor methods to binary, ordinal, and categorical data
* Residual and outlier analysis
* Visualization of factor results
* Final chapters that treat the integration of factor analysis with neural network and time series methods
Presented in color with R code and an introduction to R and RStudio, this book will be suitable for graduate-level and optional-module courses for social scientists, and for courses on quantitative methods and multivariate statistics.
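Principal components analysis, the most familiar of these dimension reduction methods, finds the directions of maximal variance as eigenvectors of the covariance matrix. A pure-Python sketch for the two-variable case, where the leading eigenvector has a closed form; the data are simulated here, and a real analysis would use R as the book does:

```python
import math
import random

def pca_2d(xs, ys):
    """First principal component of two variables via the 2x2 sample
    covariance matrix (closed-form leading eigenpair)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # Corresponding eigenvector: the direction of maximal variance
    vx, vy = sxy, lam - sxx
    norm = math.hypot(vx, vy)
    return lam, (vx / norm, vy / norm)

random.seed(7)
xs = [random.gauss(0, 1) for _ in range(1000)]
ys = [x + random.gauss(0, 0.3) for x in xs]   # two strongly correlated measures
lam, (vx, vy) = pca_2d(xs, ys)
print(lam, (vx, vy))  # loading vector roughly (0.7, 0.7): one shared factor
```

Both variables load almost equally on the first component, which is the dimension-reduction story in miniature: two observed measures collapse onto one underlying factor.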
The case-control approach is a powerful method for investigating factors that may explain a particular event. It is extensively used in epidemiology to study disease incidence, one of the best-known examples being Bradford Hill and Doll's investigation of the possible connection between cigarette smoking and lung cancer. More recently, case-control studies have been increasingly used in other fields, including sociology and econometrics. With a particular focus on statistical analysis, this book is ideal for applied and theoretical statisticians wanting an up-to-date introduction to the field. It covers the fundamentals of case-control study design and analysis as well as more recent developments, including two-stage studies, case-only studies and methods for case-control sampling in time. The latter have important applications in large prospective cohorts which require case-control sampling designs to make efficient use of resources. More theoretical background is provided in an appendix for those new to the field.
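The basic case-control analysis estimates an odds ratio from a 2x2 exposure table, since the study design samples on outcome status rather than exposure. A minimal Python sketch with Woolf's approximate confidence interval; the counts are hypothetical and not taken from the smoking studies mentioned above:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 case-control table, with a 95% CI by
    Woolf's method. a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    orr = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(orr) - 1.96 * se)
    hi = math.exp(math.log(orr) + 1.96 * se)
    return orr, lo, hi

# Hypothetical counts: exposure among 150 cases and 150 controls
orr, lo, hi = odds_ratio(a=120, b=30, c=80, d=70)
print(orr, lo, hi)  # OR = 3.5, 95% CI roughly (2.1, 5.8)
```

Because the interval excludes 1, these illustrative data would suggest a genuine association between exposure and case status; the book's more advanced designs (two-stage and case-only studies) refine exactly this kind of estimate.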
A Handbook of Statistical Analyses Using SPSS clearly describes how to conduct a range of univariate and multivariate statistical analyses using the latest version of the Statistical Package for the Social Sciences, SPSS 11. Each chapter addresses a different type of analytical procedure applied to one or more data sets, primarily from the social and behavioral sciences areas. Each chapter also contains exercises relating to the data sets introduced, providing readers with a means to develop both their SPSS and statistical skills. Model answers to the exercises are also provided. Readers can download all of the data sets from a companion Web site furnished by the authors.
Explores the origin of the recent banking crisis and how to preclude future crises.
Shedding new light on the recent worldwide banking debacle, The Banking Crisis Handbook presents possible remedies as to what should have been done prior to, during, and after the crisis. With contributions from well-known academics and professionals, the book contains exclusive, new research that will undoubtedly assist bank executives, risk management departments, and other financial professionals to attain a clear picture of the banking crisis and prevent future banking collapses. The first part of the book explains how the crisis originated. It discusses the role of subprime mortgages, shadow banks, ineffective risk management, poor financial regulations, and hedge funds in causing the collapse of financial systems. The second section examines how the crisis affected the global market as well as individual countries and regions, such as Asia and Greece. In the final part, the book explores short- and long-term solutions, including government intervention, financial regulations, efficient bank default risk approaches, and methods to evaluate credit risk. It also looks at when government intervention in financial markets can be ethically justified.