Predicting foreign exchange rates has presented a long-standing challenge for economists. However, recent advances in computational techniques, statistical methods, and newer datasets on emerging-market currencies offer some hope. While we are still unable to beat a driftless random walk model, there has been serious progress in the field. This book provides an in-depth assessment of the use of novel statistical approaches and machine learning tools in predicting foreign exchange rate movements. First, it offers a historical account of how exchange rate regimes have evolved over time, which is critical to understanding turning points in a historical time series. It then presents an overview of previous attempts at modeling exchange rates and how different methods fared. In the core sections of the book, the author examines the time series characteristics of exchange rates and how contemporary statistics and machine learning can improve predictive power compared to earlier methods. Exchange rate determination is an active research area, and this book will appeal to graduate-level students of international economics, international finance, open economy macroeconomics, and management. The book is written in a clear, engaging, and straightforward way, and will greatly improve access to this much-needed knowledge in the field.
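The driftless random walk benchmark mentioned in this blurb is easy to state concretely: the forecast of tomorrow's exchange rate is simply today's rate. A minimal sketch follows, using simulated data (the series, parameters, and variable names are illustrative, not taken from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a log exchange rate as a driftless random walk:
# s_t = s_{t-1} + eps_t  (illustrative data, not a real currency series)
eps = rng.normal(scale=0.005, size=500)
log_rate = np.cumsum(eps)

# The driftless random walk forecast of the next rate is today's rate.
rw_forecast = log_rate[:-1]   # forecast for t+1 is the value at t
actual = log_rate[1:]

rmse = np.sqrt(np.mean((actual - rw_forecast) ** 2))
# A candidate model must beat this out-of-sample RMSE to claim predictive power.
```

Any proposed forecasting model is then judged against this baseline RMSE, which is the hurdle the blurb alludes to.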
For one-semester courses in Introduction to Business Statistics. The gold standard in learning Microsoft Excel for business statistics. Statistics for Managers Using Microsoft (R) Excel (R), 9th Edition, Global Edition helps students develop the knowledge of Excel needed in future careers. The authors present statistics in the context of specific business fields, and now include a full chapter on business analytics. Guided by principles set forth by the ASA's Guidelines for Assessment and Instruction in Statistics Education (GAISE) reports and the authors' diverse teaching experiences, the text continues to innovate and improve the way this course is taught to students. Current data throughout gives students valuable practice analysing the types of data they will see in their professions, and the authors' friendly writing style includes tips and learning aids throughout.
In The Online Customer, Yinghui Yang details how data mining and marketing approaches can be used to study marketing problems. The book uses a vast dataset of web transactions from the largest internet retailers, including Amazon.com. In particular, she deftly shows how to integrate and compare statistical methods from marketing and data mining research. The book comprises two parts. The first part focuses on using behavior patterns for customer segmentation. It advances data mining theory by presenting a novel pattern-based clustering approach to customer segmentation and valuation. The second part of the book explores how free shipping impacts purchase behavior online. It illuminates the importance of shipping policies in a competitive setting. With complete documentation and methodology, this book is a valuable reference that business and Internet Studies scholars can build upon.
Praise for the first edition: [This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework. -Statistics in Medicine What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters. -MAA Reviews Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models. 
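The recursive filtering this blurb highlights can be illustrated with the simplest state-space case: a Kalman filter for the local level (random walk plus noise) model. The book itself works in R with the TSSS package; the sketch below only mirrors the idea in Python, with illustrative variance parameters and a diffuse initialization:

```python
import numpy as np

def local_level_kalman(y, sigma_state=1.0, sigma_obs=1.0):
    """Kalman filter for x_t = x_{t-1} + w_t,  y_t = x_t + v_t."""
    n = len(y)
    x = 0.0    # filtered state mean
    p = 1e6    # filtered state variance (diffuse initialization)
    means = np.empty(n)
    for t in range(n):
        # Prediction step: random walk state transition adds state noise.
        p_pred = p + sigma_state ** 2
        # Update step: weigh the new observation by the Kalman gain.
        gain = p_pred / (p_pred + sigma_obs ** 2)
        x = x + gain * (y[t] - x)
        p = (1.0 - gain) * p_pred
        means[t] = x
    return means

# Illustrative data: a true random-walk level observed with noise.
rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(size=200))
y = level + rng.normal(size=200)
filtered = local_level_kalman(y)
```

The filtered estimates track the unobserved level more closely than the raw observations do, which is the core payoff of the state-space approach the book develops.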
About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
Doing Statistical Analysis looks at three kinds of statistical research questions - descriptive, associational, and inferential - and shows students how to conduct statistical analyses and interpret the results. Keeping equations to a minimum, it uses a conversational style and relatable examples such as football, COVID-19, and tourism, to aid understanding. Each chapter contains practice exercises, and a section showing students how to reproduce the statistical results in the book using Stata and SPSS. Digital supplements consist of data sets in Stata, SPSS, and Excel, and a test bank for instructors. Its accessible approach means this is the ideal textbook for undergraduate students across the social and behavioral sciences needing to build their confidence with statistical analysis.
"Econometrics textbooks see their subject as a set of techniques; Magnus and Morgan see it as a set of practices. A combination of controlled experiment and anthropology of science, Methodology and Tacit Knowledge gives a rare inside view of how econometricians work, why econometrics is an art and not a set of simple recipes, and why, like all artists, econometricians differ in their techniques and finished works. This is economic methodology at its best." Kevin Hoover, University of California, Davis "The tacit knowledge experiment was a highly commendable initiative. Its exploration of the theme of how knowledge is acquired and used in applied econometrics is unique and produced some fascinating insights into this process." Adrian Pagan, Australian National University "It is rare, perhaps unique, to find leading empirical economists face the prospect of modelling the same phenomena, with the same data within the same limited time frame. A valuable and illuminating experiment in comparative research methodologies, made all the more provocative when compared to the excellent original study by Tobin." Richard Blundell, University College London This book will be of considerable interest to economists and to econometricians concerned about the methodology of their own discipline, and will provide valuable material for researchers in science studies and for teachers of econometrics.
The growth rate of national income has fluctuated widely in the United States since 1929. In this volume, Edward F. Denison uses the growth accounting methodology he pioneered and refined in earlier studies to track changes in the trend of output and its determinants. At every step he systematically distinguishes changes in the economy's ability to produce, as measured by his series on potential national income, from changes in the ratio of actual output to potential output. Using data for earlier years as a backdrop, Denison focuses on the dramatic decline in the growth of potential national income that started in 1974 and was further accentuated beginning in 1980, and on the pronounced decline from business cycle to business cycle in the average ratio of actual to potential output, a slide under way since 1969. The decline in growth rates has been especially pronounced in national income per person employed and other productivity measures, as growth of total output has slowed despite a sharp acceleration in growth of employment and total hours at work. Denison organizes his discussion around eight tables that divide 1929-82 into three long periods (the last, 1973-82) and seven shorter periods (the most recent, 1973-79 and 1979-82). These tables provide estimates of the sources of growth for eight output measures in each period. Denison stresses that the 1973-82 period of slow growth is unfinished. He observes no improvement in the productivity trend, only a weak cyclical recovery from a 1982 low. Sources-of-growth tables divide the contributions to growth between "input" and "output per unit of input." Even so, it is not possible to quantify separately the contribution of all determinants, and Denison evaluates qualitatively the effects of other developments on the productivity slowdown.
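The growth-accounting split between "input" and "output per unit of input" is an identity, which a stylized calculation makes explicit (the numbers below are hypothetical, not Denison's estimates):

```python
# Stylized growth-accounting identity (hypothetical numbers, not Denison's):
# growth of output = contribution of total input + growth of output per unit of input.
output_growth = 2.9        # percent per year, hypothetical
input_contribution = 1.8   # weighted growth of labor, capital, etc., hypothetical
residual = output_growth - input_contribution  # "output per unit of input"
```

Everything not attributable to measured inputs lands in the residual, which is why Denison must assess some determinants qualitatively rather than quantify each one separately.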
The essays in this special volume survey some of the most recent advances in the global analysis of dynamic models for economics, finance and the social sciences. They deal in particular with a range of topics from mathematical methods as well as numerous applications including recent developments on asset pricing, heterogeneous beliefs, global bifurcations in complementarity games, international subsidy games and issues in economic geography. A number of stochastic dynamic models are also analysed. The book is a collection of essays in honour of the 60th birthday of Laura Gardini.
Ranking of Multivariate Populations: A Permutation Approach with Applications presents a novel permutation-based nonparametric approach for ranking several multivariate populations. Using data collected from both experimental and observational studies, it covers some of the most useful designs widely applied in research and industry investigations, such as multivariate analysis of variance (MANOVA) and multivariate randomized complete block (MRCB) designs. The first section of the book introduces the topic of ranking multivariate populations by presenting the main theoretical ideas and an in-depth literature review. The second section discusses a large number of real case studies from four specific research areas: new product development in industry, perceived quality of the indoor environment, customer satisfaction, and cytological and histological analysis by image processing. A web-based nonparametric combination global ranking software is also described. Designed for practitioners and postgraduate students in statistics and the applied sciences, this application-oriented book offers a practical guide to the reliable global ranking of multivariate items, such as products, processes, and services, in terms of the performance of all investigated products/prototypes.
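The permutation logic underlying the book's ranking methodology can be sketched in its simplest two-sample, univariate form (the book's nonparametric combination approach is far more general; everything here, including the data and function name, is illustrative):

```python
import numpy as np

def permutation_pvalue(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test on the difference in means."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        # Under the null, group labels are exchangeable: reshuffle and recompute.
        rng.shuffle(pooled)
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(42)
a = rng.normal(loc=1.0, size=40)   # "population" with the higher mean
b = rng.normal(loc=0.0, size=40)
p = permutation_pvalue(a, b)
```

A small p-value supports ranking the first population above the second on this variable; the book's methodology combines many such component tests across variables into a global ranking.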
This book addresses the functioning of financial markets, in particular the financial market model and its modelling. More specifically, the book provides a model of adaptive preference in the financial market, rather than a model of the adaptive financial market, which is mostly based on Popper's objective propensity for the singular, i.e., unrepeatable, event. As a result, the concept of preference, following Simon's theory of satisficing, is developed in a logical way with the goal of supplying a foundation for a robust theory of adaptive preference in financial market behavior. The book offers new insights into financial market logic and psychology: 1) advocating for the priority of behavior over information, in opposition to traditional financial market theories; 2) constructing the processes of (co)evolution between adaptive preference and the financial market using the concept of fetal reaction norms; 3) presenting a new typology of information in the financial market, aimed at proving point (1) above, as well as building an explanatory mechanism for the evolutionary nature and behavior of the (real) financial market; 4) presenting sufficient, and necessary, principles or assumptions for developing a theory of adaptive preference in the financial market; and 5) proposing a new interpretation of the genotype-phenotype pair in the financial market model. The book's distinguishing feature is its research method, which is mainly logical rather than historical or empirical. As a result, the book is targeted at generating debate about the best and most scientifically beneficial method of approaching, analyzing, and modelling financial markets.
This book addresses one of the most important research activities in empirical macroeconomics. It provides a course of advanced but intuitive methods and tools enabling the spatial and temporal disaggregation of basic macroeconomic variables and the assessment of the statistical uncertainty of the outcomes of disaggregation. The empirical analysis focuses mainly on GDP and its growth in the context of Poland. However, all of the methods discussed can be easily applied to other countries. The approach used in the book views spatial and temporal disaggregation as a special case of the estimation of missing observations (a topic in missing-data analysis). The book presents an econometric course on models of Seemingly Unrelated Regression Equations (SURE). The main advantage of the SURE specification in tackling this research problem is that it allows for heterogeneity of the parameters describing relations between macroeconomic indicators. The book contains model specifications, as well as descriptions of stochastic assumptions and the resulting procedures of estimation and testing. The method also addresses uncertainty in the estimates produced. All of the necessary tests and assumptions are presented in detail. The results are designed to serve as a source of invaluable information, making regional analyses more convenient and, more importantly, comparable. They will create a solid basis for conclusions and recommendations concerning regional economic policy in Poland, particularly regarding the assessment of the economic situation. This is essential reading for academics, researchers, and economists with regional analysis as their field of expertise, as well as central bankers and policymakers.
The Who, What, and Where of America is designed to provide a sampling of key demographic information. It covers the United States, every state, each metropolitan statistical area, and all the counties and cities with a population of 20,000 or more. Who: Age, Race and Ethnicity, and Household Structure What: Education, Employment, and Income Where: Migration, Housing, and Transportation Each part is preceded by highlights and ranking tables that show how areas diverge from the national norm. These research aids are invaluable for understanding data from the ACS and for highlighting what it tells us about who we are, what we do, and where we live. Each topic is divided into four tables revealing the results of the data collected from different types of geographic areas in the United States, generally with populations greater than 20,000. Table A. States Table B. Counties Table C. Metropolitan Areas Table D. Cities In this edition, you will find social and economic estimates on the ways American communities are changing with regard to the following: Age and race Health care coverage Marital history Educational attainment Income and occupation Commute time to work Employment status Home values and monthly costs Veteran status Size of home or rental unit This title is the latest in the County and City Extra Series of publications from Bernan Press. Other titles include County and City Extra, County and City Extra: Special Decennial Census Edition, and Places, Towns, and Townships.
It is well-known that modern stochastic calculus has been exhaustively developed under usual conditions. Despite such a well-developed theory, there is evidence to suggest that these very convenient technical conditions cannot necessarily be fulfilled in real-world applications. Optional Processes: Theory and Applications seeks to delve into the existing theory, new developments and applications of optional processes on "unusual" probability spaces. The development of stochastic calculus of optional processes marks the beginning of a new and more general form of stochastic analysis. This book aims to provide an accessible, comprehensive and up-to-date exposition of optional processes and their numerous properties. Furthermore, the book presents not only current theory of optional processes, but it also contains a spectrum of applications to stochastic differential equations, filtering theory and mathematical finance. Features Suitable for graduate students and researchers in mathematical finance, actuarial science, applied mathematics and related areas Compiles almost all essential results on the calculus of optional processes in unusual probability spaces Contains many advanced analytical results for stochastic differential equations and statistics pertaining to the calculus of optional processes Develops new methods in finance based on optional processes such as a new portfolio theory, defaultable claim pricing mechanism, etc.
This book brings together studies using different data types (panel data, cross-sectional data, and time series data) and different methods (for example, panel regression, nonlinear time series, the chaos approach, deep learning, and machine learning techniques, among others) to create a source for those interested in these topics and methods, addressing selected applied econometrics topics that have been developed in recent years. It also creates a common meeting ground for scholars who teach econometrics in Turkey and helps convey the authors' knowledge to interested readers. This book can also serve as source material for "Applied Economics and Econometrics" courses in postgraduate education.
Today econometrics is widely applied in the empirical study of economics. As an empirical science, econometrics uses rigorous mathematical and statistical methods for economic problems. Understanding the methodologies of both econometrics and statistics is a crucial point of departure for econometrics. The primary focus of this book is to provide an understanding of the statistical properties behind econometric methods. Following the introduction in Chapter 1, Chapter 2 provides a methodological review of both econometrics and statistics in different periods since the 1930s. Chapters 3 and 4 explain the underlying theoretical methodologies for estimated equations in the simple regression and multiple regression models and discuss the debates about p-values in particular. This part of the book offers the reader a richer understanding of the methods of statistics behind the methodology of econometrics. Chapters 5-9 focus on regression models using time series data, traditional causal econometric models, and the latest statistical techniques. By concentrating on dynamic structural linear models such as state-space models and the Bayesian approach, the book alludes to the fact that this methodological study is not only a science but also an art. This work serves as a handy reference for anyone interested in econometrics, and is particularly relevant to students and academic and business researchers in all quantitative analysis fields.
The volume contains articles that should appeal to readers with computational, modeling, theoretical, and applied interests. Methodological issues include parallel computation, Hamiltonian Monte Carlo, dynamic model selection, small sample comparison of structural models, Bayesian thresholding methods in hierarchical graphical models, adaptive reversible jump MCMC, LASSO estimators, parameter expansion algorithms, the implementation of parameter and non-parameter-based approaches to variable selection, a survey of key results in objective Bayesian model selection methodology, and a careful look at the modeling of endogeneity in discrete data settings. Important contemporary questions are examined in applications in macroeconomics, finance, banking, labor economics, industrial organization, and transportation, among others, in which model uncertainty is a central consideration.
Using data from the World Values Survey, this book sheds light on the link between happiness and the social group to which one belongs. The work is based on a rigorous statistical analysis of differences in the probability of happiness and life satisfaction between the predominant social group and subordinate groups. The cases of India and South Africa receive deep attention in dedicated chapters on caste and race, with other chapters considering issues such as cultural bias, religion, patriarchy, and gender. An additional chapter offers a global perspective. On top of this, the longitudinal nature of the data facilitates an examination of how world happiness has evolved between 1994 and 2014. This book will be a valuable reference for advanced students, scholars and policymakers involved in development economics, well-being, development geography, and sociology.
These essays honor Professor Peter C.B. Phillips of Yale University and his many contributions to the field of econometrics. Professor Phillips's research spans many topics in econometrics, including: non-stationary time series and panel models; partial identification and weak instruments; Bayesian model evaluation and prediction; financial econometrics; and finite-sample statistical methods and results. The papers in this volume reflect additions to and amplifications of many of Professor Phillips' research contributions. Some of the topics discussed in the volume include panel macro-econometric modeling, efficient estimation and inference in difference-in-difference models, limiting and empirical distributions of IV estimates when some of the instruments are endogenous, the use of stochastic dominance techniques to examine conditional wage distributions of incumbents and newly hired employees, long-horizon predictive tests in financial markets, new developments in information matrix testing, testing for co-integration in Markov switching error correction models, and deviation information criteria for comparing vector autoregressive models.
The interaction between mathematicians and statisticians has proved to be an effective approach to the analysis of insurance and financial problems, particularly from an operational perspective. The Maf2006 conference, held at the University of Salerno in 2006, had precisely this purpose, and the collection published here gathers some of the papers presented at the conference and subsequently revised to this aim. They cover a wide variety of subjects in the insurance and financial fields.
With the rapidly advancing fields of Data Analytics and Computational Statistics, it's important to keep up with current trends, methodologies, and applications. This book investigates the role of data mining in computational statistics for machine learning. It offers applications that can be used in various domains and examines the role of transformation functions in optimizing problem statements. Data Analytics, Computational Statistics, and Operations Research for Engineers: Methodologies and Applications presents applications of computationally intensive methods, inference techniques, and survival analysis models. It discusses how data mining extracts information and how machine learning improves the computational model based on the new information. Those interested in this reference work will include students, professionals, and researchers working in the areas of data mining, computational statistics, operations research, and machine learning.
This 30th volume of the International Symposia in Economic Theory and Econometrics explores the latest social and financial developments across Asian markets. Chapters cover a range of topics such as the impact of COVID-19-related events in Southeast Asia, along with the determinants of capital structure before and during the pandemic; the influence of new distribution concepts on the macro and micro economic levels; as well as the effects of long-term cross-currency basis swaps on government bonds. These peer-reviewed papers touch on a variety of timely, interdisciplinary subjects such as real earnings impact and the effects of public policy. Together, Quantitative Analysis of Social and Financial Market Development is a crucial resource of current, cutting-edge research for any scholar of international finance and economics.
Develop the analytical skills that are in high demand in businesses today with Camm/Cochran/Fry/Ohlmann's best-selling BUSINESS ANALYTICS, 4E. You master the full range of analytics as you strengthen descriptive, predictive and prescriptive analytic skills. Real examples and memorable visuals illustrate data and results for each topic. Step-by-step instructions guide you through using Microsoft (R) Excel, Tableau, R, and JMP Pro software to perform even advanced analytics concepts. Practical, relevant problems at all levels of difficulty further help you apply what you've learned. This edition assists you in becoming proficient in topics beyond the traditional quantitative concepts, such as data visualization and data mining, which are increasingly important in today's analytical problem solving. MindTap digital learning resources with an interactive eBook, algorithmic practice problems with solutions and Exploring Analytics visualizations strengthen your understanding of key concepts.
Now in its third edition, Essential Econometric Techniques: A Guide to Concepts and Applications is a concise, student-friendly textbook which provides an introductory grounding in econometrics, with an emphasis on the proper application and interpretation of results. Drawing on the author's extensive teaching experience, this book offers intuitive explanations of concepts such as heteroskedasticity and serial correlation, and provides step-by-step overviews of each key topic. This new edition contains more applications, brings in new material including a dedicated chapter on panel data techniques, and moves the theoretical proofs to appendices. After Chapter 7, students will be able to design and conduct rudimentary econometric research. The next chapters cover multicollinearity, heteroskedasticity, and autocorrelation, followed by techniques for time-series analysis and panel data. Excel data sets for the end-of-chapter problems are available as a digital supplement. A solutions manual is also available for instructors, as well as PowerPoint slides for each chapter. Essential Econometric Techniques shows students how economic hypotheses can be questioned and tested using real-world data, and is the ideal supplementary text for all introductory econometrics courses.
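Heteroskedasticity, one of the concepts this textbook explains intuitively, can be detected with a Breusch-Pagan-style auxiliary regression. The sketch below uses only NumPy on simulated data (all variable names and parameter values are illustrative, not from the book):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.uniform(1.0, 5.0, size=n)
# Error variance grows with x: a textbook case of heteroskedasticity.
y = 2.0 + 0.5 * x + rng.normal(scale=x, size=n)

# OLS of y on [1, x].
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Breusch-Pagan-style auxiliary regression of squared residuals on [1, x];
# the statistic n * R^2 is asymptotically chi-squared with 1 degree of freedom.
u2 = resid ** 2
gamma, *_ = np.linalg.lstsq(X, u2, rcond=None)
fitted = X @ gamma
r2 = 1.0 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
bp_stat = n * r2   # compare with the chi-squared(1) 5% critical value, 3.84
```

A statistic far above 3.84 rejects homoskedasticity, signalling that the usual OLS standard errors are unreliable, which is the practical lesson such a chapter drives home.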