The proliferation of financial derivatives over the past decades, options in particular, has underscored the increasing importance of derivative pricing literacy among students, researchers, and practitioners. Derivative Pricing: A Problem-Based Primer demystifies essential derivative pricing theory by adopting a mathematically rigorous yet widely accessible pedagogical approach that will appeal to a wide variety of audiences. Abandoning both the traditional "black-box" approach and the theorist's "pedantic" approach, this textbook provides readers with a solid understanding of the fundamental mechanics of derivative pricing methodologies and their underlying theory through a diversity of illustrative examples. The abundance of exercises and problems makes the book well suited as a text for advanced undergraduates and beginning graduate students, as well as a reference for professionals and researchers who need a thorough understanding of not only "how" but also "why" derivative pricing works. It is especially well suited for students preparing for the derivatives portion of the Society of Actuaries Investment and Financial Markets Exam. Features:
* Lucid explanations of the theory and assumptions behind various derivative pricing models.
* Emphasis on intuition and mnemonics, as well as common fallacies.
* Illustrative examples and end-of-chapter problems interspersed throughout to aid a deep understanding of concepts in derivative pricing.
* Mathematical derivations that, while not eschewed, are made maximally accessible.
* A solutions manual available for qualified instructors.
The Author: Ambrose Lo is currently Assistant Professor of Actuarial Science in the Department of Statistics and Actuarial Science at the University of Iowa. He received his Ph.D. in Actuarial Science from the University of Hong Kong in 2014; his research interests include dependence structures, risk measures, and optimal reinsurance. He is a Fellow of the Society of Actuaries (FSA) and a Chartered Enterprise Risk Analyst (CERA). His research papers have been published in top-tier actuarial journals such as ASTIN Bulletin: The Journal of the International Actuarial Association, Insurance: Mathematics and Economics, and Scandinavian Actuarial Journal.
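To give a flavour of the pricing formulas such a primer builds toward, here is a minimal sketch (not taken from the book) of the Black-Scholes price of a European call in R; all parameter values are illustrative.

```r
# Black-Scholes price of a European call (illustrative parameter values only)
bs_call <- function(S, K, r, sigma, tau, q = 0) {
  d1 <- (log(S / K) + (r - q + 0.5 * sigma^2) * tau) / (sigma * sqrt(tau))
  d2 <- d1 - sigma * sqrt(tau)
  S * exp(-q * tau) * pnorm(d1) - K * exp(-r * tau) * pnorm(d2)
}
bs_call(S = 100, K = 105, r = 0.03, sigma = 0.2, tau = 0.5)
```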
The oil and gas industry applies a range of techniques for assessing and mitigating the risks inherent in its operations. In this context, the application of Bayesian Networks (BNs) to risk assessment offers a probabilistic approach to causal reasoning. Introducing the probabilistic nature of hazards, conditional probability, and Bayesian thinking, the book discusses how the causes and effects of process hazards can be modelled using BNs and how large BNs can be developed from basic building blocks. The focus is on developing BNs for typical equipment in the industry, including accident case studies, and on using BNs alongside other conventional risk assessment methods. Aimed at professionals in the oil and gas industry, safety engineering, and risk assessment, this book:
* Brings together the basics of Bayesian theory and Bayesian Networks and applies them to process safety hazards and risk assessment in the oil and gas industry
* Presents the sequence of steps for setting up a model, populating it with data, and simulating it for practical cases in a systematic manner
* Includes a comprehensive list of sources of failure data and tips on modelling and simulation of large and complex networks
* Presents modelling and simulation of loss of containment for actual equipment in the oil and gas industry, such as separators, storage tanks, pipelines, and compressors, together with the associated risk assessments
* Discusses case studies that demonstrate the practicability of using Bayesian Networks in routine risk assessments
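As a minimal sketch of the conditional-probability updating that sits at the heart of any Bayesian Network node (not code from the book), the following R snippet applies Bayes' theorem to a hypothetical leak/alarm scenario with made-up probabilities.

```r
# Posterior probability of a leak given an alarm, via Bayes' theorem
# (all probabilities are hypothetical, for illustration only)
p_leak        <- 0.01                          # prior P(leak)
p_alarm_leak  <- 0.95                          # P(alarm | leak)
p_alarm_clear <- 0.05                          # P(alarm | no leak), false-alarm rate

p_alarm <- p_alarm_leak * p_leak + p_alarm_clear * (1 - p_leak)
p_leak_given_alarm <- p_alarm_leak * p_leak / p_alarm
p_leak_given_alarm                             # about 0.16
```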
This book is about learning from data using the Generalized Additive Models for Location, Scale and Shape (GAMLSS). GAMLSS extends the Generalized Linear Models (GLMs) and Generalized Additive Models (GAMs) to accommodate large complex datasets, which are increasingly prevalent. In particular, the GAMLSS statistical framework enables flexible regression and smoothing models to be fitted to the data. The GAMLSS model assumes that the response variable has any parametric (continuous, discrete or mixed) distribution which might be heavy- or light-tailed, and positively or negatively skewed. In addition, all the parameters of the distribution (location, scale, shape) can be modelled as linear or smooth functions of explanatory variables. Key Features: Provides a broad overview of flexible regression and smoothing techniques to learn from data whilst also focusing on the practical application of methodology using GAMLSS software in R. Includes a comprehensive collection of real data examples, which reflect the range of problems addressed by GAMLSS models and provide a practical illustration of the process of using flexible GAMLSS models for statistical learning. R code integrated into the text for ease of understanding and replication. Supplemented by a website with code, data and extra materials. This book aims to help readers understand how to learn from data encountered in many fields. It will be useful for practitioners and researchers who wish to understand and use the GAMLSS models to learn from data and also for students who wish to learn GAMLSS through practical examples.
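A minimal sketch of the kind of model the book develops, using the gamlss package on simulated data; the pb() smoother and the gamma (GA) family are illustrative choices, and exact arguments may vary across package versions.

```r
library(gamlss)   # CRAN package accompanying the GAMLSS framework

set.seed(1)
n <- 500
x <- runif(n)
# response whose mean and spread both change with x
y <- rgamma(n, shape = 2 + 4 * x, rate = 1)
d <- data.frame(x = x, y = y)

# model both mu and sigma as smooth functions of x (pb = penalised B-splines)
fit <- gamlss(y ~ pb(x), sigma.formula = ~ pb(x), family = GA, data = d)
summary(fit)
```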
Computational finance is increasingly important in the financial industry as a necessary instrument for applying theoretical models to real-world challenges. Indeed, many models used in practice involve complex mathematical problems for which an exact or closed-form solution is not available. Consequently, we need to rely on computational techniques and specific numerical algorithms. This book combines theoretical concepts with practical implementation. Furthermore, the numerical solution of models is exploited both to enhance the understanding of some mathematical and statistical notions and to develop sound programming skills in MATLAB (R), much of which carries over to other programming languages. The material assumes only limited prior knowledge of mathematics, probability, and statistics; hence, the book contains a short description of the fundamental tools needed to address the two main fields of quantitative finance: portfolio selection and derivatives pricing. Both fields are developed here, with a particular emphasis on portfolio selection, for which the author includes an overview of recent approaches. The book gradually takes the reader from a basic to a medium level of expertise, using examples and exercises to simplify the understanding of complex models in finance and giving readers the ability to place financial models in a computational setting. The book is ideal for courses focusing on quantitative finance, asset management, mathematical methods for economics and finance, investment banking, and corporate finance.
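The book works in MATLAB; purely as a hedged illustration of the kind of numerical exercise it describes, here is a rough R analogue that prices a European call by Monte Carlo under geometric Brownian motion, with made-up parameters.

```r
# Monte Carlo price of a European call under geometric Brownian motion
# (an R analogue of the kind of exercise the book works through in MATLAB;
#  all parameter values are illustrative)
set.seed(42)
S0 <- 100; K <- 105; r <- 0.03; sigma <- 0.2; tau <- 0.5
n  <- 1e5
ST <- S0 * exp((r - 0.5 * sigma^2) * tau + sigma * sqrt(tau) * rnorm(n))
payoff <- pmax(ST - K, 0)
price  <- exp(-r * tau) * mean(payoff)
c(price = price, std_error = exp(-r * tau) * sd(payoff) / sqrt(n))
```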
Praise for the first edition: [This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework. -Statistics in Medicine What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters. -MAA Reviews Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models. About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
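As a hand-rolled sketch of the recursive estimation the book automates with its TSSS package (which is not used here), the following R code filters a simulated series with a Kalman filter for the local-level model, assuming the noise variances are known.

```r
# Kalman filter for a local-level (random-walk-plus-noise) state-space model:
#   x_t = x_{t-1} + w_t,  w_t ~ N(0, q)     (state)
#   y_t = x_t + v_t,      v_t ~ N(0, r)     (observation)
# A hand-rolled sketch; the book's TSSS package provides ready-made tools.
kalman_local_level <- function(y, q, r, x0 = y[1], P0 = 1e4) {
  n <- length(y)
  xf <- numeric(n); Pf <- numeric(n)
  x <- x0; P <- P0
  for (t in seq_len(n)) {
    P <- P + q                       # predict
    K <- P / (P + r)                 # Kalman gain
    x <- x + K * (y[t] - x)          # update state estimate
    P <- (1 - K) * P                 # update state variance
    xf[t] <- x; Pf[t] <- P
  }
  list(filtered = xf, variance = Pf)
}

set.seed(7)
y <- cumsum(rnorm(200, sd = 0.3)) + rnorm(200, sd = 1)   # simulated series
out <- kalman_local_level(y, q = 0.09, r = 1)
plot(y, type = "l", col = "grey"); lines(out$filtered, col = "blue")
```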
Model a Wide Range of Count Time Series
Handbook of Discrete-Valued Time Series presents state-of-the-art methods for modeling time series of counts and incorporates frequentist and Bayesian approaches for discrete-valued spatio-temporal data and multivariate data. While the book focuses on time series of counts, some of the techniques discussed can be applied to other types of discrete-valued time series, such as binary-valued or categorical time series.
Explore a Balanced Treatment of Frequentist and Bayesian Perspectives
Accessible to graduate-level students who have taken an elementary class in statistical time series analysis, the book begins with the history and current methods for modeling and analyzing univariate count series. It next discusses diagnostics and applications before proceeding to binary and categorical time series. The book then provides a guide to modern methods for discrete-valued spatio-temporal data, illustrating how far modern applications have evolved from their roots. The book ends with a focus on multivariate and long-memory count series.
Get Guidance from Masters in the Field
Written by a cohesive group of distinguished contributors, this handbook provides a unified account of the diverse techniques available for observation- and parameter-driven models. It covers likelihood and approximate likelihood methods, estimating equations, simulation methods, and a Bayesian approach for model fitting.
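A small, hand-rolled illustration (not drawn from the handbook) of one observation-driven count model, a Poisson INGARCH(1,1)-type recursion, simulated in R with made-up parameters.

```r
# Simulate an observation-driven count series (Poisson INGARCH(1,1)-type):
#   y_t | past ~ Poisson(lambda_t),
#   lambda_t = omega + alpha * y_{t-1} + beta * lambda_{t-1}
# A hand-rolled illustration, not code from the handbook.
set.seed(123)
n <- 300
omega <- 1; alpha <- 0.3; beta <- 0.4           # illustrative parameters
y <- numeric(n); lambda <- numeric(n)
lambda[1] <- omega / (1 - alpha - beta)          # start at the stationary mean
y[1] <- rpois(1, lambda[1])
for (t in 2:n) {
  lambda[t] <- omega + alpha * y[t - 1] + beta * lambda[t - 1]
  y[t] <- rpois(1, lambda[t])
}
plot(y, type = "h", main = "Simulated Poisson INGARCH(1,1) counts")
```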
Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational material for the remaining chapters, which cover the construction of structural models and the extension of vector autoregressive modeling to high frequency, continuously recorded, and irregularly sampled series. The final chapter combines these approaches with spectral methods for identifying causal dependence between time series. Web Resource: A supplementary website provides the data sets used in the examples as well as documented MATLAB (R) functions and other code for analyzing the examples and producing the illustrations. The site also offers technical details on the estimation theory and methods and the implementation of the models.
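As a rough sketch of the vector autoregressive modelling the opening chapters cover, the following base-R code fits a VAR to two simulated series and inspects their smoothed coherency; it is illustrative only and does not use the book's MATLAB material.

```r
# Fit a vector autoregression (VAR) to two dependent series with base R's ar().
# Simulated data; in practice you would supply your own multivariate series.
set.seed(11)
n <- 400
x <- matrix(0, n, 2)
A <- matrix(c(0.6, 0.2,
              0.1, 0.5), 2, 2, byrow = TRUE)    # illustrative VAR(1) coefficients
for (t in 2:n) x[t, ] <- A %*% x[t - 1, ] + rnorm(2, sd = 0.5)
colnames(x) <- c("series1", "series2")

fit <- ar(x, order.max = 5)     # Yule-Walker fit, lag order chosen by AIC
fit$order                       # selected lag order
fit$ar[1, , ]                   # estimated coefficient matrix at lag 1

# spectral view of the same data: smoothed periodogram and squared coherency
sp <- spectrum(ts(x), spans = c(7, 7), plot = FALSE)
plot(sp, plot.type = "coherency")
```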
High-Performance Computing for Big Data: Methodologies and Applications explores emerging high-performance architectures for data-intensive applications, novel efficient analytical strategies to boost data processing, and cutting-edge applications in diverse fields such as machine learning, life science, neural networks, and neuromorphic engineering. The book is organized into two main sections. The first section covers Big Data architectures, including cloud computing systems and heterogeneous accelerators. It also covers emerging 3D IC design principles for memory architectures and devices. The second section illustrates emerging and practical applications of Big Data across several domains, including bioinformatics, deep learning, and neuromorphic engineering. Features:
* Covers a wide range of Big Data architectures, including distributed systems such as Hadoop/Spark
* Includes accelerator-based approaches for Big Data applications, such as GPU-based acceleration techniques and hardware acceleration with FPGAs/CGRAs/ASICs
* Presents emerging memory architectures and devices, such as NVM and STT-RAM, and 3D IC design principles
* Describes advanced algorithms for different Big Data application domains
* Illustrates novel analytics techniques for Big Data applications, along with scheduling, mapping, and partitioning methodologies
Featuring contributions from leading experts, this book presents state-of-the-art research on the methodologies and applications of high-performance computing for Big Data applications. About the Editor: Dr. Chao Wang is an Associate Professor in the School of Computer Science at the University of Science and Technology of China. He is an Associate Editor of ACM Transactions on Design Automation of Electronic Systems (TODAES), Applied Soft Computing, Microprocessors and Microsystems, IET Computers & Digital Techniques, and the International Journal of Electronics. He was a recipient of the Youth Innovation Promotion Association, CAS, an ACM China Rising Star Honorable Mention (2016), and a best IP nomination at DATE 2015. He serves on the CCF Technical Committee on Computer Architecture and the CCF Task Force on Formal Methods. He is a Senior Member of IEEE, a Senior Member of CCF, and a Senior Member of ACM.
This work examines theoretical issues as well as practical developments in statistical inference related to econometric models and analysis. It offers discussions of such areas as the function of statistics in aggregation, income inequality, poverty, health, spatial econometrics, panel and survey data, bootstrapping, and time series.
This is an essential how-to guide on the application of structural equation modeling (SEM) techniques with the AMOS software, focusing on the practical applications of both simple and advanced topics. Written in an easy-to-understand conversational style, the book covers everything from data collection and screening to confirmatory factor analysis, structural model analysis, mediation, moderation, and more advanced topics such as mixture modeling, censored data, and non-recursive models. Through step-by-step instructions, screen shots, and suggested guidelines for reporting, Collier cuts through abstract definitional perspectives to give insight on how to actually run analysis. Unlike other SEM books, the examples used will often start in SPSS and then transition to AMOS so that the reader can have full confidence in running the analysis from beginning to end. Best practices are also included on topics like how to determine if your SEM model is formative or reflective, making it not just an explanation of SEM topics, but a guide for researchers on how to develop a strong methodology while studying their respective phenomena of interest. With a focus on practical applications of both basic and advanced topics, and with detailed worked examples throughout, this book is ideal for experienced researchers and beginners across the behavioral and social sciences.
For one-semester business statistics courses. A focus on using statistical methods to analyse and interpret results to make data-informed business decisions Statistics is essential for all business majors, and Business Statistics: A First Course helps students see the role statistics will play in their own careers by providing examples drawn from all functional areas of business. Guided by the principles set forth by major statistical and business science associations (ASA and DSI), plus the authors' diverse experiences, the 8th Edition, Global Edition, continues to innovate and improve the way this course is taught to all students. With new examples, case scenarios, and problems, the text continues its tradition of focusing on the interpretation of results, evaluation of assumptions, and discussion of next steps that lead to data-informed decision making. The authors feel that this approach, rather than a focus on manual calculations, better serves students in their future careers. This brief offering, created to fit the needs of a one-semester course, is part of the established Berenson/Levine series.
The Handbook of U.S. Labor Statistics is recognized as an authoritative resource on the U.S. labor force. It continues and enhances the Bureau of Labor Statistics's (BLS) discontinued publication, Labor Statistics. It allows the user to understand recent developments as well as to compare today's economy with past history. This edition includes new tables on occupational safety and health and income in the United States. The Handbook is a comprehensive reference providing an abundance of data on a variety of topics including:
* Employment and unemployment
* Earnings
* Prices
* Productivity
* Consumer expenditures
* Occupational safety and health
* Union membership
* Working poor
* And much more!
Features of the publication: In addition to over 215 tables that present practical data, the Handbook provides:
* Introductory material for each chapter that contains highlights of salient data and figures that call attention to noteworthy trends in the data
* Notes and definitions, which contain concise descriptions of the data sources, concepts, definitions, and methodology from which the data are derived
* References to more comprehensive reports which provide additional data and more extensive descriptions of estimation methods, sampling, and reliability measures
Suitable for statisticians, mathematicians, actuaries, and students interested in the problems of insurance and analysis of lifetimes, Statistical Methods with Applications to Demography and Life Insurance presents contemporary statistical techniques for analyzing life distributions and life insurance problems. It not only contains traditional material but also incorporates new problems and techniques not discussed in existing actuarial literature. The book mainly focuses on the analysis of an individual life and describes statistical methods based on empirical and related processes. Coverage ranges from analyzing the tails of distributions of lifetimes to modeling population dynamics with migrations. To help readers understand the technical points, the text covers topics such as the Stieltjes, Wiener, and Ito integrals. It also introduces other themes of interest in demography, including mixtures of distributions, analysis of longevity and extreme value theory, and the age structure of a population. In addition, the author discusses net premiums for various insurance policies. Mathematical statements are carefully and clearly formulated and proved while avoiding excessive technicalities as much as possible. The book illustrates how these statements help solve numerous statistical problems. It also includes more than 70 exercises.
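A small worked example in R of a net single premium of the kind the book discusses, under the simplifying (and purely illustrative) assumption of a constant force of mortality, i.e. an exponentially distributed lifetime.

```r
# Net single premium for a whole-life insurance paying 1 at the moment of death,
# assuming a constant force of mortality mu (exponential lifetime) and force of
# interest delta.  Closed form: A = mu / (mu + delta).  Values are illustrative.
mu    <- 0.02     # force of mortality
delta <- 0.04     # force of interest
A_closed <- mu / (mu + delta)

# the same premium by numerical integration of E[exp(-delta * T)]
A_numeric <- integrate(function(t) exp(-delta * t) * mu * exp(-mu * t),
                       lower = 0, upper = Inf)$value
c(closed_form = A_closed, numerical = A_numeric)    # both about 0.333
```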
"It's the economy, stupid," as Democratic strategist James Carville
would say. After many years of study, Ray C. Fair has found that
the state of the economy has a dominant influence on national
elections. Just in time for the 2012 presidential election, this
new edition of his classic text, "Predicting Presidential Elections
and Other Things," provides us with a look into the likely future
of our nation's political landscape--but Fair doesn't stop there.
Originally published in 1978, this book is designed to enable students on main courses in economics to comprehend literature which employs econometric techniques as a method of analysis, to use econometric techniques themselves to test hypotheses about economic relationships, and to understand some of the difficulties involved in interpreting results. While the book is mainly aimed at second-year undergraduates undertaking courses in applied economics, its scope is sufficiently wide to take in students at postgraduate level who have no background in econometrics - it integrates fully the mathematical and statistical techniques used in econometrics with micro- and macroeconomic case studies.
The beginning of the age of artificial intelligence and machine learning has created new challenges and opportunities for data analysts, statisticians, mathematicians, econometricians, computer scientists and many others. At the root of these techniques are algorithms and methods for clustering and classifying different types of large datasets, including time series data. Time Series Clustering and Classification includes relevant developments on observation-based, feature-based and model-based traditional and fuzzy clustering methods, feature-based and model-based classification methods, and machine learning methods. It presents a broad and self-contained overview of techniques for both researchers and students. Features:
* Provides an overview of the methods and applications of pattern recognition of time series
* Covers a wide range of techniques, including unsupervised and supervised approaches
* Includes a range of real examples from medicine, finance, environmental science, and more
* R and MATLAB code, and relevant data sets, are available on a supplementary website
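A minimal base-R sketch of the feature-based clustering idea (not code from the book or its website): summarise each simulated series by its first few autocorrelations and cluster those feature vectors.

```r
# Feature-based clustering of time series: represent each series by its first
# few autocorrelations, then cluster the feature vectors.  Simulated data only.
set.seed(5)
make_series <- function(phi) arima.sim(model = list(ar = phi), n = 200)
series <- c(replicate(5, make_series(0.8),  simplify = FALSE),   # persistent group
            replicate(5, make_series(-0.5), simplify = FALSE))   # alternating group

acf_features <- t(sapply(series, function(x) acf(x, lag.max = 5, plot = FALSE)$acf[-1]))
hc <- hclust(dist(acf_features), method = "ward.D2")
cutree(hc, k = 2)    # cluster labels; the two groups separate cleanly here
```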
Pathwise Estimation and Inference for Diffusion Market Models discusses contemporary techniques for inferring, from options and bond prices, the market participants' aggregate view on important financial parameters such as implied volatility, discount rate, and future interest rates, together with the uncertainty thereof. The focus is on pathwise inference methods that are applicable to a single path of the observed prices and do not require the observation of an ensemble of such paths. The book is pitched at the level of senior undergraduate students undertaking research at honors year, and postgraduate candidates undertaking Master's or PhD degrees by research. From a research perspective, it reaches out to academic researchers from backgrounds as diverse as mathematics and probability, econometrics and statistics, and computational mathematics and optimization, whose interests lie in the analysis and modelling of financial market data from a multi-disciplinary approach. Additionally, the book is aimed at financial market practitioners in capital-market-facing businesses who seek to keep abreast of and draw inspiration from novel approaches to market data analysis. The first two chapters contain introductory material on stochastic analysis and the classical diffusion stock market models. The remaining chapters discuss more specialized stock and bond market models and special methods of pathwise inference for market parameters in different models. The final chapter describes applications of numerical methods of inference of bond market parameters to the forecasting of the short rate. Nikolai Dokuchaev is an associate professor in Mathematics and Statistics at Curtin University. His research interests include mathematical and statistical finance, stochastic analysis, PDEs, control, and signal processing. Lin Yee Hin is a practitioner in the capital-market-facing industry. His research interests include econometrics, non-parametric regression, and scientific computing.
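For orientation, here is a standard single-price Black-Scholes implied-volatility inversion in base R; it is far simpler than the pathwise methods the book develops and is included only as a hedged illustration with made-up numbers.

```r
# Back out Black-Scholes implied volatility from one observed call price with
# uniroot().  A standard single-price inversion, far simpler than the pathwise
# methods the book develops; numbers are illustrative.
bs_call <- function(S, K, r, sigma, tau) {
  d1 <- (log(S / K) + (r + 0.5 * sigma^2) * tau) / (sigma * sqrt(tau))
  d2 <- d1 - sigma * sqrt(tau)
  S * pnorm(d1) - K * exp(-r * tau) * pnorm(d2)
}
implied_vol <- function(price, S, K, r, tau)
  uniroot(function(s) bs_call(S, K, r, s, tau) - price,
          interval = c(1e-4, 5))$root

implied_vol(price = 6.50, S = 100, K = 105, r = 0.03, tau = 0.5)
```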
The most authoritative and up-to-date core econometrics textbook available. Econometrics is the quantitative language of economic theory, analysis, and empirical work, and it has become a cornerstone of graduate economics programs. Econometrics provides graduate and PhD students with an essential introduction to this foundational subject in economics and serves as an invaluable reference for researchers and practitioners. This comprehensive textbook teaches fundamental concepts, emphasizes modern, real-world applications, and gives students an intuitive understanding of econometrics. The book:
* Covers the full breadth of econometric theory and methods with mathematical rigor while emphasizing intuitive explanations that are accessible to students of all backgrounds
* Draws on integrated, research-level datasets, provided on an accompanying website
* Discusses linear econometrics, time series, panel data, nonparametric methods, nonlinear econometric models, and modern machine learning
* Features hundreds of exercises that enable students to learn by doing
* Includes in-depth appendices on matrix algebra and useful inequalities and a wealth of real-world examples
* Can serve as a core textbook for a first-year PhD course in econometrics and as a follow-up to Bruce E. Hansen's Probability and Statistics for Economists
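As a minimal illustration of the workhorse linear regression with which such a sequence begins, the following R snippet fits ordinary least squares to simulated wage data (the book's own datasets, available on its website, are not used here).

```r
# Ordinary least squares on simulated data: the workhorse linear model that
# opens most econometrics sequences.  Data are simulated, not the book's.
set.seed(2024)
n <- 500
educ  <- rnorm(n, mean = 13, sd = 2)
exper <- runif(n, 0, 30)
logwage <- 0.9 + 0.08 * educ + 0.02 * exper + rnorm(n, sd = 0.4)

fit <- lm(logwage ~ educ + exper)
summary(fit)          # coefficient estimates, standard errors, R-squared
confint(fit)          # 95% confidence intervals
```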
Master key spreadsheet and business analytics skills with SPREADSHEET MODELING AND DECISION ANALYSIS: A PRACTICAL INTRODUCTION TO BUSINESS ANALYTICS, 9E, written by respected business analytics innovator Cliff Ragsdale. This edition's clear presentation, realistic examples, fascinating topics and valuable software provide everything you need to become proficient in today's most widely used business analytics techniques using the latest version of Excel (R) in Microsoft (R) Office 365 or Office 2019. Become skilled in the newest Excel functions as well as Analytic Solver (R) and Data Mining add-ins. This edition helps you develop both algebraic and spreadsheet modeling skills. Step-by-step instructions and annotated, full-color screen images make examples easy to follow and show you how to apply what you learn about descriptive, predictive and prescriptive analytics to real business situations. WebAssign online tools and author-created videos further strengthen understanding.
Companion Website materials: https://tzkeith.com/
Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. This book:
* Covers both MR and SEM, while explaining their relevance to one another
* Includes path analysis, confirmatory factor analysis, and latent growth modeling
* Makes extensive use of real-world research examples in the chapters and in the end-of-chapter exercises
* Makes extensive use of figures and tables providing examples and illustrating key concepts and techniques
New to this edition:
* New chapter on mediation, moderation, and common cause
* New chapter on the analysis of interactions with latent variables and multilevel SEM
* Expanded coverage of advanced SEM techniques in chapters 18 through 22
* International case studies and examples
* Updated instructor and student online resources
World Statistics on Mining and Utilities 2018 provides a unique biennial overview of the role of mining and utility activities in the world economy. This extensive resource from UNIDO provides detailed time series data on the level, structure and growth of international mining and utility activities by country and sector. Country level data is clearly presented on the number of establishments, employment and output of activities such as: coal, iron ore and crude petroleum mining as well as production and supply of electricity, natural gas and water. This unique and comprehensive source of information meets the growing demand of data users who require detailed and reliable statistical information on the primary industry and energy producing sectors. The publication provides internationally comparable data to economic researchers, development strategists and business communities who influence the policy of industrial development and its environmental sustainability.
Advanced Statistics for Kinesiology and Exercise Science is the first textbook to cover advanced statistical methods in the context of the study of human performance. Divided into three distinct sections, the book introduces and explores in depth both analysis of variance (ANOVA) and regression analyses, including chapters on:
* preparing data for analysis
* one-way, factorial, and repeated-measures ANOVA
* analysis of covariance and multiple analyses of variance and covariance
* diagnostic tests
* regression models for quantitative and qualitative data
* model selection and validation
* logistic regression
Drawing clear lines between the use of IBM SPSS Statistics software and interpreting and analyzing results, and illustrated with sport and exercise science-specific sample data and results sections throughout, the book offers an unparalleled level of detail in explaining advanced statistical techniques to kinesiology students. Advanced Statistics for Kinesiology and Exercise Science is an essential text for any student studying advanced statistics or research methods as part of an undergraduate or postgraduate degree programme in kinesiology, sport and exercise science, or health science.
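The book itself runs its analyses in IBM SPSS Statistics; purely as a rough analogue, here is a one-way ANOVA on simulated jump-height data in R, with hypothetical group names and values.

```r
# One-way ANOVA on simulated data.  The book works in IBM SPSS Statistics;
# this is only a rough R analogue of the same analysis.
set.seed(3)
group <- factor(rep(c("control", "plyometric", "resistance"), each = 20))
jump  <- c(rnorm(20, 40, 5), rnorm(20, 44, 5), rnorm(20, 46, 5))   # jump height, cm

fit <- aov(jump ~ group)
summary(fit)               # F test for a group effect
TukeyHSD(fit)              # pairwise comparisons
```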
Modern marketing managers need intuitive and effective tools not just for designing strategies but also for general management. This hands-on book introduces a range of contemporary management and marketing tools and concepts with a focus on forecasting, creating stimulating processes, and implementation. Topics addressed range from creating a clear vision, setting goals, and developing strategies, to implementing strategic analysis tools, consumer value models, budgeting, strategic and operational marketing plans. Special attention is paid to change management and digital transformation in the marketing landscape. Given its approach and content, the book offers a valuable asset for all professionals and advanced MBA students looking for 'real-life' tools and applications.
There is no shortage of incentives to study and reduce poverty in our societies. Poverty is studied in economics and political science, and population surveys are an important source of information about it. The design and analysis of such surveys is principally a statistical subject matter, and the computer is essential for their data compilation and processing. Focusing on the European Union Statistics on Income and Living Conditions (EU-SILC), a program of annual national surveys which collect data related to poverty and social exclusion, Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R presents a set of statistical analyses pertinent to the general goals of EU-SILC. The contents of the volume are biased toward computing and statistics, with reduced attention to economics, political science, and other social sciences. The emphasis is on methods and procedures rather than results, because data from annual surveys released after publication will erode the novelty of the data used and the results derived in this volume. The aim of the volume is not to propose specific methods of analysis, but to open up the analytical agenda and address the aspects of the key definitions in the subject of poverty assessment that entail nontrivial elements of arbitrariness. The methods presented do not exhaust the range of analyses suitable for EU-SILC, but will stimulate the search for new methods and the adaptation of established methods that cater to the identified purposes.
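As a hand-rolled sketch (not taken from the volume) of two quantities central to EU-SILC-style analyses, the following R code computes the at-risk-of-poverty rate, using the conventional threshold of 60% of median income, and the Gini coefficient on simulated, unweighted incomes.

```r
# Two headline EU-SILC-style indicators on simulated incomes:
#   * at-risk-of-poverty rate: share of people below 60% of the median income
#   * Gini coefficient of income inequality
# Incomes are simulated; real analyses would also use the survey weights.
set.seed(100)
income <- rlnorm(5000, meanlog = 9.6, sdlog = 0.6)   # illustrative distribution

poverty_line <- 0.6 * median(income)
arop_rate <- mean(income < poverty_line)             # at-risk-of-poverty rate

gini <- function(x) {
  x <- sort(x); n <- length(x)
  2 * sum(seq_len(n) * x) / (n * sum(x)) - (n + 1) / n
}
c(poverty_rate = arop_rate, gini = gini(income))
```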
Prepares readers to analyze data and interpret statistical results using the increasingly popular R more quickly than other texts, through lessR extensions which remove the need to program. By introducing R through lessR, readers learn how to organize data for analysis, read the data into R, and produce output without performing numerous functions and programming first. Readers can select the necessary procedure and change the relevant variables without programming. Quick Starts introduce readers to the concepts and commands reviewed in the chapters. Margin notes define, illustrate, and cross-reference the key concepts; when readers encounter a term previously discussed, the margin notes identify the page number of the initial introduction. Scenarios highlight the use of a specific analysis, followed by the corresponding R/lessR input and an interpretation of the resulting output. Numerous examples of output from psychology, business, education, and other social sciences demonstrate how to interpret results, and worked problems help readers test their understanding. The www.lessRstats.com website features the lessR program; the book's two data sets, in standard text and SPSS formats, so readers can practice using R/lessR by working through the text examples and worked problems; PDF slides for each chapter; solutions to the book's worked problems; links to R/lessR videos to help readers better understand the program; and more. New to this edition:
* upgraded functionality and data visualizations of the lessR package, which is now aesthetically equal to the ggplot2 R standard
* new features to replace and extend previous content, such as aggregating data with pivot tables with a simple lessR function call
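A brief sketch of the lessR style the book teaches, with one function call per analysis step; the file name and variable names are hypothetical, and exact options may differ across lessR versions, so consult the package documentation.

```r
# The lessR workflow: one function call per analysis step.
# File and variable names below are hypothetical.
library(lessR)

d <- Read("employee.csv")      # read the data and report its structure
Histogram(Salary)              # histogram of one variable in d
Regression(Salary ~ Years)     # regression with annotated output and plots
```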