Doing Statistical Analysis looks at three kinds of statistical research questions - descriptive, associational, and inferential - and shows students how to conduct statistical analyses and interpret the results. Keeping equations to a minimum, it uses a conversational style and relatable examples such as football, COVID-19, and tourism to aid understanding. Each chapter contains practice exercises, and a section showing students how to reproduce the statistical results in the book using Stata and SPSS. Digital supplements consist of data sets in Stata, SPSS, and Excel, and a test bank for instructors. Its accessible approach makes this the ideal textbook for undergraduate students across the social and behavioral sciences needing to build their confidence with statistical analysis.
Tackling the cybersecurity challenge is a matter of survival for society at large. Cyber attacks are rapidly increasing in sophistication and magnitude - and in their destructive potential. New threats emerge regularly, the last few years having seen a ransomware boom and distributed denial-of-service attacks leveraging the Internet of Things. For organisations, cybersecurity risk management is essential in order to manage these threats. Yet current frameworks have drawbacks which can lead to the suboptimal allocation of cybersecurity resources. Cyber insurance has been touted as part of the solution - based on the idea that insurers can incentivize companies to improve their cybersecurity by offering premium discounts - but cyber insurance levels remain limited. This is because companies have difficulty determining which cyber insurance products to purchase, and insurance companies struggle to accurately assess cyber risk and thus develop cyber insurance products. To deal with these challenges, this volume presents new models for cybersecurity risk management, partly based on the use of cyber insurance. It contains a set of mathematical models for cybersecurity risk management, including (i) a model to assist companies in determining their optimal budget allocation between security products and cyber insurance and (ii) a model to assist insurers in designing cyber insurance products. The models use adversarial risk analysis to account for the behavior of threat actors (as well as the behavior of companies and insurers). To inform these models, the authors draw on psychological and behavioural economics studies of decision-making by individuals regarding cybersecurity and cyber insurance, as well as on studies of organizational decision-making involving cybersecurity and cyber insurance.
Its theoretical and methodological findings will appeal to researchers across a wide range of cybersecurity-related disciplines including risk and decision analysis, analytics, technology management, actuarial sciences, behavioural sciences, and economics. The practical findings will help cybersecurity professionals and insurers enhance cybersecurity and cyber insurance, thus benefiting society as a whole. This book grew out of a two-year European Union-funded project under Horizon 2020, called CYBECO (Supporting Cyber Insurance from a Behavioral Choice Perspective).
Applied data-centric social sciences aim to develop both methodology and practical applications of various fields of sciences and businesses with rich data. Specifically, in the social sciences, a vast amount of data on human activities may be useful for understanding collective human nature. In this book, the author introduces several mathematical techniques for handling a huge volume of data and analyzing collective human behavior. The book is constructed from data-oriented investigation, with mathematical methods and expressions used for dealing with data for several specific problems. The fundamental philosophy underlying the book is that both mathematical and physical concepts are determined by the purposes of data analysis. This philosophy is shown throughout exemplar studies of several fields in socio-economic systems. From a data-centric point of view, the author proposes a concept that may change people's minds and cause them to start thinking from the basis of data. Several goals underlie the chapters of the book. The first is to describe mathematical and statistical methods for data analysis, and toward that end the author delineates methods with actual data in each chapter. The second is to find a cyber-physical link between data and data-generating mechanisms, as data are always provided by some kind of data-generating process in the real world. The third goal is to provide an impetus for the concepts and methodology set forth in this book to be applied to socio-economic systems.
This volume collects seven of Marc Nerlove's previously published, classic essays on panel data econometrics written over the past thirty-five years, together with a cogent essay on the history of the subject, which began with George Biddell Airy's monograph published in 1861. Since Professor Nerlove's 1966 Econometrica paper with Pietro Balestra, panel data and methods of econometric analysis appropriate to such data have become increasingly important in the discipline. The principal factors in the research environment affecting the future course of panel data econometrics are the phenomenal growth in the computational power available to the individual researcher at his or her desktop and the ready availability of data sets, both large and small, via the Internet. The formulation of statistical models for inference is best motivated and shaped by substantive problems and by an understanding of the processes generating the data at hand. The essays illustrate both the role of the substantive context in shaping appropriate methods of inference and the increasing importance of computer-intensive methods.
Ranking of Multivariate Populations: A Permutation Approach with Applications presents a novel permutation-based nonparametric approach for ranking several multivariate populations. Using data collected from both experimental and observational studies, it covers some of the most useful designs widely applied in research and industry investigations, such as multivariate analysis of variance (MANOVA) and multivariate randomized complete block (MRCB) designs. The first section of the book introduces the topic of ranking multivariate populations by presenting the main theoretical ideas and an in-depth literature review. The second section discusses a large number of real case studies from four specific research areas: new product development in industry, perceived quality of the indoor environment, customer satisfaction, and cytological and histological analysis by image processing. A web-based nonparametric combination global ranking software is also described. Designed for practitioners and postgraduate students in statistics and the applied sciences, this application-oriented book offers a practical guide to the reliable global ranking of multivariate items, such as products, processes, and services, in terms of the performance of all investigated products/prototypes.
This study examines the determinants of the current account, export market share, and exchange rates. The author identifies key determinants using Bayesian Model Averaging, which allows evaluation of the probability that each variable is in fact a determinant of the analysed competitiveness measure. The main implication of the results presented in the study is that increasing international competitiveness is a gradual process that requires institutional and technological changes rather than short-term adjustments in relative prices.
This book has two components: stochastic dynamics and stochastic random combinatorial analysis. The first discusses evolving patterns of interactions of a large but finite number of agents of several types. Changes of agent types or their choices or decisions over time are formulated as jump Markov processes with suitably specified transition rates: optimisations by agents make these rates generally endogenous. Probabilistic equilibrium selection rules are also discussed, together with the distributions of relative sizes of the basins of attraction. As the number of agents approaches infinity, we recover deterministic macroeconomic relations of more conventional economic models. The second component analyses how agents form clusters of various sizes. This has applications for discussing sizes or shares of markets by various agents, which involve some combinatorial analysis patterned after the population genetics literature. These are shown to be relevant to distributions of returns to assets, volatility of returns, and power laws.
Today econometrics is widely applied in the empirical study of economics. As an empirical science, econometrics uses rigorous mathematical and statistical methods for economic problems. Understanding the methodologies of both econometrics and statistics is a crucial point of departure for econometrics. The primary focus of this book is to provide an understanding of the statistical properties behind econometric methods. Following the introduction in Chapter 1, Chapter 2 provides a methodological review of both econometrics and statistics in different periods since the 1930s. Chapters 3 and 4 explain the underlying theoretical methodologies for estimated equations in the simple regression and multiple regression models and discuss the debates about p-values in particular. This part of the book offers the reader a richer understanding of the methods of statistics behind the methodology of econometrics. Chapters 5-9 of the book focus on the discussion of regression models using time series data, traditional causal econometric models, and the latest statistical techniques. By concentrating on dynamic structural linear models like state-space models and the Bayesian approach, the book alludes to the fact that this methodological study is not only a science but also an art. This work serves as a handy reference book for anyone interested in econometrics, particularly students and academic and business researchers in all quantitative analysis fields.
It is well known that modern stochastic calculus has been exhaustively developed under the usual conditions. Despite such a well-developed theory, there is evidence to suggest that these very convenient technical conditions cannot necessarily be fulfilled in real-world applications. Optional Processes: Theory and Applications seeks to delve into the existing theory, new developments and applications of optional processes on "unusual" probability spaces. The development of stochastic calculus of optional processes marks the beginning of a new and more general form of stochastic analysis. This book aims to provide an accessible, comprehensive and up-to-date exposition of optional processes and their numerous properties. Furthermore, the book presents not only the current theory of optional processes, but also a spectrum of applications to stochastic differential equations, filtering theory and mathematical finance.
Features:
- Suitable for graduate students and researchers in mathematical finance, actuarial science, applied mathematics and related areas
- Compiles almost all essential results on the calculus of optional processes in unusual probability spaces
- Contains many advanced analytical results for stochastic differential equations and statistics pertaining to the calculus of optional processes
- Develops new methods in finance based on optional processes, such as a new portfolio theory and a defaultable claim pricing mechanism
This book addresses one of the most important research activities in empirical macroeconomics. It provides a course of advanced but intuitive methods and tools enabling the spatial and temporal disaggregation of basic macroeconomic variables and the assessment of the statistical uncertainty of the outcomes of disaggregation. The empirical analysis focuses mainly on GDP and its growth in the context of Poland. However, all of the methods discussed can be easily applied to other countries. The approach used in the book views spatial and temporal disaggregation as a special case of the estimation of missing observations (a topic in missing data analysis). The book presents an econometric treatment of Seemingly Unrelated Regression Equations (SURE) models. The main advantage of the SURE specification in tackling the presented research problem is that it allows for heterogeneity of the parameters describing relations between macroeconomic indicators. The book contains model specifications, as well as descriptions of stochastic assumptions and the resulting procedures of estimation and testing. The method also addresses uncertainty in the estimates produced. All of the necessary tests and assumptions are presented in detail. The results are designed to serve as a source of invaluable information making regional analyses more convenient and - more importantly - comparable. It will create a solid basis for making conclusions and recommendations concerning regional economic policy in Poland, particularly regarding the assessment of the economic situation. This is essential reading for academics, researchers, and economists with regional analysis as their field of expertise, as well as central bankers and policymakers.
How we pay is so fundamental that it underpins everything – from trade to taxation, stocks and savings to salaries, pensions and pocket money. Rich or poor, criminal, communist or capitalist, we all rely on the same payments system, day in, day out. It sits between us and not just economic meltdown, but a total breakdown in law and order. Why then do we know so little about how that system really works? Leibbrandt and de Terán shine a light on the hidden workings of the humble payment – and reveal both how our payment habits are determined by history as well as where we might go next. From national customs to warring nation states, geopolitics will shape the future of payments every bit as much as technology. Challenging our understanding about where financial power really lies, The Pay Off shows us that the most important thing about money is the way we move it.
Explains modern SDC (statistical disclosure control) techniques for data stewards and develops tools to implement them. Explains the logic behind modern privacy protections for researchers and how they may use publicly released data to generate valid statistical inferences, as well as the limitations imposed by SDC techniques.
With the rapidly advancing fields of Data Analytics and Computational Statistics, it's important to keep up with current trends, methodologies, and applications. This book investigates the role of data mining in computational statistics for machine learning. It offers applications that can be used in various domains and examines the role of transformation functions in optimizing problem statements. Data Analytics, Computational Statistics, and Operations Research for Engineers: Methodologies and Applications presents applications of computationally intensive methods, inference techniques, and survival analysis models. It discusses how data mining extracts information and how machine learning improves the computational model based on the new information. Those interested in this reference work will include students, professionals, and researchers working in the areas of data mining, computational statistics, operations research, and machine learning.
This volume, edited by Jeffrey Racine, Liangjun Su, and Aman Ullah, contains the latest research on nonparametric and semiparametric econometrics and statistics. These data-driven models seek to replace the "classical" parametric models of the past, which were rigid and often linear. Chapters by leading international econometricians and statisticians highlight the interface between econometrics and statistical methods for nonparametric and semiparametric procedures. They provide a balanced view of new developments in the analysis and modeling of applied sciences with cross-section, time series, panel, and spatial data sets. The major topics of the volume include: the methodology of semiparametric models and special regressor methods; inverse, ill-posed, and well-posed problems; different methodologies related to additive models; sieve regression estimators, nonparametric and semiparametric regression models, and the true error of competing approximate models; support vector machines and their modeling of default probability; series estimation of stochastic processes and some of their applications in econometrics; identification, estimation, and specification problems in a class of semilinear time series models; nonparametric and semiparametric techniques applied to nonstationary or near nonstationary variables; the estimation of a set of regression equations; and a new approach to the analysis of nonparametric models with exogenous treatment assignment.
A comprehensive account of economic size distributions around the world and throughout the years. In the course of the past 100 years, economists and applied statisticians have developed a remarkably diverse variety of income distribution models, yet no single resource convincingly accounts for all of these models, analyzing their strengths and weaknesses, similarities and differences. Statistical Size Distributions in Economics and Actuarial Sciences is the first collection to systematically investigate a wide variety of parametric models that deal with income, wealth, and related notions. Christian Kleiber and Samuel Kotz survey, complement, compare, and unify all of the disparate models of income distribution, highlighting at times a lack of coordination between them that can result in unnecessary duplication. Considering models from eight languages and all continents, the authors discuss the social and economic implications of each as well as distributions of size of loss in actuarial applications.
Three appendices provide brief biographies of some of the leading players along with the basic properties of each of the distributions. Actuaries, economists, market researchers, social scientists, and physicists interested in econophysics will find Statistical Size Distributions in Economics and Actuarial Sciences to be a truly one-of-a-kind addition to the professional literature.
Introduction to Functional Data Analysis provides a concise textbook introduction to the field. It explains how to analyze functional data, at both exploratory and inferential levels. It also provides a systematic and accessible exposition of the methodology and the required mathematical framework. The book can be used as a textbook for a semester-long course on FDA for advanced undergraduate or MS statistics majors, as well as for MS and PhD students in other disciplines, including applied mathematics, environmental science, public health, medical research, geophysical sciences and economics. It can also be used for self-study and as a reference for researchers in those fields who wish to acquire a solid understanding of FDA methodology and practical guidance for its implementation. Each chapter contains plentiful examples of relevant R code and theoretical and data analytic problems. The material of the book can be roughly divided into four parts of approximately equal length: 1) basic concepts and techniques of FDA, 2) functional regression models, 3) sparse and dependent functional data, and 4) introduction to the Hilbert space framework of FDA. The book assumes advanced undergraduate background in calculus, linear algebra, distributional probability theory, foundations of statistical inference, and some familiarity with R programming. Other required statistics background is provided in scalar settings before the related functional concepts are developed. Most chapters end with references to more advanced research for those who wish to gain a more in-depth understanding of a specific topic.
This book is an ideal introduction for beginning students of econometrics that assumes only basic familiarity with matrix algebra and calculus. It features practical questions which can be answered using econometric methods and models. Focusing on a limited number of the most basic and widely used methods, the book reviews the basics of econometrics before concluding with a number of recent empirical case studies. The volume is an intuitive illustration of what econometricians do when faced with practical questions.
This book explores Latin American inequality broadly, in terms of its impact on the region's development, and specifically through two country studies from Peru on earnings inequality and on child labor as a consequence of inequality. The first chapter provides substantial recent updated analysis of the critical thesis of deindustrialization for Latin America. The second chapter provides an approach to measuring labor market discrimination that departs from the current treatment of unobservable influences in the literature. The third chapter examines the much-neglected topic of child labor using a panel data set specifically on children. The book is appropriate for courses on economic development and labor economics and for anyone interested in inequality, development and applied econometrics.
Contains information for using R software with the examples in the textbook Sampling: Design and Analysis, 3rd edition by Sharon L. Lohr.
"A book perfect for this moment" -Katherine M. O'Regan, Former Assistant Secretary, US Department of Housing and Urban Development More than fifty years after the passage of the Fair Housing Act, American cities remain divided along the very same lines that this landmark legislation explicitly outlawed. Keeping Races in Their Places tells the story of these lines-who drew them, why they drew them, where they drew them, and how they continue to circumscribe residents' opportunities to this very day. Weaving together sophisticated statistical analyses of more than a century's worth of data with an engaging, accessible narrative that brings the numbers to life, Keeping Races in Their Places exposes the entrenched effects of redlining on American communities. This one-of-a-kind contribution to the real estate and urban economics literature applies the author's original geographic information systems analyses to historical maps to reveal redlining's causal role in shaping today's cities. Spanning the era from the Great Migration to the Great Recession, Keeping Races in Their Places uncovers the roots of the Black-white wealth gap, the subprime lending crisis, and today's lack of affordable housing in maps created by banks nearly a century ago. Most of all, it offers hope that with the latest scholarly tools we can pinpoint how things went wrong-and what we must do to make them right.
"A book perfect for this moment" -Katherine M. O'Regan, Former Assistant Secretary, US Department of Housing and Urban Development More than fifty years after the passage of the Fair Housing Act, American cities remain divided along the very same lines that this landmark legislation explicitly outlawed. Keeping Races in Their Places tells the story of these lines-who drew them, why they drew them, where they drew them, and how they continue to circumscribe residents' opportunities to this very day. Weaving together sophisticated statistical analyses of more than a century's worth of data with an engaging, accessible narrative that brings the numbers to life, Keeping Races in Their Places exposes the entrenched effects of redlining on American communities. This one-of-a-kind contribution to the real estate and urban economics literature applies the author's original geographic information systems analyses to historical maps to reveal redlining's causal role in shaping today's cities. Spanning the era from the Great Migration to the Great Recession, Keeping Races in Their Places uncovers the roots of the Black-white wealth gap, the subprime lending crisis, and today's lack of affordable housing in maps created by banks nearly a century ago. Most of all, it offers hope that with the latest scholarly tools we can pinpoint how things went wrong-and what we must do to make them right.
Technical Analysis of Stock Trends helps investors make smart, profitable trading decisions by providing proven long- and short-term stock trend analysis. It gets right to the heart of effective technical trading concepts, explaining technical theory such as The Dow Theory, reversal patterns, consolidation formations, trends and channels, technical analysis of commodity charts, and advances in investment technology. It also includes a comprehensive guide to trading tactics from long and short goals, stock selection, charting, low and high risk, trend recognition tools, balancing and diversifying the stock portfolio, application of capital, and risk management. This updated new edition includes patterns and modifiable charts that are tighter and more illustrative. Expanded material is also included on Pragmatic Portfolio Theory as a more elegant alternative to Modern Portfolio Theory; and a newer, simpler, and more powerful alternative to Dow Theory is presented. This book is the perfect introduction, giving you the knowledge and wisdom to craft long-term success.
This textbook provides complete coverage of discrete-time financial models that form the cornerstones of financial derivative pricing theory. The book has been tested and refined through years of classroom teaching experience. With an abundance of examples, problems, and fully worked out solutions, the text introduces the financial theory and relevant mathematical methods in a mathematically rigorous yet engaging way. Unlike similar texts in the field, this one presents multiple problem-solving approaches, linking related comprehensive techniques for pricing different types of financial derivatives. Key features: In-depth coverage of discrete-time theory and methodology. Numerous, fully worked out examples and exercises in every chapter. Mathematically rigorous and consistent, yet bridging various basic and more advanced concepts. Judicious balance of financial theory and mathematical and computational methods. Guide to material. This revision contains: Almost 200 pages worth of new material in all chapters. A new chapter on elementary probability theory. An expanded set of solved problems and additional exercises. Answers to all exercises. This book is a comprehensive, self-contained, and unified treatment of the main theory and application of mathematical methods behind modern-day financial mathematics.
Table of Contents:
List of Figures and Tables
Preface
I Introduction to Pricing and Management of Financial Securities
1 Mathematics of Compounding
2 Primer on Pricing Risky Securities
3 Portfolio Management
4 Primer on Derivative Securities
II Discrete-Time Modelling
5 Single-Period Arrow-Debreu Models
6 Introduction to Discrete-Time Stochastic Calculus
7 Replication and Pricing in the Binomial Tree Model
8 General Multi-Asset Multi-Period Model
Appendices
A Elementary Probability Theory
B Glossary of Symbols and Abbreviations
C Answers and Hints to Exercises
References
Index
Biographies:
Giuseppe Campolieti is Professor of Mathematics at Wilfrid Laurier University in Waterloo, Canada. He has been a Natural Sciences and Engineering Research Council postdoctoral research fellow and a university research fellow at the University of Toronto. In 1998, he joined the Masters in Mathematical Finance program as an instructor, later serving as an adjunct professor in financial mathematics until 2002. Dr. Campolieti also founded a financial software and consulting company in 1998. He joined Laurier in 2002 as Associate Professor of Mathematics and as SHARCNET Chair in Financial Mathematics.
Roman N. Makarov is Associate Professor and Chair of Mathematics at Wilfrid Laurier University. Prior to joining Laurier in 2003, he was an Assistant Professor of Mathematics at the Siberian State University of Telecommunications and Informatics and a senior research fellow at the Laboratory of Monte Carlo Methods at the Institute of Computational Mathematics and Mathematical Geophysics in Novosibirsk, Russia.
You may like...
Project Management For Engineering… - John M. Nicholas, Herman Steyn (Paperback, R581)
Financial Mathematics - A Computational… - K. Pereira, N. Modhien, … (Paperback, R326)