Welcome to Loot.co.za!
Military organizations around the world are normally huge producers and consumers of data. Accordingly, they stand to gain from the many benefits associated with data analytics. However, for leaders in defense organizations, whether in government or industry, accessible use cases are not always available. This book presents a diverse collection of cases that explore the realm of possibilities in military data analytics. These use cases explore such topics as:

- Context for maritime situation awareness
- Data analytics for electric power and energy applications
- Environmental data analytics in military operations
- Data analytics and training effectiveness evaluation
- Harnessing single board computers for military data analytics
- Analytics for military training in virtual reality environments

A chapter on using single board computers explores their application in a variety of domains, including wireless sensor networks, unmanned vehicles, and cluster computing. The investigation into a process for extracting and codifying expert knowledge provides a practical and useful model for soldiers that can support diagnostics, decision making, analysis of alternatives, and myriad other analytical processes. Data analytics also has a role in military learning: one chapter describes ongoing work with the United States Army Research Laboratory to apply data analytics techniques to the design of courses, the evaluation of individual and group performance, and the tailoring of the learning experience to achieve optimal learning outcomes in a minimum amount of time. Another chapter discusses how virtual reality and analytics are transforming the training of military personnel, as well as monitoring, decision making, readiness, and operations. Military Applications of Data Analytics brings together a collection of technical and application-oriented use cases.
It enables decision makers and technologists to make connections between data analytics and such fields as virtual reality and cognitive science that are driving military organizations around the world forward.
Features:

- Accessible to readers with a basic background in probability and statistics
- Covers fundamental concepts of experimental design and cause-effect relationships
- Introduces classical ANOVA models, including contrasts and multiple testing
- Provides an example-based introduction to mixed models
- Features basic concepts of split-plot and incomplete block designs
- R code available for all steps
- Supplementary website with additional resources and updates
Operations Research methods are used in nearly every field of modern life, including industry, the economy, and medicine. In this volume the authors have compiled some of the latest advancements in these methods, forming what is arguably the best collection of these new approaches and a direct shortcut to what you may be searching for. This book provides useful applications of the new developments in OR, written by leading scientists from international universities. Another volume on exciting applications of Operations Research is planned for the near future. We hope you enjoy and benefit from this series!
The role of franchising in industry evolution is explored in this book, both in terms of the emergence of franchising and its impact on industry structure. Examining the literature and statistical information, the first section provides an overview of franchising. The Role of Franchising on Industry Evolution then focuses on two core elements: the emergence of franchising and the contextual drivers prompting its adoption, and the impact of franchising on industry-level structural changes. Through two industry case studies, the author demonstrates how franchising has the ability to fundamentally transform an industry's structure from one of fragmentation to one of consolidation.
The design of trading algorithms requires sophisticated mathematical models backed up by reliable data. In this textbook, the authors develop models for algorithmic trading in contexts such as executing large orders, market making, targeting VWAP and other schedules, trading pairs or collections of assets, and executing in dark pools. These models are grounded in how the exchanges work, whether the algorithm is trading with better-informed traders (adverse selection), and the type of information available to market participants at both ultra-high and low frequency. Algorithmic and High-Frequency Trading is the first book that combines sophisticated mathematical modelling, empirical facts and financial economics, taking the reader from basic ideas to cutting-edge research and practice. If you need to understand how modern electronic markets operate, what information provides a trading edge, and how other market participants may affect the profitability of the algorithms, then this is the book for you.
In the modern world, data is a vital asset for any organization, regardless of industry or size. The world is built upon data. However, data without knowledge is useless. The aim of this book, briefly, is to introduce new approaches that can be used to shape and forecast the future by combining the two disciplines of Statistics and Economics. Readers of Modeling and Advanced Techniques in Modern Economics can find valuable information from a diverse group of experts on topics such as finance, econometric models, stochastic financial models and machine learning, and the application of models to financial and macroeconomic data.
Tackling the cybersecurity challenge is a matter of survival for society at large. Cyber attacks are rapidly increasing in sophistication and magnitude, and in their destructive potential. New threats emerge regularly, the last few years having seen a ransomware boom and distributed denial-of-service attacks leveraging the Internet of Things. For organisations, the use of cybersecurity risk management is essential in order to manage these threats. Yet current frameworks have drawbacks which can lead to the suboptimal allocation of cybersecurity resources. Cyber insurance has been touted as part of the solution, based on the idea that insurers can incentivize companies to improve their cybersecurity by offering premium discounts, but cyber insurance levels remain limited. This is because companies have difficulty determining which cyber insurance products to purchase, and insurance companies struggle to accurately assess cyber risk and thus develop cyber insurance products. To deal with these challenges, this volume presents new models for cybersecurity risk management, partly based on the use of cyber insurance. It contains a set of mathematical models for cybersecurity risk management, including (i) a model to assist companies in determining their optimal budget allocation between security products and cyber insurance and (ii) a model to assist insurers in designing cyber insurance products. The models use adversarial risk analysis to account for the behavior of threat actors (as well as the behavior of companies and insurers). To inform these models, the authors draw on psychological and behavioural economics studies of decision-making by individuals regarding cybersecurity and cyber insurance, as well as on organizational decision-making studies involving cybersecurity and cyber insurance.
Its theoretical and methodological findings will appeal to researchers across a wide range of cybersecurity-related disciplines, including risk and decision analysis, analytics, technology management, actuarial sciences, behavioural sciences, and economics. The practical findings will help cybersecurity professionals and insurers enhance cybersecurity and cyber insurance, thus benefiting society as a whole. This book grew out of a two-year European Union-funded project under Horizon 2020, called CYBECO (Supporting Cyber Insurance from a Behavioral Choice Perspective).
Applied data-centric social sciences aim to develop both methodology and practical applications of various fields of sciences and businesses with rich data. Specifically, in the social sciences, a vast amount of data on human activities may be useful for understanding collective human nature. In this book, the author introduces several mathematical techniques for handling a huge volume of data and analyzing collective human behavior. The book is constructed from data-oriented investigation, with mathematical methods and expressions used for dealing with data for several specific problems. The fundamental philosophy underlying the book is that both mathematical and physical concepts are determined by the purposes of data analysis. This philosophy is demonstrated throughout exemplar studies of several fields in socio-economic systems. From a data-centric point of view, the author proposes a concept that may change people's minds and cause them to start thinking from the basis of data. Several goals underlie the chapters of the book. The first is to describe mathematical and statistical methods for data analysis, and toward that end the author delineates methods with actual data in each chapter. The second is to find a cyber-physical link between data and data-generating mechanisms, as data are always provided by some kind of data-generating process in the real world. The third goal is to provide an impetus for the concepts and methodology set forth in this book to be applied to socio-economic systems.
Doing Statistical Analysis looks at three kinds of statistical research questions - descriptive, associational, and inferential - and shows students how to conduct statistical analyses and interpret the results. Keeping equations to a minimum, it uses a conversational style and relatable examples such as football, COVID-19, and tourism, to aid understanding. Each chapter contains practice exercises, and a section showing students how to reproduce the statistical results in the book using Stata and SPSS. Digital supplements consist of data sets in Stata, SPSS, and Excel, and a test bank for instructors. Its accessible approach means this is the ideal textbook for undergraduate students across the social and behavioral sciences needing to build their confidence with statistical analysis.
This volume collects seven of Marc Nerlove's previously published, classic essays on panel data econometrics written over the past thirty-five years, together with a cogent essay on the history of the subject, which began with George Biddell Airy's monograph published in 1861. Since Professor Nerlove's 1966 Econometrica paper with Pietro Balestra, panel data and methods of econometric analysis appropriate to such data have become increasingly important in the discipline. The principal factors in the research environment affecting the future course of panel data econometrics are the phenomenal growth in the computational power available to the individual researcher at his or her desktop and the ready availability of data sets, both large and small, via the Internet. The best way to formulate statistical models for inference is motivated and shaped by substantive problems and an understanding of the processes generating the data at hand to resolve them. The essays illustrate both the role of the substantive context in shaping appropriate methods of inference and the increasing importance of computer-intensive methods.
This study examines the determinants of the current account, export market share and exchange rates. The author identifies key determinants using Bayesian Model Averaging, which allows evaluation of the probability that each variable is in fact a determinant of the analysed competitiveness measure. The main implication of the results presented in the study is that increasing international competitiveness is a gradual process that requires institutional and technological changes rather than short-term adjustments in relative prices.
Praise for the first edition:

"[This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework." (Statistics in Medicine)

"What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters." (MAA Reviews)

Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models.
About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
This book has two components: stochastic dynamics and stochastic random combinatorial analysis. The first discusses evolving patterns of interactions of a large but finite number of agents of several types. Changes of agent types or their choices or decisions over time are formulated as jump Markov processes with suitably specified transition rates: optimisations by agents make these rates generally endogenous. Probabilistic equilibrium selection rules are also discussed, together with the distributions of relative sizes of the bases of attraction. As the number of agents approaches infinity, we recover deterministic macroeconomic relations of more conventional economic models. The second component analyses how agents form clusters of various sizes. This has applications for discussing sizes or shares of markets by various agents which involve some combinatorial analysis patterned after the population genetics literature. These are shown to be relevant to distributions of returns to assets, volatility of returns, and power laws.
Ranking of Multivariate Populations: A Permutation Approach with Applications presents a novel permutation-based nonparametric approach for ranking several multivariate populations. Using data collected from both experimental and observational studies, it covers some of the most useful designs widely applied in research and industry investigations, such as multivariate analysis of variance (MANOVA) and multivariate randomized complete block (MRCB) designs. The first section of the book introduces the topic of ranking multivariate populations by presenting the main theoretical ideas and an in-depth literature review. The second section discusses a large number of real case studies from four specific research areas: new product development in industry, perceived quality of the indoor environment, customer satisfaction, and cytological and histological analysis by image processing. A web-based nonparametric combination global ranking software is also described. Designed for practitioners and postgraduate students in statistics and the applied sciences, this application-oriented book offers a practical guide to the reliable global ranking of multivariate items, such as products, processes, and services, in terms of the performance of all investigated products/prototypes.
Today econometrics is widely applied in the empirical study of economics. As an empirical science, econometrics uses rigorous mathematical and statistical methods for economic problems. Understanding the methodologies of both econometrics and statistics is a crucial point of departure for econometrics. The primary focus of this book is to provide an understanding of the statistical properties behind econometric methods. Following the introduction in Chapter 1, Chapter 2 provides a methodological review of both econometrics and statistics in different periods since the 1930s. Chapters 3 and 4 explain the underlying theoretical methodologies for estimated equations in the simple regression and multiple regression models and discuss the debates about p-values in particular. This part of the book offers the reader a richer understanding of the methods of statistics behind the methodology of econometrics. Chapters 5-9 of the book are focused on the discussion of regression models using time series data, traditional causal econometric models, and the latest statistical techniques. By concentrating on dynamic structural linear models like state-space models and the Bayesian approach, the book alludes to the fact that this methodological study is not only a science but also an art. This work serves as a handy reference book for anyone interested in econometrics, particularly students and academic and business researchers in all quantitative analysis fields.
Explains modern SDC (statistical disclosure control) techniques for data stewards and develops tools to implement them. Explains the logic behind modern privacy protections for researchers and how they may use publicly released data to generate valid statistical inferences, as well as the limitations imposed by SDC techniques.
This volume, edited by Jeffrey Racine, Liangjun Su, and Aman Ullah, contains the latest research on nonparametric and semiparametric econometrics and statistics. These data-driven models seek to replace the "classical" parametric models of the past, which were rigid and often linear. Chapters by leading international econometricians and statisticians highlight the interface between econometrics and statistical methods for nonparametric and semiparametric procedures. They provide a balanced view of new developments in the analysis and modeling of applied sciences with cross-section, time series, panel, and spatial data sets. The major topics of the volume include: the methodology of semiparametric models and special regressor methods; inverse, ill-posed, and well-posed problems; different methodologies related to additive models; sieve regression estimators, nonparametric and semiparametric regression models, and the true error of competing approximate models; support vector machines and their modeling of default probability; series estimation of stochastic processes and some of their applications in econometrics; identification, estimation, and specification problems in a class of semilinear time series models; nonparametric and semiparametric techniques applied to nonstationary or near nonstationary variables; the estimation of a set of regression equations; and a new approach to the analysis of nonparametric models with exogenous treatment assignment.
A comprehensive account of economic size distributions around the world and throughout the years. In the course of the past 100 years, economists and applied statisticians have developed a remarkably diverse variety of income distribution models, yet no single resource convincingly accounts for all of these models, analyzing their strengths and weaknesses, similarities and differences. Statistical Size Distributions in Economics and Actuarial Sciences is the first collection to systematically investigate a wide variety of parametric models that deal with income, wealth, and related notions. Christian Kleiber and Samuel Kotz survey, complement, compare, and unify all of the disparate models of income distribution, highlighting at times a lack of coordination between them that can result in unnecessary duplication. Considering models from eight languages and all continents, the authors discuss the social and economic implications of each as well as distributions of size of loss in actuarial applications. Specific models covered include:
Three appendices provide brief biographies of some of the leading players along with the basic properties of each of the distributions. Actuaries, economists, market researchers, social scientists, and physicists interested in econophysics will find Statistical Size Distributions in Economics and Actuarial Sciences to be a truly one-of-a-kind addition to the professional literature.
With the rapidly advancing fields of Data Analytics and Computational Statistics, it's important to keep up with current trends, methodologies, and applications. This book investigates the role of data mining in computational statistics for machine learning. It offers applications that can be used in various domains and examines the role of transformation functions in optimizing problem statements. Data Analytics, Computational Statistics, and Operations Research for Engineers: Methodologies and Applications presents applications of computationally intensive methods, inference techniques, and survival analysis models. It discusses how data mining extracts information and how machine learning improves the computational model based on the new information. Those interested in this reference work will include students, professionals, and researchers working in the areas of data mining, computational statistics, operations research, and machine learning.
It is well-known that modern stochastic calculus has been exhaustively developed under usual conditions. Despite such a well-developed theory, there is evidence to suggest that these very convenient technical conditions cannot necessarily be fulfilled in real-world applications. Optional Processes: Theory and Applications seeks to delve into the existing theory, new developments and applications of optional processes on "unusual" probability spaces. The development of stochastic calculus of optional processes marks the beginning of a new and more general form of stochastic analysis. This book aims to provide an accessible, comprehensive and up-to-date exposition of optional processes and their numerous properties. Furthermore, the book presents not only the current theory of optional processes, but it also contains a spectrum of applications to stochastic differential equations, filtering theory and mathematical finance.

Features:

- Suitable for graduate students and researchers in mathematical finance, actuarial science, applied mathematics and related areas
- Compiles almost all essential results on the calculus of optional processes in unusual probability spaces
- Contains many advanced analytical results for stochastic differential equations and statistics pertaining to the calculus of optional processes
- Develops new methods in finance based on optional processes, such as a new portfolio theory, a defaultable claim pricing mechanism, etc.
This book addresses one of the most important research activities in empirical macroeconomics. It provides a course of advanced but intuitive methods and tools enabling the spatial and temporal disaggregation of basic macroeconomic variables and the assessment of the statistical uncertainty of the outcomes of disaggregation. The empirical analysis focuses mainly on GDP and its growth in the context of Poland. However, all of the methods discussed can be easily applied to other countries. The approach used in the book views spatial and temporal disaggregation as a special case of the estimation of missing observations (a topic in missing data analysis). The book presents an econometric treatment of Seemingly Unrelated Regression Equations (SURE) models. The main advantage of the SURE specification in tackling the research problem presented is that it allows for heterogeneity of the parameters describing relations between macroeconomic indicators. The book contains model specification, as well as descriptions of stochastic assumptions and the resulting procedures of estimation and testing. The method also addresses uncertainty in the estimates produced. All of the necessary tests and assumptions are presented in detail. The results are designed to serve as a source of invaluable information making regional analyses more convenient and, more importantly, comparable. It will create a solid basis for making conclusions and recommendations concerning regional economic policy in Poland, particularly regarding the assessment of the economic situation. This is essential reading for academics, researchers, and economists with regional analysis as their field of expertise, as well as central bankers and policymakers.
This book is an ideal introduction for beginning students of econometrics that assumes only basic familiarity with matrix algebra and calculus. It features practical questions which can be answered using econometric methods and models. Focusing on a limited number of the most basic and widely used methods, the book reviews the basics of econometrics before concluding with a number of recent empirical case studies. The volume is an intuitive illustration of what econometricians do when faced with practical questions.
This book explores Latin American inequality, broadly in terms of its impact on the region's development, and specifically through two country studies from Peru on earnings inequality and on child labor as a consequence of inequality. The first chapter provides substantial recent updated analysis of the critical thesis of deindustrialization for Latin America. The second chapter provides an approach to measuring labor market discrimination that departs from the current treatment of unobservable influences in the literature. The third chapter examines the much-neglected topic of child labor using a panel data set specifically on children. The book is appropriate for courses on economic development and labor economics and for anyone interested in inequality, development and applied econometrics.
You may like...

- Operations and Supply Chain Management by James Evans, David Collier (Hardcover)
- Uniform Distribution and Quasi-Monte… by Christoph Aistleitner, Jozsef Beck, … (Hardcover)
- The Leading Indicators - A Short History… by Zachary Karabell (Paperback)
- Operations And Supply Chain Management by David Collier, James Evans (Hardcover)
- Qualitative Techniques for Workplace… by Manish Gupta, Musarrat Shaheen, … (Hardcover)