This textbook introduces readers to practical statistical issues by presenting them within the context of real-life economics and business situations. It presents the subject in a non-threatening manner, with an emphasis on concise, easily understandable explanations. It has been designed to be accessible and student-friendly and, as an added learning feature, provides all the relevant data required to complete the accompanying exercises and computing problems, which are presented at the end of each chapter. It also discusses index numbers and inequality indices in detail, since these are of particular importance to students and commonly omitted in textbooks. Throughout the text it is assumed that the student has no prior knowledge of statistics. It is aimed primarily at business and economics undergraduates, providing them with the basic statistical skills necessary for further study of their subject. However, students of other disciplines will also find it relevant.
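The index numbers and inequality indices highlighted above are simple enough to demonstrate directly. As a hedged sketch (not taken from the textbook; the function and sample data are my own), the Gini coefficient of a small income sample can be computed from the mean absolute difference:

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # For sorted data, sum over all pairs of |x_i - x_j| equals
    # 2 * sum_i (2*i - n + 1) * x_i, with 0-based index i.
    pair_sum = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return pair_sum / (n * total)

# Perfect equality yields 0; concentration pushes the index toward 1.
equal = gini([10, 10, 10, 10])   # 0.0
skewed = gini([1, 1, 1, 97])     # 0.72, near the n=4 maximum of 0.75
```

Price indices such as Laspeyres or Paasche can be coded in a similarly small number of lines, which is one reason these topics reward the hands-on exercises the book provides.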
Biophysical Measurement in Experimental Social Science Research is an ideal primer for the experimental social scientist wishing to update their knowledge and skillset in the area of laboratory-based biophysical measurement. Many behavioral laboratories across the globe have acquired increasingly sophisticated biophysical measurement equipment, sometimes for particular research projects or for financial or institutional reasons. Yet the expertise required to use this technology and integrate the measures it can generate on human subjects into successful social science research endeavors is often scarce and concentrated amongst a small minority of researchers. This book aims to open the door to wider and more productive use of biophysical measurement in laboratory-based experimental social science research. Suitable for doctoral students through to established researchers, the volume presents examples of the successful integration of biophysical measures into analyses of human behavior, discussions of the academic and practical limitations of laboratory-based biophysical measurement, and hands-on guidance about how different biophysical measurement devices are used. A foreword and concluding chapters comprehensively synthesize and compare biophysical measurement options, address academic, ethical and practical matters, and situate the work in its broader historical and scientific context. Research chapters demonstrate the academic potential of biophysical measurement ranging fully across galvanic skin response, heart rate monitoring, eye tracking and direct neurological measurements. An extended Appendix showcases specific examples of device adoption in experimental social science lab settings.
The methodological needs of environmental studies are unique in the breadth of research questions that can be posed, calling for a textbook that covers a broad swath of approaches to conducting research with potentially many different kinds of evidence. Fully updated to address new developments such as the effects of the internet, recent trends in the use of computers, remote sensing, and large data sets, this new edition of Research Methods for Environmental Studies is written specifically for social science-based research into the environment. This revised edition contains new chapters on coding, focus groups, and an extended treatment of hypothesis testing. The textbook covers the best-practice research methods most used to study the environment and its connections to societal and economic activities and objectives. Over five key parts, Kanazawa introduces quantitative and qualitative approaches, mixed methods, and the special requirements of interdisciplinary research, emphasizing that methodological practice should be tailored to the specific needs of the project. Within these parts, detailed coverage is provided on key topics including the identification of a research project, hypothesis testing, spatial analysis, the case study method, ethnographic approaches, discourse analysis, mixed methods, survey and interview techniques, focus groups, and ethical issues in environmental research. Drawing on a variety of extended and updated examples to encourage problem-based learning and fully addressing the challenges associated with interdisciplinary investigation, this book will be an essential resource for students embarking on courses exploring research methods in environmental studies.
Today, information is critical for businesses: those that use it effectively succeed, while those that do not fall behind. Social media is an important source of data, and that data is the raw material of social media analytics. Surveys are no longer the only way to hear the voice of consumers; with the data obtained from social media platforms, businesses can devise marketing strategies and gain a better understanding of consumer behavior. As consumers are at the center of all business activities, it is unrealistic to expect success without understanding consumption patterns. Social media analytics is especially useful for marketers, who can evaluate the data to make strategic marketing plans. Social media analytics and consumer behavior are two important issues that need to be addressed together, and this book is distinctive in treating social media analytics in detail from the perspective of consumer behavior. The book will be useful to students, businesses, and marketers in many respects.
Essentials of Time Series for Financial Applications serves as an agile reference for upper-level students and practitioners who want a formal, easy-to-follow introduction to the most important time series methods applied in finance (pricing, asset management, quant strategies, and risk management). Real-life data and examples developed with EViews illustrate the links between the formal apparatus and the applications. The examples either directly exploit the tools that EViews makes available or use EViews programs that implement specific topics or techniques. The book balances a formal framework, with as few proofs as possible, against many examples that support its central ideas. Boxes are used throughout to remind readers of technical aspects and definitions and to present examples in a compact fashion, with full details (worked-out files) available in an online appendix. The more advanced chapters provide discussion sections that refer to more advanced textbooks or detailed proofs.
This is the first textbook designed to teach statistics to students in aviation courses. All examples and exercises are grounded in an aviation context, including flight instruction, air traffic control, airport management, and human factors. Structured in six parts, the book covers the key foundational topics of descriptive and inferential statistics, including hypothesis testing, confidence intervals, z and t tests, correlation, regression, ANOVA, and chi-square. In addition, this book promotes both procedural knowledge and conceptual understanding. Detailed, guided examples are presented from the perspective of conducting a research study. Each analysis technique is clearly explained, enabling readers to understand, carry out, and report results correctly. Students are further supported by a range of pedagogical features in each chapter, including objectives, a summary, and a vocabulary check. Digital supplements comprise downloadable data sets and short video lectures explaining key concepts. Instructors also have access to PPT slides and an instructor’s manual that consists of a test bank with multiple choice exams, exercises with data sets, and solutions. This is the ideal statistics textbook for aviation courses globally, especially in aviation statistics, research methods in aviation, human factors, and related areas.
This book is intended to provide the reader with a firm conceptual and empirical understanding of basic information-theoretic econometric models and methods. Because most data are observational, practitioners work with indirect noisy observations and ill-posed econometric models in the form of stochastic inverse problems. Consequently, traditional econometric methods in many cases are not applicable for answering many of the quantitative questions that analysts wish to ask. After initial chapters deal with parametric and semiparametric linear probability models, the focus turns to solving nonparametric stochastic inverse problems. In succeeding chapters, a family of power divergence measure likelihood functions is introduced for a range of traditional and nontraditional econometric-model problems. Finally, within either an empirical maximum likelihood or loss context, Ron C. Mittelhammer and George G. Judge suggest a basis for choosing a member of the divergence family.
This comprehensive book is an introduction to multilevel Bayesian models in R using brms and the Stan programming language. Featuring a series of fully worked analyses of repeated-measures data, focus is placed on active learning through the analyses of the progressively more complicated models presented throughout the book. In this book, the authors offer an introduction to statistics entirely focused on repeated measures data, beginning with very simple two-group comparisons and ending with multinomial regression models with many 'random effects'. Across 13 well-structured chapters, readers are provided with all the code necessary to run all the analyses and make all the plots in the book, as well as useful examples of how to interpret and write up their own analyses. This book provides an accessible introduction for readers in any field, with any level of statistical background. Senior undergraduate students, graduate students, and experienced researchers looking to 'translate' their skills with more traditional models to a Bayesian framework will benefit greatly from the lessons in this text.
Self-contained chapters on the most important applications and methodologies in finance, which can easily be used for the reader’s research or as a reference for courses on empirical finance. Each chapter is reproducible in the sense that the reader can replicate every single figure, table, or number by simply copy-pasting the code we provide. A full-fledged introduction to machine learning with tidymodels, based on tidy principles, shows how factor selection and option pricing can benefit from machine learning methods. Chapter 2 on accessing & managing financial data shows how to retrieve and prepare the most important datasets in the field of financial economics: CRSP and Compustat. The chapter also contains detailed explanations of the most important data characteristics. Each chapter provides exercises that are based on established lectures and exercise classes and which are designed to help students dig deeper. The exercises can be used for self-study or as a source of inspiration for teaching exercises.
In this monograph the authors give a systematic approach to the probabilistic properties of the fixed point equation X=AX+B. A probabilistic study of the stochastic recurrence equation X_t=A_tX_{t-1}+B_t for real- and matrix-valued random variables A_t, where (A_t,B_t) constitute an iid sequence, is provided. The classical theory for these equations, including the existence and uniqueness of a stationary solution, the tail behavior with special emphasis on power law behavior, moments and support, is presented. The authors collect recent asymptotic results on extremes, point processes, partial sums (central limit theory with special emphasis on infinite variance stable limit theory), large deviations, in the univariate and multivariate cases, and they further touch on the related topics of smoothing transforms, regularly varying sequences and random iterative systems. The text gives an introduction to the Kesten-Goldie theory for stochastic recurrence equations of the type X_t=A_tX_{t-1}+B_t. It provides the classical results of Kesten, Goldie, Guivarc'h, and others, and gives an overview of recent results on the topic. It presents the state-of-the-art results in the field of affine stochastic recurrence equations and shows relations with non-affine recursions and multivariate regular variation.
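The recurrence X_t = A_t X_{t-1} + B_t is easy to explore numerically. Below is a minimal simulation sketch (an illustration under parameter choices of my own, not code from the monograph) of a regime where E[log|A_t|] < 0, so that a unique stationary solution exists:

```python
import random

random.seed(42)

def simulate(n_steps, x0=0.0):
    """Simulate X_t = A_t * X_{t-1} + B_t with an iid sequence (A_t, B_t)."""
    x = x0
    path = []
    for _ in range(n_steps):
        a = random.uniform(0.0, 0.9)   # E[log|A|] < 0: contraction on average
        b = random.gauss(0.0, 1.0)
        x = a * x + b
        path.append(x)
    return path

path = simulate(10_000)
# Because |A_t| < 0.9 here, the path stays stochastically bounded,
# consistent with the existence of a stationary solution.
```

The more interesting Kesten-Goldie regime arises when P(|A_t| > 1) > 0 while E[log|A_t|] < 0 still holds: the stationary law then develops the power-law tails that the monograph studies in detail.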
The book evaluates the importance of constitutional rules and property rights for the German economy in 1990-2015. It is an economic historical study embedded in institutional economics with main references to positive constitutional economics and the property rights theory. This interdisciplinary work adopts a theoretical-empirical dimension and a qualitative-quantitative approach. Formal institutions played a fundamental role in Germany's post-reunification economic changes. They set the legal and institutional framework for the transition process of Eastern Germany and the unification, integration and convergence between the two parts of the country. Although the latter process was not completed, the effects of these formal rules were positive, especially for the former GDR.
The book describes the theoretical principles of nonstatistical methods of data analysis but without going deep into complex mathematics. The emphasis is laid on presentation of solved examples of real data either from authors' laboratories or from open literature. The examples cover wide range of applications such as quality assurance and quality control, critical analysis of experimental data, comparison of data samples from various sources, robust linear and nonlinear regression as well as various tasks from financial analysis. The examples are useful primarily for chemical engineers including analytical/quality laboratories in industry, designers of chemical and biological processes. Features: Exclusive title on Mathematical Gnostics with multidisciplinary applications, and specific focus on chemical engineering. Clarifies the role of data space metrics including the right way of aggregation of uncertain data. Brings a new look on the data probability, information, entropy and thermodynamics of data uncertainty. Enables design of probability distributions for all real data samples including smaller ones. Includes data for examples with solutions with exercises in R or Python. The book is aimed for Senior Undergraduate Students, Researchers, and Professionals in Chemical/Process Engineering, Engineering Physics, Stats, Mathematics, Materials, Geotechnical, Civil Engineering, Mining, Sales, Marketing and Service, and Finance.
Military organizations around the world are major producers and consumers of data. Accordingly, they stand to gain from the many benefits associated with data analytics. However, for leaders in defense organizations, whether in government or industry, accessible use cases are not always available. This book presents a diverse collection of cases that explore the realm of possibilities in military data analytics. These use cases explore topics such as: context for maritime situation awareness; data analytics for electric power and energy applications; environmental data analytics in military operations; data analytics and training effectiveness evaluation; harnessing single board computers for military data analytics; and analytics for military training in virtual reality environments. A chapter on using single board computers explores their application in a variety of domains, including wireless sensor networks, unmanned vehicles, and cluster computing. The investigation into a process for extracting and codifying expert knowledge provides a practical and useful model for soldiers that can support diagnostics, decision making, analysis of alternatives, and myriad other analytical processes. Data analytics is seen as having a role in military learning, and a chapter in the book describes ongoing work with the United States Army Research Laboratory to apply data analytics techniques to the design of courses, the evaluation of individual and group performance, and the ability to tailor the learning experience to achieve optimal learning outcomes in a minimum amount of time. Another chapter discusses how virtual reality and analytics are transforming the training of military personnel, as well as monitoring, decision making, readiness, and operations. Military Applications of Data Analytics brings together a collection of technical and application-oriented use cases.
It enables decision makers and technologists to make connections between data analytics and such fields as virtual reality and cognitive science that are driving military organizations around the world forward.
Dependence Modeling with Copulas covers the substantial advances that have taken place in the field during the last 15 years, including vine copula modeling of high-dimensional data. Vine copula models are constructed from a sequence of bivariate copulas. The book develops generalizations of vine copula models, including common and structured factor models that extend from the Gaussian assumption to copulas. It also discusses other multivariate constructions and parametric copula families that have different tail properties and presents extensive material on dependence and tail properties to assist in copula model selection. The author shows how numerical methods and algorithms for inference and simulation are important in high-dimensional copula applications. He presents the algorithms as pseudocode, illustrating their implementation for high-dimensional copula models. He also incorporates results to determine dependence and tail properties of multivariate distributions for future constructions of copula models.
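As a hedged illustration of the bivariate building block behind vine constructions (an example of my own, not the author's pseudocode), a Gaussian copula can be sampled by transforming correlated normals to uniform margins:

```python
import math
import random
from statistics import NormalDist

random.seed(0)
std_normal = NormalDist()

def gaussian_copula_sample(rho):
    """One draw (u, v) from a bivariate Gaussian copula with parameter rho."""
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    # The probability integral transform gives Uniform(0, 1) margins
    # while preserving the dependence structure of (z1, z2).
    return std_normal.cdf(z1), std_normal.cdf(z2)

draws = [gaussian_copula_sample(0.8) for _ in range(5_000)]
mean_u = sum(u for u, _ in draws) / len(draws)   # close to 0.5 by uniformity
```

The Gaussian copula has no tail dependence, which is exactly why the book's material on families with different tail properties matters: when joint extremes are the concern, alternatives such as t or Gumbel copulas are the appropriate choice.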
This book summarizes the results of the workshop "Uniform Distribution and Quasi-Monte Carlo Methods" of the RICAM Special Semester on "Applications of Algebra and Number Theory" in October 2013. The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, for example in finance, computer graphics, and biology. The goal of this book is to give an overview of recent developments in uniform distribution theory, quasi-Monte Carlo methods, and their applications, presented by leading experts in these lively fields of research.
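As a hedged sketch of the quasi-Monte Carlo idea (my own example, not drawn from the workshop volume), the base-2 van der Corput sequence is the simplest deterministic low-discrepancy point set, and it already integrates smooth functions more evenly than random sampling:

```python
def van_der_corput(n, base=2):
    """n-th van der Corput point: the radical inverse of n in the given base."""
    q, denom = 0.0, 1.0
    while n > 0:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def qmc_estimate(f, n_points):
    """Quasi-Monte Carlo rule: average f over the first n_points sequence values."""
    pts = [van_der_corput(i) for i in range(1, n_points + 1)]
    return sum(f(p) for p in pts) / n_points

# The integral of x^2 over [0, 1] is exactly 1/3; the QMC estimate
# converges at roughly O(log(n)/n) rather than the Monte Carlo O(1/sqrt(n)).
approx = qmc_estimate(lambda x: x * x, 4096)
```

For multidimensional integrands, the Halton and Sobol constructions surveyed in books like this one generalize the same radical-inverse idea coordinate by coordinate.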
The second book in a set of ten on quantitative finance for practitioners Presents the theory needed to better understand applications Supplements previous training in mathematics Built from the author's four decades of experience in industry, research, and teaching
In-depth coverage of discrete-time theory and methodology. Numerous fully worked examples and exercises in every chapter. Mathematically rigorous and consistent, yet bridging various basic and more advanced concepts. A judicious balance of financial theory and mathematical and computational methods.
Operations Research (OR) methods are used in almost every field of modern life, including industry, economics, and medicine. In this volume, the authors have compiled some of the best of the latest advancements in these methods, offering a direct shortcut to the approaches readers may be searching for. The book provides useful applications of new developments in OR, written by leading scientists from international universities. Another volume on applications of Operations Research is planned for the near future. We hope you enjoy and benefit from this series!
Features Accessible to readers with a basic background in probability and statistics Covers fundamental concepts of experimental design and cause-effect relationships Introduces classical ANOVA models, including contrasts and multiple testing Provides an example-based introduction to mixed models Features basic concepts of split-plot and incomplete block designs R code available for all steps Supplementary website with additional resources and updates
The role of franchising in industry evolution is explored in this book, both in terms of the emergence of franchising and its impact on industry structure. Examining the literature and statistical information, the first section provides an overview of franchising. The Role of Franchising on Industry Evolution then focuses on two core elements: the emergence of franchising and the contextual drivers prompting its adoption, and the impact of franchising on industry-level structural changes. Through two industry case studies, the author demonstrates how franchising can fundamentally transform an industry's structure from fragmentation to consolidation.
In the modern world, data is a vital asset for any organization, regardless of industry or size. The world is built upon data. However, data without knowledge is useless. The aim of this book, briefly, is to introduce new approaches that can be used to shape and forecast the future by combining the two disciplines of Statistics and Economics. Readers of Modeling and Advanced Techniques in Modern Economics can find valuable information from a diverse group of experts on topics such as finance, econometric models, stochastic financial models and machine learning, and the application of models to financial and macroeconomic data.
Tackling the cybersecurity challenge is a matter of survival for society at large. Cyber attacks are rapidly increasing in sophistication and magnitude, and in their destructive potential. New threats emerge regularly, with the last few years having seen a ransomware boom and distributed denial-of-service attacks leveraging the Internet of Things. For organisations, cybersecurity risk management is essential for managing these threats, yet current frameworks have drawbacks that can lead to the suboptimal allocation of cybersecurity resources. Cyber insurance has been touted as part of the solution, based on the idea that insurers can incentivize companies to improve their cybersecurity by offering premium discounts, but cyber insurance levels remain limited. This is because companies have difficulty determining which cyber insurance products to purchase, and insurance companies struggle to accurately assess cyber risk and thus to develop cyber insurance products. To deal with these challenges, this volume presents new models for cybersecurity risk management, partly based on the use of cyber insurance. It contains a set of mathematical models for cybersecurity risk management, including (i) a model to assist companies in determining their optimal budget allocation between security products and cyber insurance and (ii) a model to assist insurers in designing cyber insurance products. The models use adversarial risk analysis to account for the behavior of threat actors, as well as the behavior of companies and insurers. To inform these models, we draw on psychological and behavioural economics studies of decision-making by individuals regarding cybersecurity and cyber insurance, as well as on organizational decision-making studies involving cybersecurity and cyber insurance.
Its theoretical and methodological findings will appeal to researchers across a wide range of cybersecurity-related disciplines, including risk and decision analysis, analytics, technology management, actuarial sciences, behavioural sciences, and economics. The practical findings will help cybersecurity professionals and insurers enhance cybersecurity and cyber insurance, thus benefiting society as a whole. This book grew out of a two-year European Union-funded project under Horizon 2020, called CYBECO (Supporting Cyber Insurance from a Behavioral Choice Perspective).
Praise for the first edition: [This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework. -Statistics in Medicine What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters. -MAA Reviews Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models. 
About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
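The state-space approach that runs through Kitagawa's book can be illustrated with its simplest instance. The following is a hedged sketch (a minimal example of my own in Python, not the book's R/TSSS code) of the Kalman filter for a local-level model:

```python
def kalman_filter(ys, q=0.1, r=1.0, x0=0.0, p0=10.0):
    """Filtered state means for the local-level state-space model
    x_t = x_{t-1} + w_t, w_t ~ N(0, q);  y_t = x_t + v_t, v_t ~ N(0, r)."""
    x, p = x0, p0
    means = []
    for y in ys:
        p = p + q                # predict: the random walk inflates variance
        k = p / (p + r)          # Kalman gain
        x = x + k * (y - x)      # update: blend prediction and observation
        p = (1.0 - k) * p
        means.append(x)
    return means

filtered = kalman_filter([1.0, 1.2, 0.9, 1.1, 1.0])
# The filtered mean quickly settles near the level of the observations.
```

Replacing the Gaussian update step with a mixture or simulation-based approximation is, conceptually, how the non-Gaussian and particle filters treated in the book extend this same recursion.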