This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Unlike other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to handle violations of the classical linear regression assumptions in practice, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences.
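As a taste of the kind of hands-on analysis described here, the following is a minimal R sketch (using the built-in mtcars data set rather than any data from the book) of a simple and a multiple linear regression, followed by the standard diagnostic plots for checking the key regression assumptions:
data(mtcars)
fit_simple <- lm(mpg ~ wt, data = mtcars)        # simple linear regression: mpg on weight
summary(fit_simple)                              # coefficients, t-tests, R-squared
fit_multiple <- lm(mpg ~ wt + hp, data = mtcars) # multiple regression: add horsepower
summary(fit_multiple)
par(mfrow = c(2, 2))                             # standard diagnostic plots for the
plot(fit_multiple)                               # key regression assumptions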
Assuming no prior knowledge or technical skills, Getting Started with Business Analytics: Insightful Decision-Making explores the contents, capabilities, and applications of business analytics. It bridges the worlds of business and statistics and describes business analytics from a non-commercial standpoint. The authors demystify the main concepts and terminology and give many examples of real-world applications. The first part of the book introduces business data and recent technologies that have promoted fact-based decision-making. The authors look at how business intelligence differs from business analytics. They also discuss the main components of a business analytics application and the various requirements for integrating business with analytics. The second part presents the technologies underlying business analytics: data mining and data analytics. The book helps you understand the key concepts and ideas behind data mining and shows how data mining has expanded into data analytics when considering new types of data such as network and text data. The third part explores business analytics in depth, covering customer, social, and operational analytics. Each chapter in this part incorporates hands-on projects based on publicly available data. Helping you make sound decisions based on hard data, this self-contained guide provides an integrated framework for data mining in business analytics. It takes you on a journey through this data-rich world, showing you how to deploy business analytics solutions in your organization.
The chapter starts with a positioning of this dissertation in the marketing discipline. It then provides a comparison of the two most popular methods for studying consumer preferences/choices, namely conjoint analysis and discrete choice experiments. Chapter 1 continues with a description of the context of discrete choice experiments. Subsequently, the research problems and the objectives of this dissertation are discussed. The chapter concludes with an outline of the organization of this dissertation. 1.1 Positioning of the Dissertation During this century, increasing globalization and technological progress have forced companies to undergo rapid and dramatic changes, which for some are a threat and for others offer new opportunities. Companies have to survive in a Darwinian marketplace where the principle of natural selection applies. Marketplace success goes to those companies that are able to produce marketable value, i.e., products and services that others are willing to purchase (Kotler 1997). Every company must be engaged in new-product development to create the new products customers want, because competitors will do their best to supply them. Besides offering competitive advantages, new products usually lead to sales growth and stability. As household incomes increase and consumers become more selective, firms need to know how consumers respond to different features and appeals. Successful products and services begin with a thorough understanding of consumer needs and wants. Stated otherwise, companies need to know about consumer preferences to manufacture tailor-made products that consumers are willing to buy.
High-Performance Computing (HPC) delivers higher computational performance to solve problems in science, engineering and finance. There are various HPC resources available for different needs, ranging from cloud computing, which can be used without much expertise or expense, to more tailored hardware such as Field-Programmable Gate Arrays (FPGAs) or D-Wave's quantum computer systems. High-Performance Computing in Finance is the first book that provides a state-of-the-art introduction to HPC for finance, capturing both academically and practically relevant problems.
Like the preceding volumes, which met with a lively response, the present volume collects contributions focused either on methodology or on successful industrial applications. The papers are classified under four main headings: sampling inspection, process quality control, data analysis and process capability studies, and finally experimental design.
In the first part of this book, bargaining experiments with different economic and ethical frames are investigated. The distributive principles and norms the subjects apply, and their justifications for these principles, are evaluated. The bargaining processes and the resulting agreements are analyzed. In the second part, different bargaining theories are presented and the corresponding solutions are axiomatically characterized. A bargaining concept with goals that depend on economic and ethical features of the bargaining situation is introduced. Observations from the experimental data lead to the ideas for the axiomatic characterization of a bargaining solution with goals.
In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators - that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises eight reports that describe the use of indirect estimators, based on case studies from a variety of federal programs. Many researchers will find that this book provides a valuable survey of how indirect estimators are used in practice and addresses some of the pitfalls of these methods.
In order to obtain many of the classical results in the theory of statistical estimation, it is usual to impose regularity conditions on the distributions under consideration. In small sample and large sample theories of estimation there are well established sets of regularity conditions, and it is worthwhile to examine what may follow if any one of these regularity conditions fails to hold. "Non-regular estimation" literally means the theory of statistical estimation when some or other of the regularity conditions fail to hold. In this monograph, the authors present a systematic study of the meaning and implications of regularity conditions, and show how the relaxation of such conditions can often lead to surprising conclusions. Their emphasis is on small sample results and on showing how pathological examples may be considered in this broader framework.
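A standard illustration of non-regular estimation (chosen here to give a flavour of the topic; it is an assumption, not an example taken from the monograph) is the uniform(0, theta) model, whose support depends on the parameter: the maximum-likelihood estimator max(x) then converges at rate n rather than sqrt(n), with an exponential rather than a normal limit. A short R simulation makes this visible:
set.seed(7)
theta <- 1
n     <- 500
mle   <- replicate(5000, max(runif(n, 0, theta)))   # MLE of theta in each simulated sample
# n * (theta - MLE) should look like an Exp(1/theta) distribution, not a normal one
hist(n * (theta - mle), breaks = 50, freq = FALSE)
curve(dexp(x, rate = 1 / theta), add = TRUE, col = "red")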
An introduction to the foundations of statistics and their computer-based application, aimed specifically at students of economics and the social sciences. From the contents: data entry and data modification; frequency distributions and descriptive statistics; exploratory data analysis; cross-tabulations and measures of association; statistical tests; correlation measures; scatter plots; regression analysis; trend analysis and curve fitting; time series analysis; factor analysis; cluster analysis; discriminant analysis; exercises.
This book deals with Business Analytics (BA), an emerging area in modern business decision making. Business analytics is a data-driven decision-making approach that uses statistical and quantitative analysis along with data mining, management science, and fact-based data to measure past business performance and to guide an organization in business planning and effective decision making. Business Analytics tools are also used to predict future business outcomes with the help of forecasting and predictive modeling. In this age of technology, massive amounts of data are collected by companies. Successful companies treat their data as an asset and use it for competitive advantage. Business Analytics is helping businesses make informed business decisions and automate and optimize business processes. Successful business analytics depends on the quality of the data, on skilled analysts who understand the technologies and their business, and on an organizational commitment to data-driven decision making.
This is an excerpt from the 4-volume dictionary of economics, a reference book which aims to define the subject of economics today. 1300 subject entries in the complete work cover the broad themes of economic theory. This extract concentrates on time series and statistics.
A new procedure for the maximum-likelihood estimation of dynamic econometric models with errors in both endogenous and exogenous variables is presented in this monograph. A complete analytical development of the expressions used in problems of estimation and verification of models in state-space form is presented. The results are useful in relation not only to the problem of errors in variables but also to any other possible econometric application of state-space formulations.
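By way of context only (a generic sketch, not the monograph's own procedure or data), maximum-likelihood estimation of a simple state-space model can be carried out in base R with StructTS, which fits a local level model via the Kalman filter:
data(Nile)
fit <- StructTS(Nile, type = "level")           # ML fit of a local level state-space model
fit$coef                                        # estimated observation and level variances
filtered <- as.numeric(fitted(fit))             # Kalman-filtered level estimates
plot(Nile)
lines(as.numeric(time(Nile)), filtered, col = "red")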
In each chapter of this volume some specific topics in the econometric analysis of time series data are studied. All topics have in common the statistical inference in linear models with correlated disturbances. The main aim of the study is to give a survey of new and old estimation techniques for regression models with disturbances that follow an autoregressive-moving average process. The final chapter also discusses several test strategies for discriminating between various types of autocorrelation. In nearly all chapters it is demonstrated how useful the simple geometric interpretation of the well-known ordinary least squares (OLS) method is. By applying these geometric concepts to linear spaces spanned by scalar stochastic variables, it emerges that well-known as well as new results can be derived in a simple geometric manner, sometimes without the limiting restrictions of the usual derivations, e.g., the conditional normal distribution, the Kalman filter equations and the Cramer-Rao inequality. The outline of the book is as follows. In Chapter 2 attention is paid to a generalization of the well-known first order autocorrelation transformation of a linear regression model with disturbances that follow a first order Markov scheme. Firstly, the appropriate lower triangular transformation matrix is derived for the case that the disturbances follow a moving average process of order q (MA(q)). It turns out that the calculations can be carried out either analytically or in a recursive manner.
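For readers unfamiliar with the first order autocorrelation transformation mentioned above, the following R sketch (an illustration on simulated data, not code from the book) writes it as an explicit lower triangular matrix P for AR(1) disturbances, so that OLS on the transformed model gives the GLS estimate:
set.seed(1)
n   <- 100
rho <- 0.7                                       # assumed AR(1) parameter
x   <- cbind(1, rnorm(n))                        # regressors including an intercept
u   <- as.numeric(arima.sim(list(ar = rho), n))  # AR(1) disturbances
y   <- drop(x %*% c(2, 1)) + u
P <- diag(n)                                     # lower triangular transformation matrix
P[1, 1] <- sqrt(1 - rho^2)
for (t in 2:n) P[t, t - 1] <- -rho
y_star <- drop(P %*% y)                          # transformed disturbances are uncorrelated
x_star <- P %*% x                                # and homoskedastic
coef(lm(y_star ~ x_star - 1))                    # GLS estimate via OLS on transformed data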
In many branches of science relevant observations are taken sequentially over time. Bayesian Analysis of Time Series discusses how to use models that explain the probabilistic characteristics of these time series and then utilizes the Bayesian approach to make inferences about their parameters. This is done by taking the prior information and, via Bayes theorem, implementing Bayesian inferences of estimation, testing hypotheses, and prediction. The methods are demonstrated using both R and WinBUGS. The R package is primarily used to generate observations from a given time series model, while the WinBUGS package allows one to perform a posterior analysis that provides a way to determine the characteristics of the posterior distribution of the unknown parameters. Features: Presents a comprehensive introduction to the Bayesian analysis of time series. Gives many examples over a wide variety of fields including biology, agriculture, business, economics, sociology, and astronomy. Contains numerous exercises at the end of each chapter, many of which use R and WinBUGS. Can be used in graduate courses in statistics and biostatistics, but is also appropriate for researchers, practitioners and consulting statisticians. About the author: Lyle D. Broemeling, Ph.D., is Director of Broemeling and Associates Inc., and is a consulting biostatistician. He has been involved with academic health science centers for about 20 years and has taught and been a consultant at the University of Texas Medical Branch in Galveston, The University of Texas MD Anderson Cancer Center and the University of Texas School of Public Health. His main interest is in developing Bayesian methods for use in medical and biological problems and in authoring textbooks in statistics. His previous books for Chapman & Hall/CRC include Bayesian Biostatistics and Diagnostic Medicine, and Bayesian Methods for Agreement.
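To give a flavour of this workflow, the sketch below stays entirely in R (it assumes a flat prior and a known unit innovation variance, and does not reproduce any WinBUGS model from the book): it simulates an AR(1) series and summarises the resulting normal posterior of the autoregressive coefficient.
set.seed(42)
n   <- 200
phi <- 0.6
y   <- as.numeric(arima.sim(list(ar = phi), n))   # generate observations from the model
y_t   <- y[-1]                                    # y_2, ..., y_n
y_lag <- y[-n]                                    # y_1, ..., y_{n-1}
post_mean <- sum(y_lag * y_t) / sum(y_lag^2)      # posterior mean (= least squares estimate)
post_sd   <- sqrt(1 / sum(y_lag^2))               # posterior sd under sigma^2 = 1
draws <- rnorm(10000, post_mean, post_sd)         # draws from the posterior of phi
quantile(draws, c(0.025, 0.5, 0.975))             # posterior median and 95% interval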
This is a collection of papers by leading theorist Robert A. Pollak - four of them previously unpublished - exploring the theory of the cost of living index. The unifying theme of these papers is that, when suitably elaborated, the theory of the cost of living index provides principled answers to many of the practical problems that arise in constructing consumer price indexes. In addition to Pollak's classic paper The Theory of the Cost of Living Index, the volume includes papers on subindexes, the intertemporal cost of living index, welfare comparisons and equivalence scales, the social cost of living index, the treatment of 'quality', and consumer durables in the cost of living index.
Do economics and statistics succeed in explaining human social behaviour? To answer this question, Leland Gerson Neuberg studies some pioneering controlled social experiments. Starting in the late 1960s, economists and statisticians sought to improve social policy formation with random assignment experiments such as those that provided income guarantees in the form of a negative income tax. This book explores anomalies in the conceptual basis of such experiments and in the foundations of statistics and economics more generally. Scientific inquiry always faces certain philosophical problems. Controlled experiments of human social behaviour, however, cannot avoid some methodological difficulties not evident in physical science experiments. Drawing upon several examples, the author argues that methodological anomalies prevent microeconomics and statistics from explaining human social behaviour as coherently as the physical sciences explain nature. He concludes that controlled social experiments are a frequently overrated tool for social policy improvement.
The methodological needs of environmental studies are unique in the breadth of research questions that can be posed, calling for a textbook that covers a broad swath of approaches to conducting research with potentially many different kinds of evidence. Fully updated to address new developments such as the effects of the internet, recent trends in the use of computers, remote sensing, and large data sets, this new edition of Research Methods for Environmental Studies is written specifically for social science-based research into the environment. This revised edition contains new chapters on coding, focus groups, and an extended treatment of hypothesis testing. The textbook covers the best-practice research methods most used to study the environment and its connections to societal and economic activities and objectives. Over five key parts, Kanazawa introduces quantitative and qualitative approaches, mixed methods, and the special requirements of interdisciplinary research, emphasizing that methodological practice should be tailored to the specific needs of the project. Within these parts, detailed coverage is provided on key topics including the identification of a research project, hypothesis testing, spatial analysis, the case study method, ethnographic approaches, discourse analysis, mixed methods, survey and interview techniques, focus groups, and ethical issues in environmental research. Drawing on a variety of extended and updated examples to encourage problem-based learning and fully addressing the challenges associated with interdisciplinary investigation, this book will be an essential resource for students embarking on courses exploring research methods in environmental studies.
Learn by doing with this user-friendly introduction to time series data analysis in R. This book explores the intricacies of managing and cleaning time series data of different sizes, scales and granularity, data preparation for analysis and visualization, and different approaches to classical and machine learning time series modeling and forecasting. A range of pedagogical features support students, including end-of-chapter exercises, problems, quizzes and case studies. The case studies are designed to stretch the learner, introducing larger data sets, enhanced data management skills, and R packages and functions appropriate for real-world data analysis. On top of providing commented R programs and data sets, the book's companion website offers extra case studies, lecture slides, videos and exercise solutions. Accessible to those with a basic background in statistics and probability, this is an ideal hands-on text for undergraduate and graduate students, as well as researchers in data-rich disciplines.
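As a small, self-contained illustration of the sort of workflow such a text covers (using R's built-in AirPassengers data, an assumption rather than one of the book's case studies), a classical visualise-model-forecast loop might look like this:
data(AirPassengers)
plot(AirPassengers)                         # inspect trend and seasonality
fit  <- HoltWinters(log(AirPassengers))     # log transform stabilises the seasonal swing
pred <- predict(fit, n.ahead = 24)          # two-year-ahead forecast
ts.plot(log(AirPassengers), pred, lty = c(1, 2))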
This book aims to help the reader better understand the importance of data analysis in project management. Moreover, it provides guidance by showing tools, methods, techniques and lessons learned on how to better utilize the data gathered from the projects. First and foremost, insight into the bridge between data analytics and project management aids practitioners looking for ways to maximize the practical value of data procured. The book equips organizations with the know-how necessary to adapt to a changing workplace dynamic through key lessons learned from past ventures. The book's integrated approach to investigating both fields enhances the value of research findings.
The quantitative modeling of complex systems of interacting risks is a fairly recent development in the financial and insurance industries. Over the past decades, there has been tremendous innovation and development in the actuarial field. In addition to undertaking mortality and longevity risks in traditional life and annuity products, insurers face unprecedented financial risks since the introduction of equity-linked insurance in the 1960s. As the industry moves into the new territory of managing many intertwined financial and insurance risks, non-traditional problems and challenges arise, presenting great opportunities for technology development. Today's computational power and technology make it possible for the life insurance industry to develop highly sophisticated models, which were impossible just a decade ago. Nonetheless, as more industrial practices and regulations move towards dependence on stochastic models, the demand for computational power continues to grow. While the industry continues to rely heavily on hardware innovations, trying to make brute force methods faster and more palatable, we are approaching a crossroads about how to proceed. An Introduction to Computational Risk Management of Equity-Linked Insurance provides a resource for students and entry-level professionals not only to understand the fundamentals of industrial modeling practice, but also to gain a glimpse of software methodologies for modeling and computational efficiency. Features: Provides a comprehensive and self-contained introduction to quantitative risk management of equity-linked insurance with exercises and programming samples. Includes a collection of mathematical formulations of risk management problems presenting opportunities and challenges to applied mathematicians. Summarizes state-of-the-art computational techniques for risk management professionals. Bridges the gap between the latest developments in finance and actuarial literature and the practice of risk management for investment-combined life insurance. Gives a comprehensive review of both Monte Carlo simulation methods and non-simulation numerical methods. Runhuan Feng is an Associate Professor of Mathematics and the Director of Actuarial Science at the University of Illinois at Urbana-Champaign. He is a Fellow of the Society of Actuaries and a Chartered Enterprise Risk Analyst. He is a Helen Corley Petit Professorial Scholar and the State Farm Companies Foundation Scholar in Actuarial Science. Runhuan received a Ph.D. degree in Actuarial Science from the University of Waterloo, Canada. Prior to joining Illinois, he held a tenure-track position at the University of Wisconsin-Milwaukee, where he was named a Research Fellow. Runhuan has received numerous grants and research contracts from the Actuarial Foundation and the Society of Actuaries. He has published a series of papers in top-tier actuarial and applied probability journals on stochastic analytic approaches in risk theory and quantitative risk management of equity-linked insurance. Over recent years, he has dedicated his efforts to developing computational methods for managing market innovations in areas of investment-combined insurance and retirement planning.
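As a toy illustration of the Monte Carlo side of such computations (the parameter values and product design are assumptions for the sketch, not examples from the book), the following R code estimates the risk-neutral cost of a simple guaranteed minimum maturity benefit on an equity-linked policy whose fund follows geometric Brownian motion:
set.seed(2024)
n_paths <- 100000    # simulated scenarios
S0      <- 100       # initial fund value
G       <- 100       # maturity guarantee
r       <- 0.03      # risk-free rate
sigma   <- 0.20      # fund volatility
term    <- 10        # policy term in years
# Terminal fund values under geometric Brownian motion (one step suffices here)
ST <- S0 * exp((r - 0.5 * sigma^2) * term + sigma * sqrt(term) * rnorm(n_paths))
payoff <- pmax(G - ST, 0)                       # guarantee pays the shortfall at maturity
cost   <- exp(-r * term) * mean(payoff)         # discounted expected cost
se     <- exp(-r * term) * sd(payoff) / sqrt(n_paths)
c(estimate = cost, std_error = se)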
Drawing on a lifetime of distinguished work in economic research and policy-making, Andrew Kamarck details how his profession can more usefully analyze and solve economic problems by changing its basic approach to research. Kamarck contends that most economists today strive for a mathematical precision in their work that neither stems from nor leads to an accurate view of economic reality. He develops elegant critiques of key areas of economic analysis based on appreciation of scientific method and knowledge of the limitations of economic data. Concepts such as employment, market, and money supply must be seen as loose, not exact. Measurement of national income becomes highly problematic when taking into account such factors as the "underground economy" and currency differences. World trade analysis is based on inconsistent and often inaccurate measurements. Subtle realities of the individual, social, and political worlds render largely ineffective both large-scale macroeconomic models and micro models of the consumer and the firm. Fashionable cost-benefit analysis must be recognized as inherently imprecise. Capital and investment in developing countries tend to be measured in easy but irrelevant ways. Kamarck concludes with a call for economists to involve themselves in data collection, to insist on more accurate and reliable data sources, to do analysis within the context of experience, and to take a realistic, incremental approach to policy-making. Kamarck's concerns are shared by many economists, and his eloquent presentation will be essential reading for his colleagues and for those who make use of economic research.
This book provides an introduction to R programming and a summary of financial mathematics. It is not always easy for graduate students to grasp an overview of the theory of finance in an abstract form. For newcomers to the finance industry, it is not always obvious how to apply the abstract theory to the real financial data they encounter. Introducing finance theory alongside numerical applications makes it easier to grasp the subject. Popular programming languages like C++, which are used in many financial applications, are meant for general-purpose requirements. They are good for implementing large-scale distributed systems for simultaneously valuing many financial contracts, but they are not as suitable for small-scale ad hoc analysis or exploration of financial data. The R programming language overcomes this problem. R can be used for numerical applications including statistical analysis, time series analysis, numerical methods for pricing financial contracts, and more. This book provides an overview of financial mathematics with numerous examples numerically illustrated using the R programming language.
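For instance, a closed-form pricing formula is exactly the kind of thing that is easy to explore interactively in R; the short function below (a generic Black-Scholes sketch, assumed rather than drawn from the book) prices a European call and can be re-evaluated at the prompt for any parameter combination:
# Black-Scholes price of a European call; mat is time to maturity in years
black_scholes_call <- function(S, K, r, sigma, mat) {
  d1 <- (log(S / K) + (r + 0.5 * sigma^2) * mat) / (sigma * sqrt(mat))
  d2 <- d1 - sigma * sqrt(mat)
  S * pnorm(d1) - K * exp(-r * mat) * pnorm(d2)
}
black_scholes_call(S = 100, K = 100, r = 0.02, sigma = 0.25, mat = 1)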
This book covers the design and analysis of experiments for continuous, normally distributed responses; for continuous responses based on rank data; for categorical (in particular binary) responses based on log-linear models; and for categorical correlated responses based on marginal models and symmetric regression models.
The Who, What, and Where of America is designed to provide a sampling of key demographic information. It covers the United States, every state, each metropolitan statistical area, and all the counties and cities with a population of 20,000 or more. Who: age, race and ethnicity, and household structure. What: education, employment, and income. Where: migration, housing, and transportation. Each part is preceded by highlights and ranking tables that show how areas diverge from the national norm. These research aids are invaluable for understanding data from the ACS and for highlighting what it tells us about who we are, what we do, and where we live. Each topic is divided into four tables revealing the results of the data collected from different types of geographic areas in the United States, generally with populations greater than 20,000: Table A (states), Table B (counties), Table C (metropolitan areas), and Table D (cities). In this edition, you will find social and economic estimates on the ways American communities are changing with regard to the following: age and race; health care coverage; marital history; educational attainment; income and occupation; commute time to work; employment status; home values and monthly costs; veteran status; and size of home or rental unit. This title is the latest in the County and City Extra Series of publications from Bernan Press. Other titles include County and City Extra, County and City Extra: Special Decennial Census Edition, and Places, Towns, and Townships.
A fair question to ask of an advocate of subjective Bayesianism (which the author is) is "how would you model uncertainty?" In this book, the author writes about how he has done it using real problems from the past, and offers additional comments about the context in which he was working.