This book provides an introduction to the use of statistical concepts and methods to model and analyze financial data. The ten chapters of the book fall naturally into three sections. Chapters 1 to 3 cover some basic concepts of finance, focusing on the properties of returns on an asset. Chapters 4 through 6 cover aspects of portfolio theory and the methods of estimation needed to implement that theory. The remainder of the book, Chapters 7 through 10, discusses several models for financial data, along with the implications of those models for portfolio theory and for understanding the properties of return data. The audience for the book is students majoring in Statistics and Economics as well as in quantitative fields such as Mathematics and Engineering. Readers are assumed to have some background in statistical methods along with courses in multivariate calculus and linear algebra.
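For a sense of the kind of calculations involved (a minimal sketch with made-up prices, not material from the book), the snippet below computes simple and log returns from a short price history and plugs the estimated mean vector and covariance matrix into the textbook global minimum-variance portfolio formula.

```python
import numpy as np

# Hypothetical daily prices for two assets (rows: days, columns: assets).
prices = np.array([[100.0, 50.0],
                   [102.0, 49.5],
                   [101.0, 51.0],
                   [105.0, 52.5]])

simple_returns = prices[1:] / prices[:-1] - 1.0   # R_t = P_t / P_{t-1} - 1
log_returns = np.diff(np.log(prices), axis=0)     # r_t = ln(P_t / P_{t-1})

mu = log_returns.mean(axis=0)                     # plug-in estimate of mean returns
Sigma = np.cov(log_returns, rowvar=False)         # plug-in estimate of the covariance matrix

# Global minimum-variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
ones = np.ones(len(mu))
w = np.linalg.solve(Sigma, ones)
w /= w.sum()
print("estimated mean returns:", mu)
print("minimum-variance weights:", w)
```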
Most governments in today's market economies spend significant sums of money on labour market programmes. The declared aims of these programmes are to increase the re-employment chances of the unemployed. This book investigates which active labour market programmes in Poland offer value for money and which do not. To this end, modern statistical methods are applied to both macro- and microeconomic data. It is shown that training programmes increase, whereas job subsidies and public works decrease, the re-employment opportunities of the unemployed. In general, all active labour market policy effects are larger in absolute size for men than for women. By surveying previous studies in the field and outlining the major statistical approaches that are employed in the evaluation literature, the book can be of help to any student interested in programme evaluation irrespective of the particular programme or country concerned.
Selected papers presented at the 22nd Annual Conference of the German Classification Society GfKl (Gesellschaft für Klassifikation), held at the University of Dresden in 1998, are contained in this volume of "Studies in Classification, Data Analysis, and Knowledge Organization". One aim of GfKl was to provide a platform for a discussion of results concerning a challenge of growing importance that could be labeled "Classification in the Information Age" and to support interdisciplinary activities from research and applications that incorporate directions of this kind. As could be expected, the largest share of papers is closely related to classification and, in the broadest sense, data analysis and statistics. Additionally, besides contributions dealing with questions arising from the usage of new media and the internet, applications in, e.g., (in alphabetical order) archeology, bioinformatics, economics, environment, and health have been reported. As always, an unambiguous assignment of results to single topics is sometimes difficult; thus, from more than 130 presentations offered within the scientific program, 65 papers are grouped into the following chapters and subchapters:
* Plenary and Semi-Plenary Presentations - Classification and Information; Finance and Risk
* Classification and Related Aspects of Data Analysis and Learning - Classification, Data Analysis, and Statistics; Conceptual Analysis and Learning
* Usage of New Media and the Internet - Information Systems, Multimedia, and WWW; Navigation and Classification on the Internet and Virtual Universities
* Applications in Economics
Summarizes the latest developments and techniques in the field and highlights areas such as sample surveys, nonparametric analysis, hypothesis testing, time series analysis, Bayesian inference, and distribution theory for current applications in statistics, economics, medicine, biology, engineering, sociology, psychology, and information technology. Containing more than 800 contemporary references to facilitate further study, the Handbook of Applied Econometrics and Statistical Inference is an in-depth guide for applied statisticians, econometricians, economists, sociologists, psychologists, data analysts, biometricians, medical researchers, and upper-level undergraduate and graduate-level students in these disciplines.
The chapter starts with a positioning of this dissertation in the marketing discipline. It then provides a comparison of the two most popular methods for studying consumer preferences/choices, namely conjoint analysis and discrete choice experiments. Chapter 1 continues with a description of the context of discrete choice experiments. Subsequently, the research problems and the objectives of this dissertation are discussed. The chapter concludes with an outline of the organization of this dissertation. 1.1 Positioning of the Dissertation During this century, increasing globalization and technological progress have forced companies to undergo rapid and dramatic changes - for some a threat, for others a source of new opportunities. Companies have to survive in a Darwinian marketplace where the principle of natural selection applies. Marketplace success goes to those companies that are able to produce marketable value, i.e., products and services that others are willing to purchase (Kotler 1997). Every company must be engaged in new-product development to create the new products customers want, because competitors will do their best to supply them. Besides offering competitive advantages, new products usually lead to sales growth and stability. As household incomes increase and consumers become more selective, firms need to know how consumers respond to different features and appeals. Successful products and services begin with a thorough understanding of consumer needs and wants. Stated otherwise, companies need to know about consumer preferences to manufacture tailor-made products that consumers are willing to buy.
Like the preceding volumes, which met with a lively response, the present volume collects contributions that stress methodology or successful industrial applications. The papers are classified under four main headings: sampling inspection, process quality control, data analysis and process capability studies, and finally experimental design.
Assuming no prior knowledge or technical skills, Getting Started with Business Analytics: Insightful Decision-Making explores the contents, capabilities, and applications of business analytics. It bridges the worlds of business and statistics and describes business analytics from a non-commercial standpoint. The authors demystify the main concepts and terminologies and give many examples of real-world applications. The first part of the book introduces business data and recent technologies that have promoted fact-based decision-making. The authors look at how business intelligence differs from business analytics. They also discuss the main components of a business analytics application and the various requirements for integrating business with analytics. The second part presents the technologies underlying business analytics: data mining and data analytics. The book helps you understand the key concepts and ideas behind data mining and shows how data mining has expanded into data analytics when considering new types of data such as network and text data. The third part explores business analytics in depth, covering customer, social, and operational analytics. Each chapter in this part incorporates hands-on projects based on publicly available data. Helping you make sound decisions based on hard data, this self-contained guide provides an integrated framework for data mining in business analytics. It takes you on a journey through this data-rich world, showing you how to deploy business analytics solutions in your organization.
In the first part of this book bargaining experiments with different economic and ethical frames are investigated. The distributive principles and norms the subjects apply and their justifications for these principles are evaluated. The bargaining processes and the resulting agreements are analyzed. In the second part different bargaining theories are presented and the corresponding solutions are axiomatically characterized. A bargaining concept with goals that depend on economic and ethical features of the bargaining situation is introduced. Observations from the experimental data lead to the ideas for the axiomatic characterization of a bargaining solution with goals.
In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators - that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises the eight reports which describe the use of indirect estimators, based on case studies from a variety of federal programs. Many researchers will find that this book provides a valuable survey of how indirect estimators are used in practice and of some of the pitfalls of these methods.
In order to obtain many of the classical results in the theory of statistical estimation, it is usual to impose regularity conditions on the distributions under consideration. In small sample and large sample theories of estimation there are well established sets of regularity conditions, and it is worthwhile to examine what may follow if any one of these regularity conditions fails to hold. "Non-regular estimation" literally means the theory of statistical estimation when some or other of the regularity conditions fail to hold. In this monograph, the authors present a systematic study of the meaning and implications of regularity conditions, and show how the relaxation of such conditions can often lead to surprising conclusions. Their emphasis is on small sample results and on showing how pathological examples may be considered in this broader framework.
High-Performance Computing (HPC) delivers higher computational performance to solve problems in science, engineering and finance. There are various HPC resources available for different needs, ranging from cloud computing - which can be used without much expertise or expense - to more tailored hardware, such as Field-Programmable Gate Arrays (FPGAs) or D-Wave's quantum computer systems. High-Performance Computing in Finance is the first book that provides a state-of-the-art introduction to HPC for finance, capturing both academically and practically relevant problems.
This pioneering work gives an insight into the daily work of the national statistical institutions of the old command economies in their endeavour to meet the challenge of transition to a market-oriented system of labour statistics variables and indicators. Distinct from any other publication with statistics on Central and East European countries and the former Soviet Union, it reveals why and how new statistics are being collected and what still has to be done in order to make their national data compatible with the rest of the world. The authors discuss the problems involved in the measurement of employment (in both the state and the private sectors) and unemployment, the collection of reliable wage statistics, and the development of new economic classifications in line with those internationally recognized and adopted. They also make a number of recommendations on how to adapt ILO international standards in order to meet the above needs.
An introduction to the fundamentals of statistics and their computer-aided application, aimed specifically at economists and social scientists. Contents: data entry and data modification; frequency distributions and descriptive statistics; exploratory data analysis; cross-tabulations and measures of association; statistical tests; correlation measures; scatter plots; regression analysis; trend analysis and curve fitting; time series analysis; factor analysis; cluster analysis; discriminant analysis.
This is an excerpt from the 4-volume dictionary of economics, a reference book which aims to define the subject of economics today. 1300 subject entries in the complete work cover the broad themes of economic theory. This extract concentrates on time series and statistics.
A new procedure for the maximum-likelihood estimation of dynamic econometric models with errors in both endogenous and exogenous variables is presented in this monograph. A complete analytical development of the expressions used in problems of estimation and verification of models in state-space form is presented. The results are useful in relation not only to the problem of errors in variables but also to any other possible econometric application of state-space formulations.
In each chapter of this volume some specific topics in the econometric analysis of time series data are studied. All topics have in common statistical inference in linear models with correlated disturbances. The main aim of the study is to give a survey of new and old estimation techniques for regression models with disturbances that follow an autoregressive-moving average process. In the final chapter, several test strategies for discriminating between various types of autocorrelation are also discussed. In nearly all chapters it is demonstrated how useful the simple geometric interpretation of the well-known ordinary least squares (OLS) method is. By applying these geometric concepts to linear spaces spanned by scalar stochastic variables, it emerges that well-known as well as new results can be derived in a simple geometric manner, sometimes without the limiting restrictions of the usual derivations, e.g., the conditional normal distribution, the Kalman filter equations and the Cramer-Rao inequality. The outline of the book is as follows. In Chapter 2 attention is paid to a generalization of the well-known first order autocorrelation transformation of a linear regression model with disturbances that follow a first order Markov scheme. First, the appropriate lower triangular transformation matrix is derived for the case that the disturbances follow a moving average process of order q (MA(q)). It turns out that the calculations can be carried out either analytically or in a recursive manner.
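To make the transformation concrete, here is a minimal sketch (not taken from the book; data and parameter values are invented) of whitening a regression whose disturbances follow an MA(q) process: the covariance matrix of the disturbances is built from the MA autocovariances, and its lower triangular Cholesky factor supplies the lower triangular transformation, after which OLS on the transformed data is numerically equivalent to GLS.

```python
import numpy as np
from scipy.linalg import toeplitz, cholesky, solve_triangular

def ma_q_covariance(theta, n, sigma2=1.0):
    """Toeplitz covariance matrix of MA(q) disturbances
    u_t = e_t + theta_1 e_{t-1} + ... + theta_q e_{t-q}."""
    psi = np.r_[1.0, np.asarray(theta, dtype=float)]         # MA polynomial coefficients
    gamma = np.zeros(n)
    for k in range(len(psi)):                                 # autocovariances up to lag q
        gamma[k] = sigma2 * np.dot(psi[k:], psi[:len(psi) - k])
    return toeplitz(gamma)

def gls_via_lower_triangular(y, X, theta):
    """Whiten y and X with the inverse of the lower Cholesky factor of Omega,
    then run OLS on the transformed data (equivalent to GLS)."""
    Omega = ma_q_covariance(theta, len(y))
    L = cholesky(Omega, lower=True)                           # Omega = L L'
    y_star = solve_triangular(L, y, lower=True)               # L^{-1} y has scalar covariance
    X_star = solve_triangular(L, X, lower=True)
    beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta

# Toy check with MA(1) disturbances and true coefficients (1, 2).
rng = np.random.default_rng(0)
n, theta = 200, [0.6]
X = np.column_stack([np.ones(n), rng.normal(size=n)])
e = rng.normal(size=n + 1)
u = e[1:] + theta[0] * e[:-1]
y = X @ np.array([1.0, 2.0]) + u
print(gls_via_lower_triangular(y, X, theta))                  # close to [1.0, 2.0]
```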
This is a collection of papers by leading theorist Robert A. Pollak - four of them previously unpublished - exploring the theory of the cost of living index. The unifying theme of these papers is that, when suitably elaborated, the theory of the cost of living index provides principled answers to many of the practical problems that arise in constructing consumer price indexes. In addition to Pollak's classic paper The Theory of the Cost of Living Index, the volume includes papers on subindexes, the intertemporal cost of living index, welfare comparisons and equivalence scales, the social cost of living index, the treatment of 'quality', and consumer durables in the cost of living index.
MAKING HARD DECISIONS WITH DECISIONTOOLS (R) is a new edition of Bob Clemen's best-selling title, MAKING HARD DECISIONS. This straightforward book teaches the fundamental ideas of decision analysis, without an overly technical explanation of the mathematics used in decision analysis. This new version incorporates and implements the powerful DecisionTools (R) software by Palisade Corporation, the world's leading toolkit for risk and decision analysis. At the end of each chapter, topics are illustrated with step-by-step instructions for DecisionTools (R). This new version makes the text more useful and relevant to students in business and engineering.
In many branches of science relevant observations are taken sequentially over time. Bayesian Analysis of Time Series discusses how to use models that explain the probabilistic characteristics of these time series and then utilizes the Bayesian approach to make inferences about their parameters. This is done by taking the prior information and, via Bayes theorem, implementing Bayesian inferences of estimation, testing hypotheses, and prediction. The methods are demonstrated using both R and WinBUGS. The R package is primarily used to generate observations from a given time series model, while the WinBUGS package allows one to perform a posterior analysis that provides a way to determine the characteristics of the posterior distribution of the unknown parameters. Features:
* Presents a comprehensive introduction to the Bayesian analysis of time series.
* Gives many examples over a wide variety of fields including biology, agriculture, business, economics, sociology, and astronomy.
* Contains numerous exercises at the end of each chapter, many of which use R and WinBUGS.
* Can be used in graduate courses in statistics and biostatistics, but is also appropriate for researchers, practitioners and consulting statisticians.
About the author: Lyle D. Broemeling, Ph.D., is Director of Broemeling and Associates Inc., and is a consulting biostatistician. He has been involved with academic health science centers for about 20 years and has taught and been a consultant at the University of Texas Medical Branch in Galveston, The University of Texas MD Anderson Cancer Center and the University of Texas School of Public Health. His main interest is in developing Bayesian methods for use in medical and biological problems and in authoring textbooks in statistics. His previous books for Chapman & Hall/CRC include Bayesian Biostatistics and Diagnostic Medicine, and Bayesian Methods for Agreement.
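As a rough illustration of the simulate-then-infer workflow the blurb describes (sketched in Python rather than the book's R and WinBUGS code; all parameter values are made up), the snippet below generates an AR(1) series and computes the exact posterior of the autoregressive coefficient under a flat prior with known noise variance.

```python
import numpy as np

# Simulate y_t = phi * y_{t-1} + e_t with e_t ~ N(0, sigma^2), then compute the
# posterior of phi under a flat prior and known sigma (conditioning on y_0, the
# posterior is normal and centered at the least-squares estimate).
rng = np.random.default_rng(42)
phi_true, sigma, n = 0.7, 1.0, 500

y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal(scale=sigma)

x, z = y[:-1], y[1:]                        # regressors (lagged values) and responses
post_mean = np.dot(x, z) / np.dot(x, x)     # posterior mean of phi
post_sd = sigma / np.sqrt(np.dot(x, x))     # posterior standard deviation of phi
print(f"posterior for phi: N({post_mean:.3f}, {post_sd:.4f}^2)")
```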
Drawing on a lifetime of distinguished work in economic research and policy-making, Andrew Kamarck details how his profession can more usefully analyze and solve economic problems by changing its basic approach to research. Kamarck contends that most economists today strive for a mathematical precision in their work that neither stems from nor leads to an accurate view of economic reality. He develops elegant critiques of key areas of economic analysis based on appreciation of scientific method and knowledge of the limitations of economic data. Concepts such as employment, market, and money supply must be seen as loose, not exact. Measurement of national income becomes highly problematic when taking into account such factors as the "underground economy" and currency differences. World trade analysis is based on inconsistent and often inaccurate measurements. Subtle realities of the individual, social, and political worlds render largely ineffective both large-scale macroeconomic models and micro models of the consumer and the firm. Fashionable cost-benefit analysis must be recognized as inherently imprecise. Capital and investment in developing countries tend to be measured in easy but irrelevant ways. Kamarck concludes with a call for economists to involve themselves in data collection, to insist on more accurate and reliable data sources, to do analysis within the context of experience, and to take a realistic, incremental approach to policy-making. Kamarck's concerns are shared by many economists, and his eloquent presentation will be essential reading for his colleagues and for those who make use of economic research.
This book aims to help the reader better understand the importance of data analysis in project management. Moreover, it provides guidance by showing tools, methods, techniques and lessons learned on how to better utilize the data gathered from the projects. First and foremost, insight into the bridge between data analytics and project management aids practitioners looking for ways to maximize the practical value of data procured. The book equips organizations with the know-how necessary to adapt to a changing workplace dynamic through key lessons learned from past ventures. The book's integrated approach to investigating both fields enhances the value of research findings.
This book covers the design and analysis of experiments for continuous, normally distributed responses; for continuous responses based on rank data; for categorical (in particular binary) responses based on log-linear models; and for categorical correlated responses based on marginal models and symmetric regression models.
The quantitative modeling of complex systems of interacting risks is a fairly recent development in the financial and insurance industries. Over the past decades, there has been tremendous innovation and development in the actuarial field. In addition to undertaking mortality and longevity risks in traditional life and annuity products, insurers have faced unprecedented financial risks since the introduction of equity-linked insurance in the 1960s. As the industry moves into the new territory of managing many intertwined financial and insurance risks, non-traditional problems and challenges arise, presenting great opportunities for technology development. Today's computational power and technology make it possible for the life insurance industry to develop highly sophisticated models, which were impossible just a decade ago. Nonetheless, as more industrial practices and regulations move towards dependence on stochastic models, the demand for computational power continues to grow. While the industry continues to rely heavily on hardware innovations, trying to make brute force methods faster and more palatable, we are approaching a crossroads about how to proceed. An Introduction to Computational Risk Management of Equity-Linked Insurance provides a resource for students and entry-level professionals not only to understand the fundamentals of industrial modeling practice, but also to get a glimpse of software methodologies for modeling and computational efficiency. Features:
* Provides a comprehensive and self-contained introduction to quantitative risk management of equity-linked insurance with exercises and programming samples
* Includes a collection of mathematical formulations of risk management problems presenting opportunities and challenges to applied mathematicians
* Summarizes state-of-the-art computational techniques for risk management professionals
* Bridges the gap between the latest developments in finance and actuarial literature and the practice of risk management for investment-combined life insurance
* Gives a comprehensive review of both Monte Carlo simulation methods and non-simulation numerical methods
Runhuan Feng is an Associate Professor of Mathematics and the Director of Actuarial Science at the University of Illinois at Urbana-Champaign. He is a Fellow of the Society of Actuaries and a Chartered Enterprise Risk Analyst. He is a Helen Corley Petit Professorial Scholar and the State Farm Companies Foundation Scholar in Actuarial Science. Runhuan received a Ph.D. degree in Actuarial Science from the University of Waterloo, Canada. Prior to joining Illinois, he held a tenure-track position at the University of Wisconsin-Milwaukee, where he was named a Research Fellow. Runhuan has received numerous grants and research contracts from the Actuarial Foundation and the Society of Actuaries. He has published a series of papers in top-tier actuarial and applied probability journals on stochastic analytic approaches in risk theory and quantitative risk management of equity-linked insurance. Over recent years, he has dedicated his efforts to developing computational methods for managing market innovations in areas of investment-combined insurance and retirement planning.
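As a toy example of the Monte Carlo simulation methods the book reviews (a sketch with invented parameters, not code from the book), the snippet below estimates the expected present value of a guaranteed minimum maturity benefit on an equity-linked account whose value follows geometric Brownian motion.

```python
import numpy as np

# Guaranteed minimum maturity benefit (GMMB): the insurer pays max(G - F_T, 0)
# at maturity T if the account value F_T falls below the guarantee G.
rng = np.random.default_rng(7)
F0, G, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 10.0   # account value, guarantee, rate, volatility, horizon
n_paths = 100_000

Z = rng.standard_normal(n_paths)
F_T = F0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)  # terminal account values
payoff = np.maximum(G - F_T, 0.0)

epv = np.exp(-r * T) * payoff.mean()                                    # discounted expected payoff
se = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)             # Monte Carlo standard error
print(f"GMMB expected present value: {epv:.3f} (s.e. {se:.3f})")
```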
Technical Analysis of Stock Trends helps investors make smart, profitable trading decisions by providing proven long- and short-term stock trend analysis. It gets right to the heart of effective technical trading concepts, explaining technical theory such as The Dow Theory, reversal patterns, consolidation formations, trends and channels, technical analysis of commodity charts, and advances in investment technology. It also includes a comprehensive guide to trading tactics covering long and short goals, stock selection, charting, low and high risk, trend recognition tools, balancing and diversifying the stock portfolio, application of capital, and risk management. This updated new edition includes patterns and modifiable charts that are tighter and more illustrative. Expanded material is also included on Pragmatic Portfolio Theory as a more elegant alternative to Modern Portfolio Theory; and a newer, simpler, and more powerful alternative to Dow Theory is presented. This book is the perfect introduction, giving you the knowledge and wisdom to craft long-term success.