Books > Business & Economics > Economics > Econometrics > Economic statistics
Economic and financial time series feature important seasonal fluctuations. Despite their regular and predictable patterns over the year, month or week, they pose many challenges to economists and econometricians. This book provides a thorough review of the recent developments in the econometric analysis of seasonal time series. It is designed for an audience of specialists in economic time series analysis and advanced graduate students. It is the most comprehensive and balanced treatment of the subject since the mid-1980s.
Statistical Programming in SAS, Second Edition provides a foundation for programming to implement statistical solutions using SAS, a system that has been used to solve data-analytic problems for more than 40 years. The author includes motivating examples to inspire readers to generate programming solutions. Upper-level undergraduates, beginning graduate students, and professionals involved in generating programming solutions for data-analytic problems will benefit from this book. The ideal reader has some background in regression modeling and introductory experience with computer programming. The coverage of statistical programming in the second edition includes:
* Getting data into the SAS system, engineering new features, and formatting variables
* Writing readable and well-documented code
* Structuring, implementing, and debugging well-documented programs
* Creating solutions to novel problems
* Combining data sources, extracting parts of data sets, and reshaping data sets as needed for other analyses
* Generating general solutions using macros
* Customizing output
* Producing insight-inspiring data visualizations
* Parsing, processing, and analyzing text
* Programming with matrices and connecting SAS with R
The topics covered are part of both the base and certification exams.
This book provides an introduction to the use of statistical concepts and methods to model and analyze financial data. The ten chapters of the book fall naturally into three sections. Chapters 1 to 3 cover some basic concepts of finance, focusing on the properties of returns on an asset. Chapters 4 through 6 cover aspects of portfolio theory and the methods of estimation needed to implement that theory. The remainder of the book, Chapters 7 through 10, discusses several models for financial data, along with the implications of those models for portfolio theory and for understanding the properties of return data. The audience for the book is students majoring in Statistics and Economics as well as in quantitative fields such as Mathematics and Engineering. Readers are assumed to have some background in statistical methods along with courses in multivariate calculus and linear algebra.
Originally published in 1987. This collection of original papers deals with various issues of specification in the context of the linear statistical model. The volume honours the early econometric work of Donald Cochrane, late Dean of Economics and Politics at Monash University in Australia. The chapters focus on problems associated with autocorrelation of the error term in the linear regression model and include appraisals of early work on this topic by Cochrane and Orcutt. The book includes an extensive survey of autocorrelation tests; some exact finite-sample tests; and some issues in preliminary test estimation. A wide range of other specification issues is discussed, including the implications of random regressors for Bayesian prediction; modelling with joint conditional probability functions; and results from duality theory. There is a major survey chapter dealing with specification tests for non-nested models, and some of the applications discussed by the contributors deal with the British National Accounts and with Australian financial and housing markets.
Statistics for Business is meant as a textbook for students in business, computer science, bioengineering, environmental technology, and mathematics. In recent years, business statistics has been used widely for decision making in business endeavours. The book emphasizes statistical applications, statistical model building, and manual solution methods. Special features: the text is written for self-study, and for most of the methods the required algorithm is clearly explained using flow charts. More than 200 solved problems are provided, and more than 175 end-of-chapter exercises with answers give teachers ample flexibility in adapting the textbook to their individual class plans. The textbook is meant for both beginners and advanced learners as a text in Statistics for Business or Applied Statistics for undergraduate and graduate students.
Given the huge amount of information on the internet and in practically every domain of knowledge that we face today, knowledge discovery calls for automation. The book deals with methods from classification and data analysis that respond effectively to this rapidly growing challenge. The interested reader will find new methodological insights as well as applications in economics, management science, finance, and marketing, and in pattern recognition, biology, health, and archaeology.
This book includes many of the papers presented at the 6th International workshop on Model Oriented Data Analysis held in June 2001. This series began in March 1987 with a meeting on the Wartburg near Eisenach (at that time in the GDR). The next four meetings were in 1990 (St Kyrik monastery, Bulgaria), 1992 (Petrodvorets, St Petersburg, Russia), 1995 (Spetses, Greece) and 1998 (Marseilles, France). Initially the main purpose of these workshops was to bring together leading scientists from 'Eastern' and 'Western' Europe for the exchange of ideas in theoretical and applied statistics, with special emphasis on experimental design. Now that the separation between East and West is much less rigid, this exchange has, in principle, become much easier. However, it is still important to provide opportunities for this interaction. MODA meetings are celebrated for their friendly atmosphere. Indeed, discussions between young and senior scientists at these meetings have resulted in several fruitful long-term collaborations. This intellectually stimulating atmosphere is achieved by limiting the number of participants to around eighty, by the choice of a location in which communal living is encouraged and, of course, through the careful scientific direction provided by the Programme Committee. It is a tradition of these meetings to provide low-cost accommodation, low fees and financial support for the travel of young and Eastern participants. This is only possible through the help of sponsors, and outside financial support was again important for the success of the meeting.
In the 1920s, Walter Shewhart visualized that the marriage of statistical methods and manufacturing processes would produce reliable and consistent quality products. Shewhart (1931) conceived the idea of statistical process control (SPC) and developed the well-known and appropriately named Shewhart control chart. However, from the 1930s to the 1990s, the literature on SPC schemes was "captured" by the Shewhart paradigm of normality, independence and homogeneous variance, when in fact the problems facing today's industries are far less consistent with these assumptions than those Shewhart faced in the 1930s. As a result of advances in machine and sensor technology, process data can often be collected on-line. In this situation, the process observations that result from data collection activities will frequently not be serially independent, but autocorrelated. Autocorrelation has a significant impact on a control chart: the process may appear not to exhibit a state of statistical control when, in fact, it is in control. As the prevalence of this type of data increases in industry (Hahn 1989), so does the need to monitor and control it. The literature has reflected this trend, and research in the area of SPC with autocorrelated data continues so that effective methods of handling correlated data are available. This type of data regularly occurs in the chemical and process industries, and is pervasive in computer-integrated manufacturing environments, clinical laboratory settings and in the majority of SPC applications across various manufacturing and service industries (Alwan 1991).
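The short Python sketch below illustrates the general distortion described here (it is not an example from the book): a standard Shewhart individuals chart, whose 3-sigma limits assume independent observations and estimate sigma from the average moving range, is applied to simulated in-control AR(1) data. With positive autocorrelation the moving-range estimate of sigma is too small, so far more points fall outside the limits than the nominal rate.

import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi, sigma=1.0):
    # In-control AR(1) process: x_t = phi * x_{t-1} + e_t, e_t ~ N(0, sigma^2)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
    return x

def shewhart_signal_rate(x):
    # Individuals chart: centre line at the mean, sigma estimated from the
    # average moving range divided by the constant d2 = 1.128, 3-sigma limits.
    sigma_hat = np.mean(np.abs(np.diff(x))) / 1.128
    return np.mean(np.abs(x - x.mean()) > 3 * sigma_hat)

iid = rng.normal(size=10000)    # independent data: roughly 0.3% of points signal
corr = ar1(10000, phi=0.8)      # autocorrelated but still in-control data

print(f"independent data : {shewhart_signal_rate(iid):.2%} of points outside limits")
print(f"AR(1), phi = 0.8 : {shewhart_signal_rate(corr):.2%} of points outside limits")

For phi = 0.8 the signal rate rises to well over ten percent even though the process never shifts, which is the false-alarm problem that motivates SPC methods designed for autocorrelated data.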
The most widely used statistical method in seasonal adjustment is without doubt that implemented in the X-11 Variant of the Census Method II Seasonal Adjustment Program. Developed at the US Bureau of the Census in the 1950s and 1960s, this computer program has undergone numerous modifications and improvements, leading especially to the X-11-ARIMA software packages in 1975 and 1988 and X-12-ARIMA, the first beta version of which is dated 1998. While these software packages integrate, to varying degrees, parametric methods, and especially the ARIMA models popularized by Box and Jenkins, they remain in essence very close to the initial X-11 method, and it is this "core" that Seasonal Adjustment with the X-11 Method focuses on. With a Preface by Allan Young, the authors document the seasonal adjustment method implemented in the X-11 based software. It will be an important reference for government agencies, macroeconomists, and other serious users of economic data. After some historical notes, the authors outline the X-11 methodology. One chapter is devoted to the study of moving averages with an emphasis on those used by X-11. Readers will also find a complete example of seasonal adjustment, and have a detailed picture of all the calculations. The linear regression models used for trading-day effects and the process of detecting and correcting extreme values are studied in the example. The estimation of the Easter effect is dealt with in a separate chapter insofar as the models used in X-11-ARIMA and X-12-ARIMA are appreciably different. Dominique Ladiray is an Administrateur at the French Institut National de la Statistique et des Etudes Economiques. He is also a Professor at the Ecole Nationale de la Statistique et de l'Administration Economique, and at the Ecole Nationale de la Statistique et de l'Analyse de l'Information. He currently works on short-term economic analysis. Benoît Quenneville is a methodologist with Statistics Canada's Time Series Research and Analysis Centre. He holds a Ph.D. from the University of Western Ontario. His research interests are in time series analysis with an emphasis on official statistics.
This volume contains revised versions of selected papers presented during the 23rd Annual Conference of the German Classification Society GfKl (Gesellschaft für Klassifikation). The conference took place at the University of Bielefeld (Germany) in March 1999 under the title "Classification and Information Processing at the Turn of the Millennium". Researchers and practitioners - interested in data analysis, classification, and information processing in the broad sense, including computer science, multimedia, WWW, knowledge discovery, and data mining as well as special application areas such as (in alphabetical order) biology, finance, genome analysis, marketing, medicine, public health, and text analysis - had the opportunity to discuss recent developments and to establish cross-disciplinary cooperation in their fields of interest. Additionally, software and book presentations as well as several tutorial courses were organized. The scientific program of the conference included 18 plenary or semi-plenary lectures and more than 100 presentations in special sections. The peer-reviewed papers are presented in 5 chapters as follows:
* Data Analysis and Classification
* Computer Science, Computational Statistics, and Data Mining
* Management Science, Marketing, and Finance
* Biology, Genome Analysis, and Medicine
* Text Analysis and Information Retrieval
As an unambiguous assignment of results to single chapters is sometimes difficult, papers are grouped in a way that the editors found appropriate.
'Refreshingly clear and engaging' Tim Harford 'Delightful . . . full of unique insights' Prof Sir David Spiegelhalter There's no getting away from statistics. We encounter them every day. We are all users of statistics whether we like it or not. Do missed appointments really cost the NHS £1bn per year? What's the difference between the mean gender pay gap and the median gender pay gap? How can we work out if a claim that we use 42 billion single-use plastic straws per year in the UK is accurate? What did the Vote Leave campaign's £350m bus really mean? How can we tell if the headline 'Public pensions cost you £4,000 a year' is correct? Does snow really cost the UK economy £1bn per day? But how do we distinguish statistical fact from fiction? What can we do to decide whether a number, claim or news story is accurate? Without an understanding of data, we cannot truly understand what is going on in the world around us. Written by Anthony Reuben, the BBC's first head of statistics, Statistical is an accessible and empowering guide to challenging the numbers all around us.
Most governments in today's market economies spend significant sums of money on labour market programmes. The declared aims of these programmes are to increase the re-employment chances of the unemployed. This book investigates which active labour market programmes in Poland are value for money and which are not. To this end, modern statistical methods are applied to both macro- and microeconomic data. It is shown that training programmes increase, whereas job subsidies and public works decrease, the re-employment opportunities of the unemployed. In general, all active labour market policy effects are larger in absolute size for men than for women. By surveying previous studies in the field and outlining the major statistical approaches that are employed in the evaluation literature, the book can be of help to any student interested in programme evaluation irrespective of the particular programme or country concerned.
Selected papers presented at the 22nd Annual Conference of the German Classification Society GfKl (Gesellschaft für Klassifikation), held at the University of Dresden in 1998, are contained in this volume of "Studies in Classification, Data Analysis, and Knowledge Organization". One aim of GfKl was to provide a platform for a discussion of results concerning a challenge of growing importance that could be labeled as "Classification in the Information Age" and to support interdisciplinary activities from research and applications that incorporate directions of this kind. As could be expected, the largest share of papers is closely related to classification and, in the broadest sense, data analysis and statistics. Additionally, besides contributions dealing with questions arising from the usage of new media and the internet, applications in, e.g., (in alphabetical order) archeology, bioinformatics, economics, environment, and health have been reported. As always, an unambiguous assignment of results to single topics is sometimes difficult; thus, from more than 130 presentations offered within the scientific program, 65 papers are grouped into the following chapters and subchapters:
* Plenary and Semi-Plenary Presentations
  - Classification and Information
  - Finance and Risk
* Classification and Related Aspects of Data Analysis and Learning
  - Classification, Data Analysis, and Statistics
  - Conceptual Analysis and Learning
* Usage of New Media and the Internet
  - Information Systems, Multimedia, and WWW
  - Navigation and Classification on the Internet and Virtual Universities
* Applications in Economics
Discover how statistical information impacts decisions in today's business world as Anderson/Sweeney/Williams/Camm/Cochran/Fry/Ohlmann's leading ESSENTIALS OF STATISTICS FOR BUSINESS AND ECONOMICS, 9E connects concepts in each chapter to real-world practice. This edition delivers sound statistical methodology, a proven problem-scenario approach and meaningful applications that reflect the latest developments in business and statistics today. More than 350 new and proven real business examples, a wealth of practical cases and meaningful hands-on exercises highlight statistics in action. You gain practice using leading professional statistical software with exercises and appendices that walk you through using JMP® Student Edition 14 and Excel® 2016. WebAssign's online course management system is available separately to further strengthen this business statistics approach and help you maximize your course success.
Do economics and statistics succeed in explaining human social behaviour? To answer this question, Leland Gerson Neuberg studies some pioneering controlled social experiments. Starting in the late 1960s, economists and statisticians sought to improve social policy formation with random assignment experiments such as those that provided income guarantees in the form of a negative income tax. This book explores anomalies in the conceptual basis of such experiments and in the foundations of statistics and economics more generally. Scientific inquiry always faces certain philosophical problems. Controlled experiments of human social behaviour, however, cannot avoid some methodological difficulties not evident in physical science experiments. Drawing upon several examples, the author argues that methodological anomalies prevent microeconomics and statistics from explaining human social behaviour as coherently as the physical sciences explain nature. He concludes that controlled social experiments are a frequently overrated tool for social policy improvement.
The chapter starts with a positioning of this dissertation in the marketing discipline. It then provides a comparison of the two most popular methods for studying consumer preferences/choices, namely conjoint analysis and discrete choice experiments. Chapter 1 continues with a description of the context of discrete choice experiments. Subsequently, the research problems and the objectives of this dissertation are discussed. The chapter concludes with an outline of the organization of this dissertation. 1.1 Positioning of the Dissertation. During this century, increasing globalization and technological progress have forced companies to undergo rapid and dramatic changes: for some a threat, for others a source of new opportunities. Companies have to survive in a Darwinian marketplace where the principle of natural selection applies. Marketplace success goes to those companies that are able to produce marketable value, i.e., products and services that others are willing to purchase (Kotler 1997). Every company must be engaged in new-product development to create the new products customers want, because competitors will do their best to supply them. Besides offering competitive advantages, new products usually lead to sales growth and stability. As household incomes increase and consumers become more selective, firms need to know how consumers respond to different features and appeals. Successful products and services begin with a thorough understanding of consumer needs and wants. Stated otherwise, companies need to know about consumer preferences to manufacture tailor-made products that consumers are willing to buy.
Quantile regression constitutes an ensemble of statistical techniques intended to estimate and draw inferences about conditional quantile functions. Median regression, as introduced in the 18th century by Boscovich and Laplace, is a special case. In contrast to conventional mean regression that minimizes sums of squared residuals, median regression minimizes sums of absolute residuals; quantile regression simply replaces symmetric absolute loss by asymmetric linear loss. Since its introduction in the 1970s by Koenker and Bassett, quantile regression has been gradually extended to a wide variety of data analytic settings including time series, survival analysis, and longitudinal data. By focusing attention on local slices of the conditional distribution of response variables it is capable of providing a more complete, more nuanced view of heterogeneous covariate effects. Applications of quantile regression can now be found throughout the sciences, including astrophysics, chemistry, ecology, economics, finance, genomics, medicine, and meteorology. Software for quantile regression is now widely available in all the major statistical computing environments. The objective of this volume is to provide a comprehensive review of recent developments of quantile regression methodology illustrating its applicability in a wide range of scientific settings. The intended audience of the volume is researchers and graduate students across a diverse set of disciplines.
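In the standard Koenker and Bassett notation (a reminder of the asymmetric linear "check" loss mentioned above, not text taken from this volume), the tau-th regression quantile solves

\hat{\beta}(\tau) = \arg\min_{\beta} \sum_{i=1}^{n} \rho_\tau\left(y_i - x_i^{\top}\beta\right),
\qquad
\rho_\tau(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right), \quad 0 < \tau < 1.

For \tau = 1/2 the loss is proportional to |u|, and the estimator reduces to median (least absolute deviations) regression.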
Like the preceding volumes, which met with a lively response, the present volume collects contributions that stress methodology or successful industrial applications. The papers are classified under four main headings: sampling inspection, process quality control, data analysis and process capability studies, and finally experimental design.
In the first part of this book, bargaining experiments with different economic and ethical frames are investigated. The distributive principles and norms the subjects apply and their justifications for these principles are evaluated. The bargaining processes and the resulting agreements are analyzed. In the second part, different bargaining theories are presented and the corresponding solutions are axiomatically characterized. A bargaining concept with goals that depend on economic and ethical features of the bargaining situation is introduced. Observations from the experimental data lead to the ideas for the axiomatic characterization of a bargaining solution with goals.
The Who, What, and Where of America is designed to provide a sampling of key demographic information. It covers the United States, every state, each metropolitan statistical area, and all the counties and cities with a population of 20,000 or more.
Who: Age, Race and Ethnicity, and Household Structure
What: Education, Employment, and Income
Where: Migration, Housing, and Transportation
Each part is preceded by highlights and ranking tables that show how areas diverge from the national norm. These research aids are invaluable for understanding data from the ACS and for highlighting what it tells us about who we are, what we do, and where we live. Each topic is divided into four tables revealing the results of the data collected from different types of geographic areas in the United States, generally with populations greater than 20,000.
·Table A. States
·Table B. Counties
·Table C. Metropolitan Areas
·Table D. Cities
In this edition, you will find social and economic estimates on the ways American communities are changing with regard to the following:
·Age and race
·Health care coverage
·Marital history
·Educational attainment
·Income and occupation
·Commute time to work
·Employment status
·Home values and monthly costs
·Veteran status
·Size of home or rental unit
This title is the latest in the County and City Extra Series of publications from Bernan Press. Other titles include County and City Extra, County and City Extra: Special Decennial Census Edition, and Places, Towns, and Townships.
In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators - that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises the eight reports which describe the use of indirect estimators, based on case studies from a variety of federal programs. Many researchers will find that this book provides a valuable survey of how indirect estimators are used in practice and addresses some of the pitfalls of these methods.
In order to obtain many of the classical results in the theory of statistical estimation, it is usual to impose regularity conditions on the distributions under consideration. In small sample and large sample theories of estimation there are well established sets of regularity conditions, and it is worthwhile to examine what may follow if any one of these regularity conditions fails to hold. "Non-regular estimation" literally means the theory of statistical estimation when some or other of the regularity conditions fail to hold. In this monograph, the authors present a systematic study of the meaning and implications of regularity conditions, and show how the relaxation of such conditions can often lead to surprising conclusions. Their emphasis is on small sample results and on showing how pathological examples may be considered in this broader framework.
Since the first edition of this book was published, Bayesian networks have become even more important for applications in a vast array of fields. This second edition includes new material on influence diagrams, learning from data, value of information, cybersecurity, debunking bad statistics, and much more. Focusing on practical real-world problem-solving and model building, as opposed to algorithms and theory, it explains how to incorporate knowledge with data to develop and use (Bayesian) causal models of risk that provide more powerful insights and better decision making than is possible from purely data-driven solutions. Features:
* Provides all tools necessary to build and run realistic Bayesian network models
* Supplies extensive example models based on real risk assessment problems in a wide range of application domains, for example finance, safety, systems reliability, law, forensics, cybersecurity and more
* Introduces all necessary mathematics, probability, and statistics as needed
* Establishes the basics of probability, risk, and building and using Bayesian network models, before going into the detailed applications
A dedicated website contains exercises and worked solutions for all chapters along with numerous other resources. The AgenaRisk software contains a model library with executable versions of all of the models in the book. Lecture slides are freely available to accredited academic teachers adopting the book on their course.
In many branches of science relevant observations are taken sequentially over time. Bayesian Analysis of Time Series discusses how to use models that explain the probabilistic characteristics of these time series and then utilizes the Bayesian approach to make inferences about their parameters. This is done by taking the prior information and, via Bayes' theorem, implementing Bayesian inferences of estimation, testing hypotheses, and prediction. The methods are demonstrated using both R and WinBUGS. The R package is primarily used to generate observations from a given time series model, while WinBUGS allows one to perform a posterior analysis that provides a way to determine the characteristics of the posterior distribution of the unknown parameters. Features:
* Presents a comprehensive introduction to the Bayesian analysis of time series.
* Gives many examples over a wide variety of fields including biology, agriculture, business, economics, sociology, and astronomy.
* Contains numerous exercises at the end of each chapter, many of which use R and WinBUGS.
* Can be used in graduate courses in statistics and biostatistics, but is also appropriate for researchers, practitioners and consulting statisticians.
About the author: Lyle D. Broemeling, Ph.D., is Director of Broemeling and Associates Inc., and is a consulting biostatistician. He has been involved with academic health science centers for about 20 years and has taught and been a consultant at the University of Texas Medical Branch in Galveston, The University of Texas MD Anderson Cancer Center and the University of Texas School of Public Health. His main interest is in developing Bayesian methods for use in medical and biological problems and in authoring textbooks in statistics. His previous books for Chapman & Hall/CRC include Bayesian Biostatistics and Diagnostic Medicine, and Bayesian Methods for Agreement.
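The book itself works in R and WinBUGS; purely as a rough sketch of the same generate-then-infer workflow (written here in Python under simplifying assumptions, and not taken from the book), one can simulate an AR(1) series and compute the exact conjugate posterior of its autoregressive coefficient when the noise variance is treated as known.

import numpy as np

rng = np.random.default_rng(1)

# 1. Generate observations from the time series model: x_t = phi * x_{t-1} + e_t
phi_true, sigma, n = 0.6, 1.0, 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal(scale=sigma)

# 2. Posterior analysis via Bayes' theorem, with a conjugate normal prior on phi.
#    Conditioning on x_0, the likelihood is that of a regression of x_t on x_{t-1},
#    so the posterior of phi is normal when the noise variance sigma^2 is known.
m0, v0 = 0.0, 1.0                      # prior: phi ~ N(m0, v0)
y, z = x[1:], x[:-1]
post_prec = 1.0 / v0 + np.sum(z**2) / sigma**2
post_var = 1.0 / post_prec
post_mean = post_var * (m0 / v0 + np.sum(z * y) / sigma**2)

print(f"true phi = {phi_true}")
print(f"posterior mean = {post_mean:.3f}, posterior sd = {np.sqrt(post_var):.3f}")
print("95% credible interval:",
      np.round(post_mean + 1.96 * np.sqrt(post_var) * np.array([-1, 1]), 3))

In the book's setting, the sampling step would be done in R and the posterior analysis in WinBUGS, which also handles unknown noise variance and more elaborate time series models by simulation rather than by a closed-form calculation like the one above.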
An introduction to the fundamentals of statistics and their computer-assisted application, aimed specifically at students of economics and the social sciences. From the contents: data entry and data modification; frequency distributions and descriptive statistics; exploratory data analysis; cross-tabulations and measures of association; statistical tests; correlation measures; scatter plots; regression analysis; trend analysis and curve fitting; time series analysis; factor analysis; cluster analysis; discriminant analysis; exercises.
You may like...
Operations and Supply Chain Management - James Evans, David Collier (Hardcover)
Operations And Supply Chain Management - David Collier, James Evans (Hardcover)
Kwantitatiewe statistiese tegnieke - Swanepoel, Vivier, … (Book)
Statistics for Business and Economics… - Paul Newbold, William Carlson, … (Paperback) - R2,397 (Discovery Miles 23 970)
Advances in Contemporary Statistics and… - Abdelaati Daouia, Anne Ruiz-Gazen (Hardcover) - R5,586 (Discovery Miles 55 860)
Patterns of Economic Change by State and… - Hannah Anderson Krog (Paperback) - R2,776 (Discovery Miles 27 760)
The Dynamics of Industrial Collaboration… - Anne Plunket, Colette Voisin, … (Hardcover) - R3,158 (Discovery Miles 31 580)