If you are a manager who relies on the work of data analysts to support your decision-making, this book is for you. Anyone playing a role in the field of analytics can benefit from it as well. In the two decades the editors of this book spent teaching and consulting in the field of analytics, they noticed a critical shortcoming in the communication abilities of many analytics professionals: analysts have difficulty articulating, in business terms, what their analyses show and what actionable recommendations follow. When presenting, they tend to lapse into the technicalities of mathematical procedures rather than focusing on the strategic and tactical impact and meaning of their work. As analytics has become more mainstream and widespread in organizations, this problem has grown more acute. Data Analytics: Effective Methods for Presenting Results tackles this issue. The editors draw on their experience both as presenters and as audience members who have become lost during presentations. Over the years, they experimented with different ways of presenting analytics work to make a more compelling case to top managers, and they share the tried-and-true methods they discovered. The book also presents insights from other analysts and managers who relate their own experiences; it is truly a collection of experience and insight from academics and professionals involved with analytics. The book is not a primer on how to draw the most beautiful charts and graphs, nor is it about how to perform any specific kind of analysis. Rather, it shares how professionals in various industries present their analytics results effectively and win over their audiences. The book spans multiple functional areas within a business, and in some cases it discusses how to adapt presentations to the needs of audiences at different levels of management.
"[Taleb is] Wall Street's principal dissident. . . . [Fooled By Randomness] is to conventional Wall Street wisdom approximately what Martin Luther's ninety-five theses were to the Catholic Church."
This third edition of Braun and Murdoch's bestselling textbook now includes discussion of the use and design principles of the tidyverse packages in R, including expanded coverage of ggplot2 and R Markdown. The expanded simulation chapter introduces the Box-Muller and Metropolis-Hastings algorithms, and new examples and exercises have been added throughout. This is the only introduction you'll need to start programming in R, the computing standard for analyzing data. The book comes with real R code that demonstrates the standards of the language. Unlike other introductory books on the R system, it emphasizes portable programming skills that apply to most computing languages, as well as techniques used to develop more complex projects. Solutions, datasets, and any errata are available from www.statprogr.science. Worked examples from real applications, hundreds of exercises, and downloadable code, datasets, and solutions make a complete package for anyone working in or learning practical data science.
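The Box-Muller algorithm mentioned above is simple enough to sketch in a few lines. The book works in R; the sketch below uses Python and an arbitrary sample size, purely as an illustration of the transform, not as code from the book:

```python
import math
import random

def box_muller(u1, u2):
    # Map two independent Uniform(0,1) draws to two independent
    # standard-normal draws (basic trigonometric form of Box-Muller).
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

random.seed(1)
sample = []
for _ in range(5000):
    # 1 - random() lies in (0, 1], which avoids log(0)
    z1, z2 = box_muller(1.0 - random.random(), random.random())
    sample.extend([z1, z2])

mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
print(mean, var)  # sample mean near 0, sample variance near 1
```

With u1 = 0.5 and u2 = 0.25, for instance, theta is pi/2, so the cosine branch returns (almost) 0 and the sine branch returns sqrt(-2 ln 0.5).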
Technical Analysis of Stock Trends helps investors make smart, profitable trading decisions by providing proven long- and short-term stock trend analysis. It gets right to the heart of effective technical trading concepts, explaining technical theory such as Dow Theory, reversal patterns, consolidation formations, trends and channels, technical analysis of commodity charts, and advances in investment technology. It also includes a comprehensive guide to trading tactics, covering long and short goals, stock selection, charting, low- and high-risk strategies, trend-recognition tools, balancing and diversifying the stock portfolio, application of capital, and risk management. This updated edition includes patterns and modifiable charts that are tighter and more illustrative. Expanded material is included on Pragmatic Portfolio Theory as a more elegant alternative to Modern Portfolio Theory, and a newer, simpler, and more powerful alternative to Dow Theory is presented. This book is the perfect introduction, giving you the knowledge and wisdom to craft long-term success.
The complete guide to statistical modelling with GENSTAT Focusing on solving practical problems and using real datasets collected during research of various sorts, Statistical Modelling Using GENSTAT emphasizes developing and understanding statistical tools. Throughout the text, these statistical tools are applied to answer the very questions the original researchers sought to answer. GENSTAT, the powerful statistical software, is introduced early in the book and practice problems are carried out using the software, in the process helping students to understand the application of statistical methods to real-world data.
The process of transforming data into actionable knowledge is complex and requires powerful machines and advanced analytics techniques. Analytics and Knowledge Management examines the role of analytics in knowledge management and the integration of big data theories, methods, and techniques into an organizational knowledge management framework. Its chapters, written by researchers and professionals, provide insight into theories, models, techniques, and applications, with case studies examining the use of analytics in organizations. Analytics is the examination, interpretation, and discovery of meaningful patterns, trends, and knowledge from data and textual information. It provides the basis for knowledge discovery and completes the cycle in which knowledge management and knowledge utilization happen. Organizations should focus on data quality, the application domain, the selection of analytics techniques, and how to take action based on the patterns and insights derived from analytics. Case studies in the book explore how to perform analytics on social networking and user-based data to develop knowledge. One case explores the analysis of data from Twitter feeds; another examines the analysis of data obtained through user feedback. One chapter introduces the definitions and processes of social media analytics from different perspectives and focuses on the techniques and tools used for social media analytics. Data visualization has a critical role in the advancement of modern data analytics, particularly in business intelligence and analytics, where it can guide managers in understanding market trends and customer purchasing patterns over time.
The book illustrates various data visualization tools that can support answering different types of business questions to improve profits and customer relationships. This insightful reference concludes with a chapter on the critical issue of cybersecurity. It examines the process of collecting and organizing data, reviews various tools for text analysis and data analytics, and discusses dealing with large collections of diverse data types, from legacy systems to social network platforms.
Developed over 20 years of teaching academic courses, the Handbook of Financial Risk Management can be divided into two main parts: risk management in the financial sector; and a discussion of the mathematical and statistical tools used in risk management. This comprehensive text offers readers the chance to develop a sound understanding of financial products and the mathematical models that drive them, exploring in detail where the risks are and how to manage them. Key Features: *Written by an author with both theoretical and applied experience *Ideal resource for students pursuing a master's degree in finance who want to learn risk management *Comprehensive coverage of the key topics in financial risk management *Contains 114 exercises, with solutions provided online at www.crcpress.com/9781138501874
Bernan Press proudly presents the 14th edition of Employment, Hours, and Earnings: States and Areas, 2019. A special addition to Bernan Press's Handbook of U.S. Labor Statistics: Employment, Earnings, Prices, Productivity, and Other Labor Data, this reference is a consolidated wealth of employment information, providing monthly and annual data on hours worked and earnings made by industry, including figures and summary information spanning several years. These data are presented for states and metropolitan statistical areas. This edition features: *Nearly 300 tables with data on employment for each state, the District of Columbia, and the nation's seventy-five largest metropolitan statistical areas (MSAs) *Detailed, non-seasonally adjusted, industry data organized by month and year *Hours and earnings data for each state, by industry *An introduction for each state and the District of Columbia that denotes salient data and noteworthy trends, including changes in population and the civilian labor force, industry increases and declines, employment and unemployment statistics, and a chart detailing employment percentages, by industry *Rankings of the seventy-five largest MSAs, including census population estimates, unemployment rates, and the percent change in total nonfarm employment *Concise technical notes that explain pertinent facts about the data, including sources, definitions, and significant changes, and provide references for further guidance *A comprehensive appendix that details the geographical components of the seventy-five largest MSAs The employment, hours, and earnings data in this publication provide a detailed and timely picture of the fifty states, the District of Columbia, and the nation's seventy-five largest MSAs. These data can be used to analyze key factors affecting state and local economies and to compare national cyclical trends to local-level economic activity.
This reference is an excellent source of information for analysts in both the public and private sectors. Readers who are involved in public policy can use these data to determine the health of the economy, to clearly identify which sectors are growing and which are declining, and to determine the need for federal assistance. State and local jurisdictions can use the data to determine the need for services, including training and unemployment assistance, and for planning and budgetary purposes. In addition, the data can be used to forecast tax revenue. In private industry, the data can be used by business owners to compare their business to the economy as a whole; and to identify suitable areas when making decisions about plant locations, wholesale and retail trade outlets, and for locating a particular sector base.
This book provides an introduction to the use of statistical concepts and methods to model and analyze financial data. The ten chapters of the book fall naturally into three sections. Chapters 1 to 3 cover some basic concepts of finance, focusing on the properties of returns on an asset. Chapters 4 through 6 cover aspects of portfolio theory and the methods of estimation needed to implement that theory. The remainder of the book, Chapters 7 through 10, discusses several models for financial data, along with the implications of those models for portfolio theory and for understanding the properties of return data. The audience for the book is students majoring in Statistics and Economics as well as in quantitative fields such as Mathematics and Engineering. Readers are assumed to have some background in statistical methods along with courses in multivariate calculus and linear algebra.
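The returns covered in Chapters 1 to 3 come in two standard flavors, simple and logarithmic. A tiny sketch with made-up prices (an illustration, not material from the book) shows the additivity property that makes log returns convenient:

```python
import math

prices = [100.0, 102.0, 99.5, 101.0]  # hypothetical closing prices

# Simple (net) return over one period: R_t = P_t / P_{t-1} - 1
simple = [p1 / p0 - 1.0 for p0, p1 in zip(prices, prices[1:])]

# Log return: r_t = ln(P_t / P_{t-1})
logret = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# Log returns add across periods: their sum equals the log of the
# gross return over the whole horizon, which simple returns do not.
print(sum(logret), math.log(prices[-1] / prices[0]))
```

This additivity is one reason log returns are the usual starting point for statistical models of asset prices.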
Given the huge amount of information on the internet and in practically every domain of knowledge today, knowledge discovery calls for automation. The book deals with methods from classification and data analysis that respond effectively to this rapidly growing challenge. The interested reader will find new methodological insights as well as applications in economics, management science, finance, and marketing, and in pattern recognition, biology, health, and archaeology.
This book includes many of the papers presented at the 6th International Workshop on Model-Oriented Data Analysis, held in June 2001. This series began in March 1987 with a meeting on the Wartburg near Eisenach (at that time in the GDR). The next four meetings were in 1990 (St Kyrik monastery, Bulgaria), 1992 (Petrodvorets, St Petersburg, Russia), 1995 (Spetses, Greece) and 1998 (Marseilles, France). Initially the main purpose of these workshops was to bring together leading scientists from 'Eastern' and 'Western' Europe for the exchange of ideas in theoretical and applied statistics, with special emphasis on experimental design. Now that the separation between East and West is much less rigid, this exchange has, in principle, become much easier. However, it is still important to provide opportunities for this interaction. MODA meetings are celebrated for their friendly atmosphere. Indeed, discussions between young and senior scientists at these meetings have resulted in several fruitful long-term collaborations. This intellectually stimulating atmosphere is achieved by limiting the number of participants to around eighty, by the choice of a location in which communal living is encouraged and, of course, through the careful scientific direction provided by the Programme Committee. It is a tradition of these meetings to provide low-cost accommodation, low fees and financial support for the travel of young and Eastern participants. This is only possible through the help of sponsors, and outside financial support was again important for the success of the meeting.
In the 1920s, Walter Shewhart envisioned that the marriage of statistical methods and manufacturing processes would produce reliable and consistent quality products. Shewhart (1931) conceived the idea of statistical process control (SPC) and developed the well-known and appropriately named Shewhart control chart. From the 1930s to the 1990s, however, the literature on SPC schemes was "captured" by the Shewhart paradigm of normality, independence and homogeneous variance, when in fact the problems facing today's industries conform far less to these assumptions than those Shewhart faced in the 1930s. As a result of advances in machine and sensor technology, process data can often be collected on-line. In this situation, the process observations that result from data collection activities will frequently not be serially independent, but autocorrelated. Autocorrelation has a significant impact on a control chart: the chart may signal a lack of statistical control when in fact the process is in control. As the prevalence of this type of data is expected to increase in industry (Hahn 1989), so does the need to control and monitor it. The literature has reflected this trend, and research in the area of SPC with autocorrelated data continues so that effective methods of handling correlated data are available. This type of data regularly occurs in the chemical and process industries, and is pervasive in computer-integrated manufacturing environments, clinical laboratory settings and the majority of SPC applications across various manufacturing and service industries (Alwan 1991).
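The false-alarm problem described above is easy to reproduce in a small simulation. The sketch below (an illustration with assumed parameters, not code from the source) monitors a positively autocorrelated AR(1) process with Shewhart-style limits whose width is estimated from the average moving range, a method that presumes independent observations:

```python
import random

random.seed(7)

def ar1(n, phi, sigma=1.0):
    # Simulate a mean-zero AR(1) process: x_t = phi * x_{t-1} + e_t
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, sigma)
        out.append(x)
    return out

data = ar1(10000, phi=0.8)  # strong positive autocorrelation

# Estimate sigma from the average moving range (d2 = 1.128 for
# subgroups of size 2) -- a formula valid only for independent data.
mr = [abs(b - a) for a, b in zip(data, data[1:])]
sigma_hat = (sum(mr) / len(mr)) / 1.128
center = sum(data) / len(data)

# Count points outside the 3-sigma Shewhart limits.
alarms = sum(1 for x in data if abs(x - center) > 3.0 * sigma_hat)
print(alarms / len(data))  # far above the nominal 0.0027 rate
```

Because positive autocorrelation makes successive differences small relative to the overall process variation, the moving-range estimate of sigma is badly deflated, the limits are too tight, and an in-control process raises alarms constantly.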
Originally published in 1987. This collection of original papers deals with various issues of specification in the context of the linear statistical model. The volume honours the early econometric work of Donald Cochrane, late Dean of Economics and Politics at Monash University in Australia. The chapters focus on problems associated with autocorrelation of the error term in the linear regression model and include appraisals of early work on this topic by Cochrane and Orcutt. The book includes an extensive survey of autocorrelation tests; some exact finite-sample tests; and some issues in preliminary test estimation. A wide range of other specification issues is discussed, including the implications of random regressors for Bayesian prediction; modelling with joint conditional probability functions; and results from duality theory. There is a major survey chapter dealing with specification tests for non-nested models, and some of the applications discussed by the contributors deal with the British National Accounts and with Australian financial and housing markets.
The most widely used statistical method in seasonal adjustment is without doubt that implemented in the X-11 Variant of the Census Method II Seasonal Adjustment Program. Developed at the US Bureau of the Census in the 1950's and 1960's, this computer program has undergone numerous modifications and improvements, leading especially to the X-11-ARIMA software packages in 1975 and 1988 and X-12-ARIMA, the first beta version of which is dated 1998. While these software packages integrate, to varying degrees, parametric methods, and especially the ARIMA models popularized by Box and Jenkins, they remain in essence very close to the initial X-11 method, and it is this "core" that Seasonal Adjustment with the X-11 Method focuses on. With a Preface by Allan Young, the authors document the seasonal adjustment method implemented in the X-11 based software. It will be an important reference for government agencies, macroeconomists, and other serious users of economic data. After some historical notes, the authors outline the X-11 methodology. One chapter is devoted to the study of moving averages with an emphasis on those used by X-11. Readers will also find a complete example of seasonal adjustment, and have a detailed picture of all the calculations. The linear regression models used for trading-day effects and the process of detecting and correcting extreme values are studied in the example. The estimation of the Easter effect is dealt with in a separate chapter insofar as the models used in X-11-ARIMA and X-12-ARIMA are appreciably different. Dominique Ladiray is an Administrateur at the French Institut National de la Statistique et des Etudes Economiques. He is also a Professor at the Ecole Nationale de la Statistique et de l'Administration Economique, and at the Ecole Nationale de la Statistique et de l'Analyse de l'Information. He currently works on short-term economic analysis. 
Benoît Quenneville is a methodologist with Statistics Canada Time Series Research and Analysis Centre. He holds a Ph.D. from the University of Western Ontario. His research interests are in time series analysis with an emphasis on official statistics.
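To give a flavor of the moving averages the book studies: the centered 2x12 moving average, the classic first estimate of the trend in X-11, removes a stable 12-month seasonal pattern exactly. The sketch below is a generic Python illustration with synthetic data, not code from the book:

```python
import math

def ma_2x12(series):
    # Centered 2x12 moving average: a 13-term weighted average with
    # weights 1/24 on the two endpoints and 1/12 on the 11 interior
    # terms; six values are lost at each end of the series.
    weights = [1 / 24] + [1 / 12] * 11 + [1 / 24]
    return [
        sum(w * series[i - 6 + k] for k, w in enumerate(weights))
        for i in range(6, len(series) - 6)
    ]

# A pure period-12 seasonal pattern is annihilated by the filter,
# leaving the (here, zero) trend.
seasonal = [math.sin(2 * math.pi * t / 12) for t in range(48)]
trend = ma_2x12(seasonal)
print(max(abs(v) for v in trend))  # essentially zero
```

Every 12 consecutive values of a fixed monthly pattern sum to the same total, so both 12-term averages inside the filter are constant and the seasonal component drops out.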
Statistics for Business is meant as a textbook for students in business, computer science, bioengineering, environmental technology, and mathematics. In recent years, business statistics has been used widely for decision making in business endeavours. The book emphasizes statistical applications, statistical model building, and manual solution methods. Special features: the text is designed for self-study, and for most of the methods the required algorithm is clearly explained using flow-charting methodology. More than 200 solved problems and more than 175 end-of-chapter exercises with answers are provided, giving teachers ample flexibility in adapting the textbook to their individual class plans. The book is meant for beginners and advanced learners alike as a text in Statistics for Business or Applied Statistics for undergraduate and graduate students.
This volume contains revised versions of selected papers presented during the 23rd Annual Conference of the German Classification Society GfKl (Gesellschaft für Klassifikation). The conference took place at the University of Bielefeld (Germany) in March 1999 under the title "Classification and Information Processing at the Turn of the Millennium". Researchers and practitioners interested in data analysis, classification, and information processing in the broad sense, including computer science, multimedia, WWW, knowledge discovery, and data mining as well as special application areas such as (in alphabetical order) biology, finance, genome analysis, marketing, medicine, public health, and text analysis, had the opportunity to discuss recent developments and to establish cross-disciplinary cooperation in their fields of interest. Additionally, software and book presentations as well as several tutorial courses were organized. The scientific program of the conference included 18 plenary or semi-plenary lectures and more than 100 presentations in special sections. The peer-reviewed papers are presented in 5 chapters as follows: * Data Analysis and Classification * Computer Science, Computational Statistics, and Data Mining * Management Science, Marketing, and Finance * Biology, Genome Analysis, and Medicine * Text Analysis and Information Retrieval As an unambiguous assignment of results to single chapters is sometimes difficult, papers are grouped in a way that the editors found appropriate.
Most governments in today's market economies spend significant sums of money on labour market programmes. The declared aims of these programmes are to increase the re-employment chances of the unemployed. This book investigates which active labour market programmes in Poland are value for money and which are not. To this end, modern statistical methods are applied to both macro- and microeconomic data. It is shown that training programmes increase, whereas job subsidies and public works decrease, the re-employment opportunities of the unemployed. In general, all active labour market policy effects are larger in absolute size for men than for women. By surveying previous studies in the field and outlining the major statistical approaches that are employed in the evaluation literature, the book can be of help to any student interested in programme evaluation irrespective of the particular programme or country concerned.
Selected papers presented at the 22nd Annual Conference of the German Classification Society GfKl (Gesellschaft für Klassifikation), held at the University of Dresden in 1998, are contained in this volume of "Studies in Classification, Data Analysis, and Knowledge Organization". One aim of GfKl was to provide a platform for a discussion of results concerning a challenge of growing importance that could be labeled as "Classification in the Information Age" and to support interdisciplinary activities from research and applications that incorporate directions of this kind. As could be expected, the largest share of papers is closely related to classification and, in the broadest sense, data analysis and statistics. Additionally, besides contributions dealing with questions arising from the usage of new media and the internet, applications in, e.g., (in alphabetical order) archeology, bioinformatics, economics, environment, and health have been reported. As always, an unambiguous assignment of results to single topics is sometimes difficult; thus, from more than 130 presentations offered within the scientific program, 65 papers are grouped into the following chapters and subchapters: * Plenary and Semi-Plenary Presentations - Classification and Information - Finance and Risk * Classification and Related Aspects of Data Analysis and Learning - Classification, Data Analysis, and Statistics - Conceptual Analysis and Learning * Usage of New Media and the Internet - Information Systems, Multimedia, and WWW - Navigation and Classification on the Internet and Virtual Universities * Applications in Economics
The chapter starts with a positioning of this dissertation in the marketing discipline. It then provides a comparison of the two most popular methods for studying consumer preferences and choices, namely conjoint analysis and discrete choice experiments. Chapter 1 continues with a description of the context of discrete choice experiments. Subsequently, the research problems and the objectives of this dissertation are discussed. The chapter concludes with an outline of the organization of this dissertation. 1.1 Positioning of the Dissertation. During this century, increasing globalization and technological progress have forced companies to undergo rapid and dramatic changes; for some a threat, for others a source of new opportunities. Companies have to survive in a Darwinian marketplace where the principle of natural selection applies. Marketplace success goes to those companies that are able to produce marketable value, i.e., products and services that others are willing to purchase (Kotler 1997). Every company must be engaged in new-product development to create the new products customers want, because competitors will do their best to supply them. Besides offering competitive advantages, new products usually lead to sales growth and stability. As household incomes increase and consumers become more selective, firms need to know how consumers respond to different features and appeals. Successful products and services begin with a thorough understanding of consumer needs and wants. Stated otherwise, companies need to know about consumer preferences in order to manufacture tailor-made products that consumers are willing to buy.
Like the preceding volumes, which met with a lively response, the present volume collects contributions that stress methodology or successful industrial applications. The papers are classified under four main headings: sampling inspection, process quality control, data analysis and process capability studies, and finally experimental design.
In the first part of this book bargaining experiments with different economic and ethical frames are investigated. The distributive principles and norms the subjects apply and their justifications for these principles are evaluated. The bargaining processes and the resulting agreements are analyzed. In the second part different bargaining theories are presented and the corresponding solutions are axiomatically characterized. A bargaining concept with goals that depend on economic and ethical features of the bargaining situation is introduced. Observations from the experimental data lead to the ideas for the axiomatic characterization of a bargaining solution with goals.
In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators, that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises the eight resulting reports, which describe the use of indirect estimators and are based on case studies from a variety of federal programs. Many researchers will find that the book provides a valuable survey of how indirect estimators are used in practice and that it addresses some of the pitfalls of these methods.
In order to obtain many of the classical results in the theory of statistical estimation, it is usual to impose regularity conditions on the distributions under consideration. In small-sample and large-sample theories of estimation there are well-established sets of regularity conditions, and it is worthwhile to examine what may follow if any one of these regularity conditions fails to hold. "Non-regular estimation" literally means the theory of statistical estimation when some or other of the regularity conditions fail to hold. In this monograph, the authors present a systematic study of the meaning and implications of regularity conditions, and show how the relaxation of such conditions can often lead to surprising conclusions. Their emphasis is on small-sample results and on showing how pathological examples may be considered in this broader framework.
An introduction to the foundations of statistics and their computer-assisted application, aimed specifically at students of economics and the social sciences. From the contents: data collection and data modification; frequency distributions and descriptive statistics; exploratory data analysis; cross-tabulations and measures of association; statistical tests; measures of correlation; scatterplots; regression analysis; trend analysis and curve fitting; time series analysis; factor analysis; cluster analysis; discriminant analysis; exercises.
In many branches of science, relevant observations are taken sequentially over time. Bayesian Analysis of Time Series discusses how to use models that explain the probabilistic characteristics of these time series and then utilizes the Bayesian approach to make inferences about their parameters. This is done by combining the prior information with the data via Bayes' theorem to implement Bayesian inferences of estimation, hypothesis testing, and prediction. The methods are demonstrated using both R and WinBUGS. The R package is primarily used to generate observations from a given time series model, while the WinBUGS package allows one to perform a posterior analysis and determine the characteristics of the posterior distribution of the unknown parameters. Features: Presents a comprehensive introduction to the Bayesian analysis of time series. Gives many examples over a wide variety of fields including biology, agriculture, business, economics, sociology, and astronomy. Contains numerous exercises at the end of each chapter, many of which use R and WinBUGS. Can be used in graduate courses in statistics and biostatistics, but is also appropriate for researchers, practitioners and consulting statisticians. About the author: Lyle D. Broemeling, Ph.D., is Director of Broemeling and Associates Inc. and a consulting biostatistician. He has been involved with academic health science centers for about 20 years and has taught and consulted at the University of Texas Medical Branch in Galveston, the University of Texas MD Anderson Cancer Center, and the University of Texas School of Public Health. His main interests are in developing Bayesian methods for use in medical and biological problems and in authoring textbooks in statistics. His previous books for Chapman & Hall/CRC include Bayesian Biostatistics and Diagnostic Medicine, and Bayesian Methods for Agreement.
You may like...
Quantitative statistical techniques - Swanepoel, Vivier, … (Paperback) R627 / Discovery Miles 6 270
Statistics for Business and Economics… - Paul Newbold, William Carlson, … (Paperback) R1,807 / Discovery Miles 18 070
Operations and Supply Chain Management - James Evans, David Collier (Hardcover)
Contemporary Perspectives in Data Mining… - Kenneth D. Lawrence, Ronald K. Klimberg (Hardcover) R3,040 / Discovery Miles 30 400