For over 100 years, the evolution of modern survey
methodology--using the theory of representative sampling to make
inferences from a part of the population to the whole--has been
paralleled by a drive toward automation, harnessing technology and
computerization to make parts of the survey process easier, faster,
and better. The availability of portable computers in the late
1980s ushered in computer-assisted personal interviewing (CAPI), in
which interviewers administer a survey instrument to respondents
using a computerized version of the questionnaire on a portable
laptop computer. Computer-assisted interviewing (CAI) methods have
proven to be extremely useful in survey administration. However, the
practical problems encountered in documenting and testing CAI
instruments suggest that this is an
opportune time to reexamine not only the process of developing CAI
instruments but also the future directions of survey automation
writ large.
In 1982 the Census Bureau requested the Committee on National
Statistics to establish a panel to suggest research and
experiments, to recommend improved methods, and to guide the Census
Bureau on technical problems in appraising contending methods with
regard to the conduct of the decennial census. In response, the
panel produced an interim report that focused on recommendations
for improvements in census methodology that warranted early
investigation and testing. This report updates and expands the
ideas and conclusions about decennial census methodology.
Table of Contents
Front Matter
1 Introduction
2 Purposes and Uses of the Decennial Census
3 Census Methodology: Prior Practice and Current Test Plans
4 Evaluating the Decennial Census: Past Experience
5 Taking the Census I: Improving the Count
6 Taking the Census II: The Uses of Sampling and Administrative Records
7 Adjustment of Population Counts
8 Measuring the Completeness of the 1990 Census
References
Biographical Sketches of Panel Members and Staff
Index
Committee on National Statistics
In 2014 the National Science Foundation (NSF) provided support to
the National Academies of Sciences, Engineering, and Medicine for a
series of Forums on Open Science in response to a government-wide
directive to support increased public access to the results of
research funded by the federal government. However, the breadth of
the work resulting from the series precluded a focus on any
specific topic or discussion about how to improve public access.
Thus, the main goal of the Workshop on Transparency and
Reproducibility in Federal Statistics was to develop some
understanding of what principles and practices are, or would be,
supportive of making federal statistics more understandable and
reviewable, both by agency staff and the public. This publication
summarizes the presentations and discussions from the workshop.
Table of Contents
Front Matter
1 Introduction
2 Existing Guidelines Related to Transparency
3 Benefits and Costs of Transparency: Views from Three Statistical Agencies
4 Benefits and Costs of Transparency: Views from the United Kingdom and Canada
5 Two U.S. Examples: SAIPE and LEHD
6 Operationalizing Transparency
7 Summarizing Day 1
8 Standards for Metadata and Work Processes
9 Possible Next Steps
Appendix A: Workshop Agenda
Appendix B: List of Participants
Committee on National Statistics
Innovations in Software Engineering for Defense Systems (Paperback)
National Research Council, Division on Engineering and Physical Sciences, Committee on Applied and Theoretical Statistics, Division of Behavioral and Social Sciences and Education, Committee on National Statistics, …
R1,243
Discovery Miles 12 430
Ships in 12 - 17 working days
Recent rough estimates are that the U.S. Department of Defense
(DoD) spends at least $38 billion a year on the research,
development, testing, and evaluation of new defense systems;
approximately 40 percent of that cost (at least $16 billion) is spent
on software development and testing. There is widespread
understanding within DoD that the effectiveness of
software-intensive defense systems is often hampered by low-quality
software as well as increased costs and late delivery of software
components. Given the costs involved, even relatively incremental
improvements to the software development process for defense
systems could represent substantial savings. And given the
importance of producing defense software that carries out its
intended function, even relatively small improvements in the quality
of defense software would be valuable to identify. DoD software
engineers and test and evaluation officials may not be fully aware
of the range of available techniques, both because these techniques
were developed only recently and because they originate from an
orientation somewhat removed from software engineering, namely a
statistical perspective. The panel's charge therefore was to
convene a workshop to identify statistical software engineering
techniques that could have applicability to DoD systems in
development.
Statistics, Testing, and Defense Acquisition - Background Papers (Paperback)
National Research Council, Division of Behavioral and Social Sciences and Education, Commission on Behavioral and Social Sciences and Education, Panel on Statistical Methods for Testing and Evaluating Defense Systems; Edited by John E. Rolph, …
R1,477
Discovery Miles 14 770
Ships in 12 - 17 working days
The Panel on Statistical Methods for Testing and Evaluating Defense
Systems had a broad mandate: to examine the use of statistics in
conjunction with defense testing. This involved examining methods
for software testing, reliability test planning and estimation,
validation of modeling and simulation, and use of modern techniques
for experimental design. Given the breadth of these areas,
including the great variety of applications and special issues that
arise, making a contribution in each of these areas required that
the Panel's work and recommendations be at a relatively general
level. However, a variety of more specific research issues were
either brought to the Panel's attention by members of the test and
acquisition community, e.g., what was referred to as Dubin's
challenge (addressed in the Panel's interim report), or were
identified by members of the Panel. In many of these cases the
Panel thought that a more in-depth analysis or a more detailed
application of suggestions or recommendations made by the Panel
would either be useful as input to its deliberations or could be
used to help communicate the more individual views of Panel members
to the defense test community. This resulted in several
research efforts. Given various criteria, especially immediate
relevance to the test and acquisition community, the Panel has
decided to make available three technical or background papers,
each authored by a Panel member jointly with a colleague. These
papers are individual contributions and are not a consensus product
of the Panel; however, the Panel has drawn from these papers in
preparation of its final report: Statistics, Testing, and Defense
Acquisition. The Panel has found each of these papers to be
extremely useful and they are strongly recommended to readers of
the Panel's final report.
Table of Contents
Front Matter
Strategic Information Generation and Transmission: The Evolution of Institutions in DoD Operational Testing
On the Performance of Weibull Life Tests Based on Exponential Life Testing Designs
Application of Statistical Science to Testing and Evaluating Software Intensive Systems
Envisioning the 2020 Census (Paperback)
National Research Council, Division of Behavioral and Social Sciences and Education, Committee on National Statistics, Panel on the Design of the 2010 Census Program of Evaluations and Experiments; Edited by Constance F. Citro, …
R2,495
Discovery Miles 24 950
Ships in 12 - 17 working days
Planning for the 2020 census is already beginning. This book from
the National Research Council examines several aspects of census
planning, including questionnaire design, address updating,
non-response follow-up, coverage follow-up, de-duplication of
housing units and residents, editing and imputation procedures, and
several other census operations. This book recommends that the
Census Bureau overhaul its approach to research and development.
The report urges the Bureau to set cost and quality goals for the
2020 and future censuses, improving efficiency by taking advantage
of new technologies.
Table of Contents
Front Matter
Part I: Final Report
Summary
1 Introduction
2 Planning the 2020 Census: Cost and Quality
3 Census Bureau Research, Past and Present
4 Revitalizing Census Research and Development
Appendix A: Past Census Research Programs
Appendix B: 2010 Census Program of Evaluations and Experiments
Part II: Interim Report: Experimentation and Evaluation in the 2010 Census (December 7, 2007)
Executive Summary
1 Introduction
2 Initial Views on 2010 Census Experiments
3 Initial Views on 2010 Census Evaluations
4 Considerations for the 2010 Census
Appendix A: The Census Bureau's Suggested Topics for Research
Appendix B: Internet Response Options in Selected Population Censuses
Part III: Letter Report (February 19, 2009)
Letter Report
References
Biographical Sketches of Panel Members and Staff
Committee on National Statistics
National Patterns of R&D Resources is an annual report issued by the National Center for Science and Engineering Statistics (NCSES) of the National Science Foundation, which provides a national view of current 'patterns' in the funding of R&D activities in government, industry, academia, federally funded research and development centers, and non-profits. Total R&D funds are broken out at the national level by type of provider, type of recipient, and whether the R&D is basic, applied, or developmental. These patterns are compared both longitudinally, against historical R&D amounts, and internationally. This report series, which is based on input from several censuses and surveys, is used to formulate policies that, e.g., might increase incentives to support different types, sources, or recipients of R&D than is currently the case. To communicate these R&D patterns, each report is composed of a set of tabulations of national R&D disaggregated by type of donor, type of recipient, and type of R&D. While this satisfies many key user groups, the question was whether some modifications of the report could attract a wider user community and at the same time provide more useful information for current users.
National Patterns of R&D Resources: Future Directions for Content and Methods addresses the following questions: (1) what additional topics and tabulations could be presented without modifying the current portfolio of R&D censuses and surveys; (2) what additional topics and tabulations might be presented by expanding these current data collections; (3) what could be done to enhance the international comparability of the tabulations; (4) since much of the information on non-profit R&D providers and recipients is estimated from 15-year-old data, what impact might this be having on the quality of the associated National Patterns tabulations; (5) what statistical models could be used to support the issuance of R&D estimates for states and geographic regions below the national level; (6) what use could be made of the recent development of administrative sources of R&D information; and finally, (7) what graphical tools could be added to the current tabulations to enhance the communication of R&D patterns to the users of this series of publications.
Table of Contents
Front Matter
1 Introduction
2 What Is *National Patterns*?
3 Users' Needs
4 Statistical Models and Administrative Records as Supplements to Surveys
5 Small-Area Estimation
6 Presentation of Information in *National Patterns*
References
Appendix A: Acronyms and Abbreviations
Appendix B: Workshop Agenda and Participants
Appendix C: Biographical Sketches of Steering Committee Members and Workshop Presenters
Committee on National Statistics