The general theme of this book is to present innovative
psychometric modeling and methods. In particular, this book
includes research and successful examples of modeling techniques
for new data sources from digital assessments, such as eye-tracking
data, hint use, and process data from game-based assessments. In
addition, innovative psychometric modeling approaches, such as
graphical models, item tree models, network analysis, and cognitive
diagnostic models, are included. Chapters 1, 2, 4, and 6 are about
psychometric models and methods for learning analytics. The first
two chapters focus on advanced cognitive diagnostic models for
tracking learning and the improvement of attribute classification
accuracy. Chapter 4 demonstrates the use of network analysis for
learning analytics. Chapter 6 introduces the conjunctive root
causes model for the understanding of prerequisite skills in
learning. Chapters 3, 5, 8, and 9 are about innovative psychometric
techniques for modeling process data. Specifically, Chapters 3 and 5
illustrate the use of generalized linear mixed-effects models and
item tree models to analyze eye-tracking data. Chapter 8 discusses
an approach to modeling hint use and response accuracy in learning
environments. Chapter 9 demonstrates the identification of
observable outcomes in game-based assessments. Chapters 7 and
10 introduce innovative latent variable modeling approaches,
including the graphical and generalized linear model approach and
the dynamic modeling approach. In summary, the book includes
theoretical, methodological, and applied research and practices
that serve as the foundation for future development. These chapters
provide illustrations of efforts to model and analyze multiple data
sources from digital assessments. As computer-based assessments
emerge and evolve, it is important that researchers expand and
improve the methods for modeling and analyzing these new data
sources. This book is a useful resource for researchers interested
in developing psychometric methods to address issues in this age of
digital assessment.
The general theme of this book is to encourage the use of data
mining methodology that is, or could be, applied at the interplay
of education, statistics, and computer science to solve
psychometric issues and challenges in the new generation of
assessments. In addition to item response data, other data
collected in the process of assessment and learning will be
utilized to help solve psychometric challenges and facilitate
learning and other educational applications. Process data include
those collected, or available for collection, during the assessment
and instructional phases, such as response sequence data, log
files, the use of help features, the content of web
searches, etc. Some book chapters present the general exploration
of process data in large-scale assessment. Other chapters address
how to integrate psychometrics and learning analytics in
assessments and surveys, how to use data mining techniques for
security and cheating detection, and how to use assessment results
to facilitate students' learning and guide teachers' instructional
efforts. The book includes both theoretical and methodological
presentations that might guide the future in this area, as well as
illustrations of efforts to implement big data analytics that might
be instructive to those in the field of learning and psychometrics.
The context of the effort is diverse, including K-12, higher
education, financial planning, and survey utilization. It is hoped
that readers can learn from these different disciplines; such
cross-disciplinary exchange, especially for those who specialize in
assessment, is critical to expanding our ideas of what data
analytics can do to inform assessment practices.
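The process data described above (response sequences, log files, use of help features) must typically be reduced to analyzable features before any psychometric modeling. As a minimal sketch, assuming a hypothetical log format of (student, item, event, timestamp) records that is illustrative rather than taken from the book, one might extract per-item response time and help-feature use like this:

```python
from collections import defaultdict

# Hypothetical log records: (student_id, item_id, event, timestamp_seconds).
# The event names and schema are illustrative, not from the book.
log = [
    ("s1", "i1", "item_start", 0.0),
    ("s1", "i1", "open_help", 12.5),
    ("s1", "i1", "answer", 40.0),
    ("s1", "i2", "item_start", 45.0),
    ("s1", "i2", "answer", 70.0),
]

def process_features(log):
    """Per (student, item): time spent on the item and number of help uses."""
    features = defaultdict(lambda: {"time_on_item": 0.0, "help_uses": 0})
    start = {}
    for student, item, event, t in log:
        key = (student, item)
        if event == "item_start":
            start[key] = t
        elif event == "open_help":
            features[key]["help_uses"] += 1
        elif event == "answer":
            features[key]["time_on_item"] = t - start.get(key, t)
    return dict(features)

feats = process_features(log)
print(feats[("s1", "i1")])  # time_on_item 40.0, help_uses 1
```

Features of this kind could then serve as inputs to the mixed-effects or data mining models the chapters discuss.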
The new generation of tests is faced with new challenges. In the
K-12 setting, the new learning targets are intended to assess
higher-order thinking skills, prepare students for college and
career, and keep American students competitive with their
international peers. In addition, the new generation of state tests
requires the use of technology in item delivery and the embedding
of assessment in real-world, authentic situations. It further
requires accurate assessment of students at all ability levels. One
of the most important questions is how to maintain test fairness in
the new assessments with technology-enhanced innovative items and
technology-delivered tests. In the traditional testing programs such as
licensure and certification tests and college admission tests, test
fairness has constantly been a key psychometric issue in test
development and this continues to be the case with the national
testing programs. As test fairness needs to be addressed throughout
the whole process of test development, experts from state,
admission, and licensure tests will address test fairness
challenges in the new generation assessment. The book chapters
clarify misconceptions of test fairness including the use of
admission test results in cohort comparison, the use of
international assessment results in trend evaluation, whether
standardization and fairness necessarily mean uniformity when
test-takers have different cultural backgrounds, and whether
standardization can ensure fairness. More technically, chapters
also address how compromised items and test fairness relate to
classification decisions, how accessibility in item development and
accommodations can be supported by technology, how to assess
special populations such as students with dyslexia, how to use the
Blinder-Oaxaca decomposition for differential item functioning
detection, and how to detect differential feature functioning in
automated scoring. Overall, this book addresses test fairness issues in state
assessment, college admission testing, international assessment,
and licensure tests. Fairness is discussed in the context of
culture and special populations. Further, fairness related to
performance assessment and automated scoring is a focus as well.
This book provides a valuable source of information on test
fairness issues in test development for a new generation of
assessments in which technology is deeply involved.
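The Blinder-Oaxaca decomposition mentioned above splits the mean score gap between two groups into a part explained by differences in observed covariates and an unexplained part. The following is a minimal sketch on simulated data, not an example from the book; it uses the two-fold form with group B's coefficients as the reference structure:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares coefficients, intercept prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def oaxaca_twofold(XA, yA, XB, yB):
    """Two-fold Blinder-Oaxaca decomposition of the mean outcome gap,
    taking group B's coefficients as the reference."""
    bA, bB = ols(XA, yA), ols(XB, yB)
    mA = np.concatenate([[1.0], XA.mean(axis=0)])
    mB = np.concatenate([[1.0], XB.mean(axis=0)])
    gap = yA.mean() - yB.mean()
    explained = (mA - mB) @ bB    # part due to covariate differences
    unexplained = mA @ (bA - bB)  # part due to coefficient differences
    return gap, explained, unexplained

# Simulated example: the same score-generating process in both groups,
# so the gap should be almost entirely "explained" by the covariate.
rng = np.random.default_rng(0)
XA = rng.normal(1.0, 1.0, size=(500, 1))
yA = 2.0 + 3.0 * XA[:, 0] + rng.normal(0, 0.1, 500)
XB = rng.normal(0.0, 1.0, size=(500, 1))
yB = 2.0 + 3.0 * XB[:, 0] + rng.normal(0, 0.1, 500)
gap, expl, unexpl = oaxaca_twofold(XA, yA, XB, yB)
print(round(gap, 2), round(expl, 2), round(unexpl, 2))
```

In a fairness analysis, a large unexplained component for an item or score would flag it for closer differential-functioning review; how the chapters operationalize this for DIF detection is more elaborate than this sketch.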
Assessment innovation tied to technology is greatly needed in a
wide variety of assessment applications. This book adopts an
interdisciplinary perspective to learn from advances in developing
technology enhanced innovative assessments from multiple fields.
The book chapters address the development of virtual assessments,
including game-based assessment, simulation-based assessment, and
narrative-based assessment, as well as how simulation- and
game-based assessments serve both formative and summative purposes. Further,
chapters address the critical challenge of integrating assessment
directly into the learning process so that teacher effectiveness
and student learning can be enhanced. Two chapters specifically
address the psychometric challenges related to innovative items.
One chapter talks about evaluating the psychometric properties of
innovative items while the other chapter presents a new
psychometric model for calibrating innovative items embedded in
multiple contexts. In addition, validity issues are addressed
related to technology-enhanced innovative assessment. It is hoped
that the book provides readers with rich and useful information
about the development of several types of virtual assessments from
multiple perspectives. The authors include experts from industry
where innovative items have been used for many years and experts
from research institutes and universities who have done pioneering
work related to developing innovative items with formative
applications to facilitate learning. In addition, expert advice has
been provided on validating such work.
The Race to the Top (RTTT) federal education policy fostered a new
generation of state tests. This policy advocated adopting common
core standards, which set a higher level of learning targets for
students in US K-12 education. These standards are intended to
assess higher-order thinking skills and prepare students for
college and career. Meanwhile, they are aligned with international
assessments to keep US students abreast of their international
peers. Furthermore, the new generation of state tests requires the
use of technology-enhanced items to align student assessments with
students' learning environments. Computer technology is
indispensable to accomplishing this goal. Computer-based
tests related to the common core standards differ from previous
state computer-based tests in two important respects: first, the
current version requires accurate assessment of students across all
ability levels; second, it promotes the use of an efficient test
delivery system, essentially computerized adaptive assessment, in
K-12 state testing programs. In addition to providing summative
information about student learning, the new common core tests add a
formative assessment component to the whole assessment system to
provide timely feedback to students and teachers during the process
of learning and instruction. As with its predecessor, the new assessment policy
also holds teachers and schools accountable for student learning.
To meet the requirements of the new federal education policy,
states formed two consortia, the Partnership for Assessment of
Readiness for College and Careers (PARCC) and the Smarter Balanced
Assessment Consortium (SBAC), to develop assessments in alignment
with the new common core standards. This book is based on the
presentations made at the Thirteenth Annual Maryland Assessment
Research Center's Conference on "The Next Generation of Testing:
Common Core Standards, Smarter Balanced, PARCC, and the Nationwide
Testing Movement". Experts from the consortia and across the nation
provided an overview of the intention, history, and current status of this nationwide
testing movement. Item development, test design, and transition
from old state tests to the new consortia tests are discussed. Test
scoring and reporting are especially highlighted in the book. The
challenges related to standard setting for the new tests,
especially in the CAT environment, and to linking performance
standards from state tests with consortium tests are explored. The
issue of utilizing consortium test results to evaluate students'
college and career readiness is another topic addressed in the book. The last
chapters address the critical issue of validity in the new
generation of state testing programs. Overall, this book presents
the latest status of the development of the two consortium
assessment systems. It addresses the most challenging issues
related to the next generation of state testing programs including
development of innovative items assessing higher order thinking
skills, scoring of such items, standard setting and linkage with
the old state specific standards, and validity issues. This edited
book provides a very good source of information related to the
consortium tests based on the common core standards.
Modelling student growth has been a federal policy requirement
under No Child Left Behind (NCLB). In addition to tracking student
growth, the latest Race to the Top (RTTT) federal education policy
stipulates the evaluation of teacher effectiveness from the
perspective of added value that teachers contribute to student
learning and growth. Student growth modelling and teacher
value-added modelling are complex. The complexity stems, in part,
from issues due to non-random assignment of students into classes
and schools, measurement error in students' achievement scores that
are utilized to evaluate the added value of teachers,
multidimensionality of the measured construct across multiple
grades, and the inclusion of covariates. National experts at the
Twelfth Annual Maryland Assessment Research Center's Conference on
"Value Added Modeling and Growth Modeling with Particular
Application to Teacher and School Effectiveness" present the latest
developments and methods to tackle these issues. This book includes
chapters based on these conference presentations. Further, the book
provides some answers to questions such as: What makes a good
growth model? What criteria should be used in evaluating growth
models? How should outputs from growth models be utilized? How
could auxiliary teacher information be utilized to improve
value-added estimates? How could multiple sources of student
information be accumulated to estimate teacher effectiveness?
Should student-level and school-level covariates be included? And
what are the impacts of the potential heterogeneity of teacher
effects across students of different aptitudes or other differing
characteristics on growth modelling and teacher evaluation? Overall, this book
addresses reliability and validity issues in growth modelling and
value-added modelling and presents the latest developments in this
area. In addition, some persistent issues have been approached from
a new perspective. This edited volume provides a very good source
of information related to the current explorations in student
growth and teacher effectiveness evaluation.
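The core idea behind the value-added models discussed above can be sketched in a deliberately simplified form: regress current achievement on prior achievement, then average the residuals within each teacher. The example below runs on simulated data with random assignment; the complications the chapters tackle (non-random assignment, measurement error in the prior scores, multidimensionality, covariates) are exactly what this naive version ignores.

```python
import numpy as np

def value_added(prior, current, teacher_ids):
    """Naive covariate-adjustment value-added: regress current scores on
    prior scores, then average the residuals within each teacher."""
    X = np.column_stack([np.ones(len(prior)), prior])
    beta, *_ = np.linalg.lstsq(X, current, rcond=None)
    residuals = current - X @ beta
    return {t: residuals[teacher_ids == t].mean()
            for t in np.unique(teacher_ids)}

# Simulated data: teacher "B" adds 5 points beyond what prior scores predict.
rng = np.random.default_rng(1)
n = 400
prior = rng.normal(50, 10, n)
teacher_ids = np.array(["A"] * (n // 2) + ["B"] * (n // 2))
effect = np.where(teacher_ids == "B", 5.0, 0.0)
current = 10 + 0.8 * prior + effect + rng.normal(0, 3, n)

va = value_added(prior, current, teacher_ids)
print({t: round(v, 1) for t, v in va.items()})
```

With random assignment the estimated difference between the two teachers recovers the simulated 5-point effect; under non-random assignment or noisy prior scores it would not, which is why the mixed-model and latent-variable approaches in the book are needed.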
This book focuses on interim and formative assessments as
distinguished from the more usual interest in summative assessment.
I was particularly interested in seeing what the experts have to
say about a full system of assessment. This book has particular
interest in what information a teacher, a school or even a state
could collect that monitors the progress of a student as he or she
learns. The authors were asked to think about assessing the effects
of teaching and learning throughout the student's participation in
the curriculum. This book is the product of a conference by the
Maryland Assessment Research Center for Education Success (MARCES)
with funding from the Maryland State Department of Education.
The Race to the Top program strongly advocates the use of computer
technology in assessments. It dramatically promotes computer-based
testing, linear or adaptive, in K-12 state assessment programs.
Moreover, assessment requirements driven by this federal initiative
exponentially increase the complexity in assessment design and test
development. This book provides readers with a review of the
history and basics of computer-based tests. It also offers a macro
perspective for designing such assessment systems in the K-12
setting as well as a micro perspective on new challenges such as
innovative items, scoring of such items, cognitive diagnosis, and
vertical scaling for growth modelling and value added approaches to
assessment. The editors' goal is to provide readers with necessary
information to create a smarter computer-based testing system by
following the advice and experience of experts from education as
well as other industries. This book is based on a conference
(http://marces.org/workshop.htm) held by the Maryland Assessment
Research Center for Education Success. It presents multiple
perspectives, including those of test vendors and state departments
of education, on designing and implementing a computer-based test
in the K-12 setting. The design and implementation of such a system
requires deliberate planning and thorough consideration. The
practitioners and as a good source of information for quality
control. The technical issues discussed in this book are relatively
new and unique to K-12 large-scale computer-based testing programs,
especially due to the recent federal policy. Several chapters
provide possible solutions to psychometricians dealing with the
technical challenges related to innovative items, cognitive
diagnosis, and growth modelling in computer-based linear or
adaptive tests in the K-12 setting.
Validity is widely held to be the most important criterion for an
assessment. Nevertheless, assessment professionals have disagreed
about the meaning of validity almost from the introduction of the
term as applied to testing about 100 years ago. Over the years, the
best and brightest people in assessment have contributed their
thinking to this problem and the fact that they have not agreed is
testimony to the complexity and importance of validity. Even today,
ways to define validity are being debated in the published
literature in the assessment profession. How can such a fundamental
concept be so controversial? This book brings focus to diverse
perspectives about validity. Its chapter authors were chosen
because of their expertise and because they differ from each other
in the ways they think about the validity construct. Its
introduction and ten chapters bridge both the theoretical and the
practical. Contributors include the most prominent names in the
field of validity, and their perspectives are at once cogent and
controversial. From these diverse and well-informed discussions,
the reader will gain a deep understanding of the core issues in
validity along with directions toward possible resolutions. The
debate that exists among these authors is a rich one that will
stimulate the reader's own understanding and opinion. Several
chapters are oriented more practically. Ways to study validity are
presented by professionals who blend current assessment practice
with new suggestions for what sort of evidence to develop and how
to generate the needed information. In addition, they provide
examples of some options for presenting the validity argument in
the most effective ways. The initial chapter by the editor is an
effort to orient the reader as well as to provide an overview of
the book. Bob Lissitz has provided a brief perspective on each of
the subsequent chapters, as well as a series of questions regarding
validation that readers will want to try to answer for themselves
as they read through this book.
This book's topic is fundamental to assessment, its authors are
distinguished, and its scope is broad. It deserves to become
established as a fundamental reference on validity for years to
come.
The general theme of this book is to present the applications of
artificial intelligence (AI) in test development. In particular,
this book includes research and successful examples of using AI
technology in automated item generation, automated test assembly,
automated scoring, and computerized adaptive testing. By utilizing
artificial intelligence, the efficiency of item development, test
form construction, test delivery, and scoring could be dramatically
increased. Chapters on automated item generation offer different
perspectives related to generating a large number of items with
controlled psychometric properties including the latest development
of using machine learning methods. Automated scoring is illustrated
for different types of assessments such as speaking and writing
from both methodological aspects and practical considerations.
Further, automated test assembly is elaborated for the conventional
linear tests from both classical test theory and item response
theory perspectives. A chapter on item pool design and assembly for
linear-on-the-fly tests elaborates on further complications in
practice when test security is a major concern. Finally, several chapters
focus on computerized adaptive testing (CAT) at either item or
module levels. CAT is further illustrated as an effective approach
to increasing test-takers' engagement in testing. In summary, the
book includes theoretical, methodological, and applied
research and practices that serve as the foundation for future
development. These chapters provide illustrations of efforts to
automate the process of test development. While some of these
automation processes, such as automated test assembly, automated
scoring, and computerized adaptive testing, have become common
practice, others, such as automated item generation, call for
more research and exploration. As new AI methods emerge and
evolve, it is expected that researchers will expand and improve
the methods for automating different steps in test development,
and that practitioners will adopt quality automation procedures
to improve assessment practices.
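The item-level computerized adaptive testing mentioned above can be illustrated with a minimal sketch. The item pool below is hypothetical (the discrimination and difficulty values are invented for illustration), and the selection rule shown, picking the unused item with maximum Fisher information at the current ability estimate, is one common CAT approach, not necessarily the specific method treated in the book:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, pool, administered):
    """Pick the not-yet-administered item with maximum information."""
    return max(
        (i for i in range(len(pool)) if i not in administered),
        key=lambda i: item_information(theta, *pool[i]),
    )

# Hypothetical item pool: (discrimination a, difficulty b)
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
print(select_next_item(0.5, pool, set()))  # 2: highest information near theta = 0.5
```

Information peaks where difficulty matches the current ability estimate and grows with discrimination, which is why the highly discriminating item with b = 0.5 is chosen for an examinee at theta = 0.5.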
The general theme of this book is to encourage the use of
relevant data mining methodology, applied at the interplay of
education, statistics, and computer science, to solve
psychometric issues and challenges in the new generation of
assessments. In addition to item response data, other data
collected in the process of assessment and learning will be
utilized to help solve psychometric challenges and facilitate
learning and other educational applications. Process data include
those collected or available for collection during the process of
assessment and instruction, such as response sequence
data, log files, the use of help features, the content of web
searches, etc. Some book chapters present the general exploration
of process data in large-scale assessments. Further, other
chapters address how to integrate psychometrics and learning
analytics in assessments and surveys, how to use data mining
techniques for security and cheating detection, and how to use
assessment results to facilitate students' learning and guide
teachers' instructional efforts. The book includes both theoretical and methodological
presentations that might guide the future in this area, as well as
illustrations of efforts to implement big data analytics that might
be instructive to those in the field of learning and psychometrics.
The context of the effort is diverse, including K-12, higher
education, financial planning, and survey utilization. It is
hoped that readers, especially those specialized in assessment,
will learn from these different disciplines and expand their
ideas of what data analytics can do to inform assessment
practices.
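As a toy illustration of the kind of data mining technique for cheating detection mentioned above, the sketch below flags responses that are both correct and unusually fast relative to an item's response-time distribution, using a simple z-score rule. The response times, the cutoff, and the rule itself are invented for illustration and are not a method taken from the book:

```python
from statistics import mean, stdev

def flag_rapid_correct(times, correct, z_cut=-2.0):
    """Return indices of responses that are correct AND unusually
    fast (standardized response time below z_cut) for this item."""
    mu, sd = mean(times), stdev(times)
    return [
        i for i, (t, c) in enumerate(zip(times, correct))
        if c and sd > 0 and (t - mu) / sd < z_cut
    ]

# Hypothetical response times (seconds) and scores on one item
times = [42, 55, 38, 61, 3, 47, 50, 44, 39, 58]
correct = [1, 0, 1, 1, 1, 0, 1, 1, 1, 1]
print(flag_rapid_correct(times, correct))  # [4]
```

Operational detection methods are considerably more sophisticated (e.g., model-based response-time analysis), but the idea of combining speed and accuracy evidence is the same.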
This book introduces theories and practices for using assessment
data to enhance learning and instruction. Topics include reshaping
the homework review process, iterative learning engineering,
learning progressions, learning maps, score report designing, the
use of psychosocial data, and the combination of adaptive testing
and adaptive learning. In addition, studies proposing new methods
and strategies, technical details about the collection and
maintenance of process data, and examples illustrating proposed
methods and software are included. Chapters 1, 4, 6, 8, and 9
discuss how to make valid interpretations of results and achieve
more effective instruction from various sources of data. Chapters
3 and 7 propose and evaluate new methods to promote students'
learning by using evidence-based iterative learning engineering and
supporting the teachers' use of assessment data, respectively.
Chapter 2 provides technical details on the collection, storage,
and security protection of process data. Chapter 5 introduces
software for automating some aspects of developmental education and
the use of predictive modeling. Chapter 10 describes the barriers
to using psychosocial data for formative assessment purposes.
Chapter 11 describes a conceptual framework for adaptive learning
and testing and gives an example of a functional learning and
assessment system. In summary, the book includes comprehensive
perspectives on recent developments and challenges in using test
data for formative assessment purposes. The chapters provide
innovative theoretical frameworks, new perspectives on the use of
data with technology, and how to build new methods based on
existing theories. This book is a useful resource to researchers
who are interested in using data and technology to inform decision
making, facilitate instructional utility, and achieve better
learning outcomes.
Assessment innovation tied to technology is greatly needed in a
wide variety of assessment applications. This book adopts an
interdisciplinary perspective to learn from advances in developing
technology-enhanced innovative assessments from multiple fields.
The book chapters address the development of virtual assessments
including game-based assessment, simulation-based assessment, and
narrative-based assessment, as well as how simulation- and
game-based assessments serve both formative and summative
purposes. Further,
chapters address the critical challenge of integrating assessment
directly into the learning process so that teacher effectiveness
and student learning can be enhanced. Two chapters specifically
address the psychometric challenges related to innovative items.
One chapter discusses evaluating the psychometric properties of
innovative items, while the other presents a new
psychometric model for calibrating innovative items embedded in
multiple contexts. In addition, validity issues related to
technology-enhanced innovative assessment are addressed. It is hoped
that the book provides readers with rich and useful information
about the development of several types of virtual assessments from
multiple perspectives. The authors include experts from industry
where innovative items have been used for many years and experts
from research institutes and universities who have done pioneering
work related to developing innovative items with formative
applications to facilitate learning. In addition, expert advice has
been provided on validating such work.