International Journal of Computers and Applications
2003, Volume 25, Number 3, 198-205.
Learning Object Evaluation: Computer-Mediated Collaboration and Inter-Rater Reliability
John Vargo1, John C. Nesbit2, Karen Belfer2 and Anne Archambault3
1University of Canterbury
2Simon Fraser University
3Microsoft Corporation
Abstract
Learning objects enable learning resources to be shared widely,
so that system-wide production costs can be reduced. But how can
users select from a set of similar learning objects in a repository
and be assured of their quality? This article reviews recent developments
in the establishment of learning object repositories and metadata
standards, and presents a formative reliability analysis of an online,
collaborative method for evaluating the quality of learning objects. The
method uses a 10-item Learning Object Review Instrument (LORI)
within a Convergent Participation evaluation model that brings
together instructional designers, media developers, and instructors.
The inter-rater reliability analysis of 12 raters evaluating eight
learning objects identified specific items in LORI that require further
development. Overall, the collaborative process substantially
increased the reliability and validity of aggregate learning object ratings.
The study concludes with specific recommendations, including
changes to LORI items, a rater training process, and requirements
for selecting an evaluation team.
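The abstract does not name the reliability coefficient used in the analysis. As an illustration only, the following is a minimal Python sketch, assuming ICC(2,1) (Shrout & Fleiss, 1979), a common inter-rater reliability statistic for a design in which the same raters score the same objects; the 8 x 12 matrix shape mirrors the study's eight learning objects and 12 raters, but the rating data below are synthetic.

import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, single-rater ICC(2,1) (Shrout & Fleiss, 1979).

    ratings: (n_objects, n_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-object means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Two-way ANOVA mean squares.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between objects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = (np.sum((ratings - grand) ** 2)
           - k * np.sum((row_means - grand) ** 2)
           - n * np.sum((col_means - grand) ** 2))
    mse = sse / ((n - 1) * (k - 1))                        # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic example: 12 raters scoring 8 objects on a 1-5 LORI-style item scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(8, 12)).astype(float)
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")

ICC(2,1) estimates the reliability of a single rater's scores; averaging across a panel, as in the collaborative process studied here, yields a higher aggregate reliability, consistent with the abstract's finding that collaboration substantially increased the reliability of aggregate ratings.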
Keywords
learning objects, eLearning, collaborative, design, reliability,
evaluation, web-based education.
Citation
Vargo, J., Nesbit, J. C., Belfer, K., & Archambault, A. (2003). Learning object evaluation: Computer-mediated collaboration and inter-rater reliability. International Journal of Computers and Applications, 25(3), 198-205.