In higher education courses, peer assessment activities are common for keeping students engaged during presentations. Defining precisely how students assess the work of others requires careful consideration. Asking students for numeric grades is the most common method. However, students tend to assign high grades to most projects, so aggregating peer assessments results in nearly all projects receiving the same grade. Moreover, students might strategically assign low grades to the projects of others so that their own projects stand out. Asking students to order all projects from best to worst imposes a high cognitive load, as studies have shown that people find it difficult to order more than a handful of items. To address these issues, we propose a novel peer rating model, R2R, consisting of (a) an algorithm that elicits student assessments and (b) a protocol for aggregating grades to produce a single order. The algorithm asks students to evaluate projects and answer pairwise comparison queries, which are then aggregated into a ranking over the projects. R2R was deployed and tested in a university course and showed promising results, including fewer ties between alternatives and a significant reduction in the communication load on students.
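The abstract does not specify which aggregation rule R2R uses to turn pairwise comparison answers into a ranking. As a minimal illustrative sketch only, the snippet below aggregates (winner, loser) pairs with Copeland-style net-win scores, one standard way to produce a single order from pairwise comparisons; the function name and input format are assumptions, not the paper's actual protocol.

```python
from collections import defaultdict

def copeland_ranking(comparisons):
    """Aggregate pairwise comparisons into a ranking via Copeland-style scores.

    comparisons: list of (winner, loser) pairs, one per answered query.
    Returns projects sorted best-first by net pairwise wins.
    """
    score = defaultdict(int)
    for winner, loser in comparisons:
        score[winner] += 1  # one net win for the preferred project
        score[loser] -= 1   # one net loss for the other project
    # Sort by descending net score; ties keep insertion order (stable sort).
    return sorted(score, key=lambda p: score[p], reverse=True)

# Hypothetical example: students' answers comparing projects A, B, C.
votes = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "B")]
print(copeland_ranking(votes))  # "A" ranks first
```

A rule like this reduces ties relative to raw numeric grades because even small net-win differences separate projects, which is consistent with the reduction in ties the abstract reports.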