Most MOOC platforms either use simple schemes for aggregating peer grades, e.g., taking the mean or the median, or apply methodologies that considerably increase students' workload, such as calibrated peer review. To reduce the error between the instructor's grades and the students' aggregated scores in the simple schemes, without requiring demanding grading calibration phases, some proposals derive per-student weights and use them to produce a weighted aggregation of the peer grades. In this work, and in contrast to most previous studies, we analyse the use of students' engagement and performance measures to compute personalized weights, and we study the validity of the aggregated scores produced by two common functions, the mean and the median, together with two others from the information retrieval field, namely the geometric and harmonic means. To test this procedure we analysed data from a MOOC about Philosophy. The course had 1059 registered students, 91 of whom participated in a peer review process that consisted of writing an essay and rating three of their peers' essays using a rubric. We computed and compared the aggregated scores obtained with the weighted and unweighted versions of each function. Our results show that the validity of the aggregated scores and their correlation with the instructors' grades improve over plain peer grading when the median is used and the weights are computed from students' performance in the chapter tests.
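The abstract names four aggregation functions applied with per-reviewer weights. As a minimal sketch of what weighted versions of these functions look like, the following Python snippet implements the weighted arithmetic mean, weighted median, weighted geometric mean, and weighted harmonic mean; the grade and weight values are hypothetical, grades are assumed to lie on a positive numeric scale, and the exact formula the authors use to derive weights from engagement and chapter-test performance is not specified here.

```python
import numpy as np

def weighted_mean(grades, weights):
    """Weighted arithmetic mean of peer grades."""
    return float(np.average(grades, weights=weights))

def weighted_median(grades, weights):
    """Weighted median: smallest grade whose cumulative weight
    reaches half of the total weight."""
    order = np.argsort(grades)
    g = np.asarray(grades, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    return float(g[np.searchsorted(cum, 0.5 * cum[-1])])

def weighted_geometric_mean(grades, weights):
    """Weighted geometric mean; grades must be strictly positive."""
    w = np.asarray(weights, dtype=float)
    return float(np.exp(np.sum(w * np.log(grades)) / w.sum()))

def weighted_harmonic_mean(grades, weights):
    """Weighted harmonic mean; grades must be strictly positive."""
    w = np.asarray(weights, dtype=float)
    return float(w.sum() / np.sum(w / np.asarray(grades, dtype=float)))

# Hypothetical example: three peer grades for one essay, with
# weights standing in for each reviewer's reliability (e.g., derived
# from chapter-test performance, per the paper's approach).
grades = [6.0, 8.0, 9.0]
weights = [0.9, 0.5, 0.7]
for f in (weighted_mean, weighted_median,
          weighted_geometric_mean, weighted_harmonic_mean):
    print(f.__name__, round(f(grades, weights), 2))
```

In this toy run the weighted median returns 8.0, since the cumulative weight first reaches half of the total at the middle grade; the unweighted versions are recovered by setting all weights equal.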