We conducted a study of engagement and achievement on a first-year undergraduate programming module that used an online learning environment in which tasks generate automated feedback. Students could also access human feedback in traditional labs. We gathered quantitative data on engagement and achievement, which allowed us to split the cohort into six groups. After the end of the module, we interviewed students to produce qualitative data on their perceptions of what feedback is, how useful it is, the uses they make of it, and how it bears on engagement. A general finding was that human and automated feedback are different but complementary; however, feedback needs differ by group. Our findings imply (1) that a blended human-automated feedback approach improves engagement, and (2) that this approach needs to be differentiated according to the type of student. We present implications for the design of feedback for programming modules.