Students sometimes produce code that works but that they themselves do not comprehend. For example, a student may apply a poorly understood code template, stumble upon a working solution through trial and error, or plagiarize. Similarly, passing an automated functional assessment does not guarantee that the student understands their code. One way to tackle these issues is to probe students' comprehension by asking them questions about their own programs. We propose an approach to automatically generate questions about student-written program code. We moreover propose a use case for such questions in the context of automatic assessment systems: after a student's program passes unit tests, the system poses questions to the student about the code. We suggest that these questions can enhance assessment systems, deepen student learning by acting as self-explanation prompts, and provide a window into students' program comprehension. This discussion paper sets an agenda for future technical development and empirical research on the topic.