Most learners fail to develop deep text comprehension when reading textbooks passively. Posing questions about what learners have read is a well-established way of fostering their text comprehension. However, many textbooks lack self-assessment questions because authoring them is time-consuming and expensive. Automatic question generators may alleviate this scarcity by generating pedagogically sound questions. However, generating questions automatically poses linguistic and pedagogical challenges: What should we ask? And how should the question be phrased automatically? We address these challenges with an automatic question generator grounded in learning theory. The paper introduces a novel, pedagogically meaningful content selection mechanism that finds question-worthy sentences and answers in arbitrary textbook content. We conducted an empirical evaluation study with educational experts, annotating 150 generated questions across six different domains. The results indicate a high linguistic quality of the generated questions. Furthermore, the evaluation results imply that the majority of the generated questions ask about central information in the given text and may foster text comprehension in specific learning scenarios.