To assess a learner's knowledge proficiency, multiple-choice questions are an efficient and widely used format in standardized tests. However, composing a multiple-choice question, and especially constructing its distractors, is quite challenging. The distractors must be both incorrect and plausible enough to confuse learners who have not mastered the knowledge. Currently, distractors are written by domain experts, which is both expensive and time-consuming. This motivates automatic distractor generation, which can benefit standardized tests across a wide range of domains. In this paper, we propose a question- and answer-guided distractor generation framework (EDGE) to automate distractor generation. EDGE consists of three major modules: (1) the Reforming Question Module and the Reforming Passage Module apply gate layers to guarantee the inherent incorrectness of the generated distractors; (2) the Distractor Generator Module applies an attention mechanism to control the level of plausibility. Experimental results on a large-scale public dataset demonstrate that our model significantly outperforms existing models and achieves a new state of the art.
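The abstract names two mechanisms without detailing them: gate layers that condition the question/passage encodings on the answer, and decoder-side attention over the (gated) passage. The following is a minimal, hedged sketch of what such components could look like; it is not the authors' implementation, and all module names, dimensions, and design choices here are illustrative assumptions.

```python
# Illustrative sketch only (assumed PyTorch-style components, not the EDGE code):
# (1) a gate layer that down-weights answer-revealing information in token
#     representations, and (2) an additive-attention step a decoder could use
#     to weight passage tokens while generating a distractor.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GateLayer(nn.Module):
    """Gates each token representation conditioned on an answer encoding."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, tokens: torch.Tensor, answer: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, hidden); answer: (batch, hidden)
        answer_exp = answer.unsqueeze(1).expand_as(tokens)
        g = torch.sigmoid(self.gate(torch.cat([tokens, answer_exp], dim=-1)))
        return g * tokens  # element-wise gate suppresses answer-related features


class AttentionStep(nn.Module):
    """One additive-attention step over a memory of encoded passage tokens."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.attn = nn.Linear(2 * hidden_size, hidden_size)
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, dec_state: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # dec_state: (batch, hidden); memory: (batch, seq_len, hidden)
        dec_exp = dec_state.unsqueeze(1).expand_as(memory)
        scores = self.score(torch.tanh(self.attn(torch.cat([memory, dec_exp], dim=-1))))
        weights = F.softmax(scores, dim=1)        # (batch, seq_len, 1)
        context = (weights * memory).sum(dim=1)   # (batch, hidden)
        return context


if __name__ == "__main__":
    batch, seq_len, hidden = 2, 6, 8
    passage = torch.randn(batch, seq_len, hidden)
    answer = torch.randn(batch, hidden)
    dec_state = torch.randn(batch, hidden)

    gated_passage = GateLayer(hidden)(passage, answer)
    context = AttentionStep(hidden)(dec_state, gated_passage)
    print(context.shape)  # torch.Size([2, 8])
```

In this sketch, the gate corresponds to the abstract's goal of "inherent incorrectness" (filtering out content that would reproduce the correct answer), while the attention step corresponds to controlling plausibility by grounding generation in the passage; the actual EDGE architecture may differ.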