Prerequisite chain learning helps people acquire new knowledge efficiently. While people may quickly determine learning paths over concepts in a domain they are familiar with, finding such paths in other domains can be challenging. We introduce Domain-Adversarial Variational Graph Autoencoders (DAVGAE) to solve this cross-domain prerequisite chain learning task efficiently. Our novel model consists of a variational graph autoencoder (VGAE) and a domain discriminator. The VGAE is trained to predict concept relations through link prediction, while the domain discriminator takes both source- and target-domain data as input and is trained to predict domain labels. Most importantly, unlike the current state-of-the-art model, our method requires only simple homogeneous graphs as input. We evaluate our model on the LectureBankCD dataset, and results show that it outperforms recent graph-based benchmarks while using only 1/10 of the graph scale and 1/3 of the computation time.
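To make the described architecture concrete, the sketch below is a minimal, illustrative PyTorch implementation of a VGAE encoder with an inner-product link-prediction decoder and a domain discriminator attached through a gradient-reversal layer. It assumes dense node features and a normalized adjacency matrix as input; all names (DAVGAESketch, GCNLayer, adj_norm, lambd) are our own illustrative choices, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class GCNLayer(nn.Module):
    """Dense graph convolution: H' = A_hat @ H @ W (activation applied by the caller)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_norm):
        return adj_norm @ self.linear(x)


class DAVGAESketch(nn.Module):
    """VGAE for concept link prediction plus a domain discriminator on the latent codes."""
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.gcn_shared = GCNLayer(in_dim, hid_dim)
        self.gcn_mu = GCNLayer(hid_dim, lat_dim)
        self.gcn_logvar = GCNLayer(hid_dim, lat_dim)
        self.domain_clf = nn.Sequential(
            nn.Linear(lat_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 2)
        )

    def encode(self, x, adj_norm):
        h = F.relu(self.gcn_shared(x, adj_norm))
        mu = self.gcn_mu(h, adj_norm)
        logvar = self.gcn_logvar(h, adj_norm)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

    def forward(self, x, adj_norm, lambd=1.0):
        z, mu, logvar = self.encode(x, adj_norm)
        adj_logits = z @ z.t()                    # inner-product decoder for link prediction
        z_rev = GradientReversal.apply(z, lambd)  # adversarial coupling to the discriminator
        domain_logits = self.domain_clf(z_rev)    # predicts source vs. target domain
        return adj_logits, domain_logits, mu, logvar
```

In such a setup, adj_logits would be trained with a binary cross-entropy reconstruction loss plus the VGAE's KL term against the concept-relation graph, while domain_logits would be trained with a cross-entropy loss over source/target labels; the gradient reversal pushes the encoder toward domain-invariant concept representations, which is the intended effect of the domain-adversarial component.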