We study knowledge-grounded dialogue generation with pre-trained language models. To leverage abundant external knowledge under the capacity constraint of such models, we propose equipping response generation defined by a pre-trained language model with a knowledge selection module, and present an unsupervised approach that jointly optimizes knowledge selection and response generation with unlabeled dialogues. Empirical results on two benchmarks indicate that our model significantly outperforms state-of-the-art methods in both automatic evaluation and human judgment.