Dialogue systems can leverage large pre-trained language models and knowledge to generate fluent and informative responses. However, these models are still prone to producing hallucinated responses that are not supported by the input source, which greatly hinders their application. The heterogeneity between external knowledge and dialogue context challenges representation learning and source integration, and further contributes to unfaithfulness. To handle this challenge and generate more faithful responses, this paper presents RHO ($\rho$), which utilizes the representations of linked entities and relation predicates from a knowledge graph (KG). We propose (1) local knowledge grounding to combine textual embeddings with the corresponding KG embeddings; and (2) global knowledge grounding to equip RHO with multi-hop reasoning abilities via the attention mechanism. In addition, we devise a response re-ranking technique based on walks over KG sub-graphs for better conversational reasoning. Experimental results on OpenDialKG show that our approach outperforms state-of-the-art methods by a large margin on both automatic and human evaluation, especially in hallucination reduction (17.54% in FeQA).
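The local knowledge grounding idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes additive fusion, where a KG entity embedding is mapped into the text embedding space by a learned projection matrix and added to the token embedding of the linked entity mention. The function name `local_knowledge_grounding` and the projection-plus-sum design are illustrative assumptions.

```python
import numpy as np

def local_knowledge_grounding(token_emb, kg_emb, proj):
    """Fuse a token embedding with its linked KG embedding.

    token_emb: (d_text,) textual embedding of an entity mention.
    kg_emb:    (d_kg,)   KG embedding of the linked entity/predicate.
    proj:      (d_text, d_kg) learned projection into the text space.

    Illustrative additive fusion; the actual model may use a
    different combination operator.
    """
    return token_emb + proj @ kg_emb

# Toy example with random embeddings.
rng = np.random.default_rng(0)
d_text, d_kg = 8, 4
token_emb = rng.normal(size=d_text)
kg_emb = rng.normal(size=d_kg)
proj = rng.normal(size=(d_text, d_kg))

grounded = local_knowledge_grounding(token_emb, kg_emb, proj)
print(grounded.shape)
```

In a full model, `proj` would be trained jointly with the generator so that KG structure (entities and relation predicates) informs token representations before decoding.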