Pretrained language models (LMs) do not capture factual knowledge well. This has motivated a number of knowledge integration (KI) methods that aim to incorporate external knowledge into pretrained LMs. Although KI methods show some performance gains over vanilla LMs, the inner workings of these methods are not well understood. For instance, it is unclear how and what kind of knowledge is effectively integrated into these models, and whether such integration may lead to catastrophic forgetting of already learned knowledge. This paper revisits the KI process in these models from an information-theoretic view and shows that KI can be interpreted using a graph convolution operation. We propose a probe model called \textit{Graph Convolution Simulator} (GCS) for interpreting knowledge-enhanced LMs and exposing what kind of knowledge is integrated into these models. We conduct experiments to verify that our GCS can indeed be used to correctly interpret the KI process, and we use it to analyze two well-known knowledge-enhanced LMs, ERNIE and K-Adapter, finding that only a small amount of factual knowledge is integrated into them. We stratify knowledge in terms of various relation types and find that ERNIE and K-Adapter integrate different kinds of knowledge to different extents. Our analysis also shows that simply increasing the size of the KI corpus may not lead to better KI; more fundamental advances may be needed.
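For reference, the ``graph convolution operation'' mentioned above is, in its standard form (e.g., the layer-wise propagation rule of graph convolutional networks in the style of Kipf and Welling), written as below; the exact formulation used by GCS may differ, and the symbols here are only illustrative:
\begin{equation*}
H^{(l+1)} = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\,\tilde{A}\,\tilde{D}^{-\frac{1}{2}}\, H^{(l)}\, W^{(l)}\right), \qquad \tilde{A} = A + I,
\end{equation*}
where $A$ is the adjacency matrix of the knowledge graph, $\tilde{D}$ is the degree matrix of $\tilde{A}$, $H^{(l)}$ collects the entity representations at layer $l$, $W^{(l)}$ is a learned weight matrix, and $\sigma$ is a nonlinearity.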