Knowledge-enhanced Pre-trained Language Models (PLMs), which aim to incorporate factual knowledge into PLMs, have recently received significant attention. However, most existing methods modify the internal structures of fixed types of PLMs by stacking complicated modules, and they introduce redundant and irrelevant factual knowledge from knowledge bases (KBs). In this paper, to address these problems, we introduce a knowledge prompting paradigm and further propose a knowledge-prompting-based PLM framework, KP-PLM. This framework can be flexibly combined with existing mainstream PLMs. Specifically, we first construct a knowledge sub-graph from KBs for each context. Then we design multiple continuous prompt rules and transform the knowledge sub-graph into natural language prompts. To further leverage the factual knowledge in these prompts, we propose two novel knowledge-aware self-supervised tasks: prompt relevance inspection and masked prompt modeling. Extensive experiments on multiple natural language understanding (NLU) tasks show the superiority of KP-PLM over other state-of-the-art methods in both full-resource and low-resource settings.
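To make the knowledge prompting paradigm concrete, the following is a minimal illustrative sketch (not the authors' implementation) of verbalizing a retrieved knowledge sub-graph into a natural language prompt. The relation templates, function names, and example triples are hypothetical simplifications; KP-PLM's actual prompt rules are defined in the paper.

```python
# Illustrative sketch: turning (head, relation, tail) triples from a knowledge
# sub-graph into a natural-language prompt string. All templates and names
# here are assumptions for exposition, not the KP-PLM implementation.

from typing import List, Tuple

# Hypothetical templates mapping KB relations to natural-language patterns.
RELATION_TEMPLATES = {
    "birthplace": "{head} was born in {tail}.",
    "occupation": "{head} works as a {tail}.",
    "located_in": "{head} is located in {tail}.",
}


def subgraph_to_prompt(triples: List[Tuple[str, str, str]]) -> str:
    """Verbalize knowledge triples into a single prompt appended to the context."""
    sentences = []
    for head, relation, tail in triples:
        # Fall back to a generic pattern for relations without a template.
        template = RELATION_TEMPLATES.get(relation, "{head} {relation} {tail}.")
        sentences.append(template.format(head=head, relation=relation, tail=tail))
    return " ".join(sentences)


if __name__ == "__main__":
    # A toy sub-graph retrieved for a context mentioning "Mozart".
    subgraph = [
        ("Mozart", "birthplace", "Salzburg"),
        ("Mozart", "occupation", "composer"),
    ]
    print(subgraph_to_prompt(subgraph))
    # -> "Mozart was born in Salzburg. Mozart works as a composer."
```

Prompts produced this way can then be fed to the PLM together with the original context, which is what the knowledge-aware self-supervised tasks above operate on.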