Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), and it is commonly solved by fine-tuning the span-selection heads of Pre-trained Language Models (PLMs). However, most existing approaches for MRC tend to perform poorly in few-shot learning scenarios. To address this issue, we propose a novel framework named Knowledge Enhanced Contrastive Prompt-tuning (KECP). Instead of adding pointer heads to PLMs, we introduce a new paradigm for EQA that transforms the task into a non-autoregressive Masked Language Modeling (MLM) generation problem. Meanwhile, rich semantics from an external knowledge base (KB) and the passage context are exploited to enhance the query representations. In addition, to further boost the performance of PLMs, we jointly train the model with MLM and contrastive learning objectives. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art approaches in few-shot settings by a large margin.
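The two ideas sketched in the abstract, casting EQA as cloze-style MLM generation over a prompt and jointly optimizing an MLM loss with a contrastive term, can be illustrated concretely. The snippet below is a minimal sketch, not the authors' implementation: the prompt template, the distractor span, the margin-style contrastive term, and the loss weight are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the KECP code): extractive QA recast as
# filling [MASK] tokens with an MLM head, trained jointly with a toy
# contrastive objective.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

passage = "KECP is a contrastive prompt-tuning framework for extractive QA."
query = "What kind of framework is KECP?"
answer = "contrastive prompt-tuning framework"

# Cloze-style prompt: the answer span is replaced by [MASK] tokens that the
# MLM head predicts non-autoregressively (all positions at once).
answer_ids = tokenizer(answer, add_special_tokens=False).input_ids
prompt = f"{query} Answer: {' '.join([tokenizer.mask_token] * len(answer_ids))} {passage}"

inputs = tokenizer(prompt, return_tensors="pt")
labels = torch.full_like(inputs.input_ids, -100)  # ignore non-mask positions in the loss
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)
labels[mask_pos] = torch.tensor(answer_ids)

out = model(**inputs, labels=labels)
mlm_loss = out.loss

# Toy contrastive term: pull the averaged masked-position representation toward
# the gold answer embedding and push it away from a distractor span embedding.
hidden = model.bert(**inputs)[0]                  # (1, seq_len, dim)
anchor = hidden[mask_pos].mean(dim=0)             # mean over masked positions
emb = model.get_input_embeddings()
pos = emb(torch.tensor(answer_ids)).mean(dim=0)
neg_ids = tokenizer("machine reading comprehension", add_special_tokens=False).input_ids
neg = emb(torch.tensor(neg_ids)).mean(dim=0)
pos_sim = F.cosine_similarity(anchor, pos, dim=0)
neg_sim = F.cosine_similarity(anchor, neg, dim=0)
contrastive_loss = F.relu(1.0 - pos_sim + neg_sim)  # margin-style objective

loss = mlm_loss + 0.1 * contrastive_loss          # assumed loss weighting
loss.backward()
```

The predicted answer would be read off by decoding the MLM logits at the masked positions; the knowledge-enhanced query representations described in the abstract would additionally modify the encoder inputs, which this sketch does not attempt to model.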