Implicit event argument extraction (EAE) aims to identify arguments that may be scattered across a document. Most previous work focuses on learning the direct relations between arguments and the given trigger, while implicit relations involving long-range dependencies are not well studied. Moreover, recent neural-network-based approaches rely on a large amount of labeled data for training, which is often unavailable due to the high cost of labeling. In this paper, we propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE through four learning stages. The stages are defined according to the relations with the trigger node in a semantic graph, which captures the long-range dependencies between arguments and the trigger well. In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models (PLMs) in each stage, where the prompt templates are adapted to the learning progress to enhance reasoning about arguments. Experimental results on two well-known benchmark datasets show the clear advantages of our proposed approach. In particular, we outperform the state-of-the-art models in both fully-supervised and low-data scenarios.