Event argument extraction has long been studied as a sequential prediction problem with extractive methods, tackling each argument in isolation. Although recent work proposes generation-based methods to capture cross-argument dependencies, they require generating and post-processing a complicated target sequence (template). Motivated by these observations and by recent pretrained language models' ability to learn from demonstrations, we propose a retrieval-augmented generative question answering model (R-GQA) for event argument extraction. It retrieves the most similar QA pair and augments it as a prompt to the current example's context, then decodes the arguments as answers. Our approach substantially outperforms prior methods across various settings (i.e., fully supervised, domain transfer, and few-shot learning). Finally, we propose a clustering-based sampling strategy (JointEnc) and conduct a thorough analysis of how different strategies influence few-shot learning performance. The implementations are available at https://github.com/xinyadu/RGQA.
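To make the retrieve-then-prompt idea concrete, below is a minimal sketch of the pipeline described above. It assumes a simple TF-IDF retriever over a store of (question, context, answer) training triples and an unspecified seq2seq generator; the `train_store` data, function names, and prompt format are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of retrieval-augmented prompting for generative QA-style
# argument extraction. Retrieval backend and prompt layout are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical training store: (question, context, answer) triples.
train_store = [
    ("Who is the attacker?", "Rebels shelled the town overnight.", "Rebels"),
    ("Where did the attack take place?", "Rebels shelled the town overnight.", "the town"),
]

def retrieve_demonstration(question, context):
    """Return the stored QA pair most similar to the current (question, context)."""
    query = question + " " + context
    corpus = [q + " " + c for q, c, _ in train_store]
    vec = TfidfVectorizer().fit(corpus + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(corpus))[0]
    return train_store[sims.argmax()]

def build_prompt(question, context):
    """Prepend the retrieved demonstration QA pair to the current example."""
    dq, dc, da = retrieve_demonstration(question, context)
    return (f"question: {dq} context: {dc} answer: {da} "
            f"question: {question} context: {context} answer:")

# The resulting prompt would be fed to a generative QA model (e.g., a
# BART-style seq2seq model), which decodes the argument span as the answer.
prompt = build_prompt("Who is the attacker?",
                      "Militants fired rockets at the base on Tuesday.")
print(prompt)
```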