Eliciting knowledge contained in pre-trained language models via prompt-based learning has shown great potential in many natural language processing tasks, such as text classification and generation. However, its application to more complex tasks such as event extraction remains less studied, since prompt design is not straightforward given the complicated event types and arguments. In this paper, we explore eliciting knowledge from pre-trained language models for event trigger detection and argument extraction. Specifically, we present various joint trigger/argument prompt methods, which elicit more complementary knowledge by modeling the interactions among different triggers or arguments. Experimental results on the ACE2005 benchmark dataset demonstrate the clear advantages of our proposed approach. In particular, our approach outperforms recent advanced methods in the few-shot scenario, where only a few samples are available for training.