The dialogue-based relation extraction (DialogRE) task aims to predict the relations between argument pairs that appear in a dialogue. Most previous studies fine-tune pre-trained language models (PLMs) with extensive additional features to compensate for the low information density of multi-speaker dialogue. To effectively exploit the inherent knowledge of PLMs without extra layers, and to capture the semantic cues about the relation between arguments that are scattered across the dialogue, we propose a Guiding model with RelAtional Semantics using Prompt (GRASP). We adopt a prompt-based fine-tuning approach and capture the relational semantic clues of a given dialogue with 1) an argument-aware prompt marker strategy and 2) a relational clue detection task. In our experiments, GRASP achieves state-of-the-art performance in terms of both F1 and F1c scores on the DialogRE dataset, even though it leverages only PLMs without adding any extra layers.