While pre-trained language models (PTLMs) have achieved notable success on many NLP tasks, they still struggle with tasks that require event temporal reasoning, which is essential for event-centric applications. We present a continual pre-training approach that equips PTLMs with targeted knowledge about event temporal relations. We design self-supervised learning objectives to recover masked-out event and temporal indicators and to discriminate sentences from their corrupted counterparts (where event or temporal indicators are replaced). By further pre-training a PTLM with these objectives jointly, we reinforce its attention to event and temporal information, yielding enhanced capability on event temporal reasoning. This effective continual pre-training framework for event temporal reasoning (ECONET) improves the PTLMs' fine-tuning performance across five relation extraction and question answering tasks and achieves new or on-par state-of-the-art performance on most of our downstream tasks.
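As a rough illustration of the two objectives described above, the sketch below (not the authors' released code) masks temporal indicators for recovery and pairs a sentence with a corrupted counterpart for discrimination, assuming a RoBERTa backbone via HuggingFace transformers. The TEMPORAL_INDICATORS list, the example sentence, and the single-swap corruption step are illustrative simplifications; the same recipe would apply to event tokens.

```python
# Minimal sketch of ECONET-style continual pre-training objectives.
# Assumptions (not from the paper): a toy indicator list, roberta-base
# as the PTLM, and a one-word corruption for the discrimination pair.
import torch
from transformers import (RobertaTokenizer, RobertaForMaskedLM,
                          RobertaForSequenceClassification)

TEMPORAL_INDICATORS = ["before", "after", "during", "while", "until"]  # toy subset

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
mlm = RobertaForMaskedLM.from_pretrained("roberta-base")
disc = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

sentence = "The flood hit the town before the rescue teams arrived."

# Objective 1: recover masked-out temporal indicators.
indicator_ids = {tokenizer.encode(" " + w, add_special_tokens=False)[0]
                 for w in TEMPORAL_INDICATORS}
enc = tokenizer(sentence, return_tensors="pt")
input_ids = enc["input_ids"].clone()
labels = enc["input_ids"].clone()
is_indicator = torch.isin(input_ids, torch.tensor(sorted(indicator_ids)))
input_ids[is_indicator] = tokenizer.mask_token_id   # hide the indicator tokens
labels[~is_indicator] = -100                        # score only the masked slots
recovery_loss = mlm(input_ids=input_ids,
                    attention_mask=enc["attention_mask"],
                    labels=labels).loss

# Objective 2: discriminate original sentences from corrupted counterparts,
# here produced by swapping in a wrong temporal indicator.
corrupted = sentence.replace("before", "after")
batch = tokenizer([sentence, corrupted], return_tensors="pt", padding=True)
discrimination_loss = disc(**batch, labels=torch.tensor([0, 1])).loss

# Joint continual pre-training loss over both self-supervised objectives.
loss = recovery_loss + discrimination_loss
```

In this sketch the two objectives are simply summed; how the real framework weights and schedules them is specified in the paper, not here.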