Frame semantic parsing is a complex problem that comprises multiple underlying subtasks. Recent approaches have employed joint learning of subtasks (such as predicate and argument detection) and multi-task learning of related tasks (such as syntactic and semantic parsing). In this paper, we explore multi-task learning of all subtasks with transformer-based models. We show that a purely generative encoder-decoder architecture handily beats the previous state of the art in FrameNet 1.7 parsing, and that a mixed-decoding multi-task approach achieves even better performance. Finally, we show that the multi-task model also outperforms recent state-of-the-art systems for PropBank SRL parsing on the CoNLL 2012 benchmark.