As Abstract Meaning Representation (AMR) implicitly involves compound semantic annotations, we hypothesize that auxiliary tasks which are semantically or formally related can better enhance AMR parsing. We find that: 1) Semantic role labeling (SRL) and dependency parsing (DP) bring more performance gain than other tasks, e.g., machine translation (MT) and summarization, in the text-to-AMR transition, even with much less data. 2) To better fit AMR, data from auxiliary tasks should be properly "AMRized" into PseudoAMR before training; knowledge from shallow-level parsing tasks can be transferred to AMR parsing more effectively through structure transformation. 3) Intermediate-task learning is a better paradigm for introducing auxiliary tasks to AMR parsing than multitask learning. From an empirical perspective, we propose a principled method of involving auxiliary tasks to boost AMR parsing. Extensive experiments show that our method achieves new state-of-the-art performance on different benchmarks, especially in topology-related scores.
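To make the "AMRize" step concrete, below is a minimal sketch of converting a dependency parse into a PseudoAMR-style linearization. The abstract does not specify the actual transformation rules, so the node variables, relation names (e.g. `:nsubj`), and the PENMAN-like bracketing here are illustrative assumptions rather than the paper's procedure.

```python
# Sketch: linearize a dependency tree into a PseudoAMR-like string.
# Assumption: a PENMAN-style bracketed notation mirroring AMR graphs;
# the paper's real AMRization rules may differ.

from typing import List


def dep_to_pseudo_amr(tokens: List[str], heads: List[int], rels: List[str]) -> str:
    """Render a dependency tree (heads are 0-indexed; -1 marks the root)
    as a bracketed, AMR-like graph string."""
    children = {i: [] for i in range(len(tokens))}
    root = 0
    for i, h in enumerate(heads):
        if h == -1:
            root = i
        else:
            children[h].append(i)

    def render(i: int) -> str:
        node = f"(z{i} / {tokens[i].lower()}"
        for c in children[i]:
            node += f" :{rels[c]} {render(c)}"
        return node + ")"

    return render(root)


# Example: "The boy wants to go"
tokens = ["The", "boy", "wants", "to", "go"]
heads = [1, 2, -1, 4, 2]  # each token's head index; -1 is the root
rels = ["det", "nsubj", "root", "mark", "xcomp"]
print(dep_to_pseudo_amr(tokens, heads, rels))
# (z2 / wants :nsubj (z1 / boy :det (z0 / the)) :xcomp (z4 / go :mark (z3 / to)))
```

The intuition this sketch captures is that a syntactic tree, once expressed in AMR's notation, becomes a structurally compatible training signal for a sequence-to-graph parser during intermediate-task learning.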