Most, if not all, forms of ellipsis (e.g., so does Mary) are similar to reading comprehension questions (what does Mary do), in that resolving them requires identifying an appropriate text span in the preceding discourse. Following this observation, we present an alternative approach to English ellipsis resolution that relies on architectures developed for question answering (QA). We present both single-task models and joint models trained on auxiliary QA and coreference resolution datasets, clearly outperforming the current state of the art for Sluice Ellipsis (from 70.00 to 86.01 F1) and Verb Phrase Ellipsis (from 72.89 to 78.66 F1).
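The reframing above can be sketched as converting each ellipsis into a SQuAD-style extractive QA instance, where the antecedent is a character span of the preceding discourse. This is a minimal illustration only; the function names, the question template, and the hard-coded offsets are hypothetical, not the authors' implementation:

```python
# Illustrative sketch: frame ellipsis resolution as extractive QA.
# A trained span-extraction QA model would predict the offsets;
# here they are hard-coded for demonstration.

def ellipsis_to_qa(context: str, subject: str) -> dict:
    """Turn an ellipsis like 'So does Mary' into a QA instance
    whose answer is a span of the preceding discourse."""
    question = f"What does {subject} do?"
    return {"context": context, "question": question}

def extract_span(context: str, start: int, end: int) -> str:
    """The predicted (start, end) character offsets pick out
    the antecedent directly from the context."""
    return context[start:end]

context = "John reads the newspaper every morning. So does Mary."
qa = ellipsis_to_qa(context, "Mary")
# A QA model would predict offsets pointing at the VP antecedent:
antecedent = extract_span(context, 5, 38)
# → "reads the newspaper every morning"
```

In this framing, any off-the-shelf extractive QA architecture can be trained on ellipsis data without task-specific machinery, which is what makes joint training with auxiliary QA datasets straightforward.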