Since the first end-to-end neural coreference resolution model was introduced, many extensions to it have been proposed, ranging from higher-order inference to directly optimizing evaluation metrics with reinforcement learning. Despite improving coreference resolution performance by a large margin, these extensions add considerable complexity to the original model. Motivated by this observation and by recent advances in pre-trained Transformer language models, we propose a simple yet effective baseline for coreference resolution. Our model is a simplified version of the original neural coreference resolution model, yet it achieves impressive performance, outperforming all recent extensions on the public English OntoNotes benchmark. Our work provides evidence that the complexity of existing or newly proposed models needs to be carefully justified, as introducing a conceptual or practical simplification to an existing model can still yield competitive results.
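To make the setting concrete, the sketch below illustrates the span-pair scoring scheme of the original end-to-end model (Lee et al., 2017) that such baselines build on: a span pair (i, j) is scored as s(i, j) = s_m(i) + s_m(j) + s_a(i, j), combining two mention scores with a pairwise antecedent score. This is a minimal illustration, not the authors' exact architecture; the class name, hidden size, and the use of simple linear heads over pre-computed span embeddings are assumptions for brevity.

```python
import torch
import torch.nn as nn


class CorefScorer(nn.Module):
    """Minimal sketch of e2e coreference scoring (Lee et al., 2017):
    s(i, j) = s_m(i) + s_m(j) + s_a(i, j), where s_m scores how
    mention-like a span is and s_a scores the candidate pair.
    Hypothetical simplification: linear heads instead of FFNNs."""

    def __init__(self, hidden: int = 768):
        super().__init__()
        self.mention_score = nn.Linear(hidden, 1)      # s_m
        self.pair_score = nn.Linear(2 * hidden, 1)     # s_a

    def forward(self, spans: torch.Tensor) -> torch.Tensor:
        """spans: (n, hidden) span embeddings, e.g. pooled from a
        pre-trained Transformer. Returns an (n, n) antecedent score
        matrix where entry (i, j) scores span j as antecedent of i."""
        n = spans.size(0)
        sm = self.mention_score(spans).squeeze(-1)     # (n,)
        # Build all ordered pair representations [g_i ; g_j].
        pairs = torch.cat(
            [spans.unsqueeze(1).expand(n, n, -1),
             spans.unsqueeze(0).expand(n, n, -1)], dim=-1)
        sa = self.pair_score(pairs).squeeze(-1)        # (n, n)
        scores = sm.unsqueeze(1) + sm.unsqueeze(0) + sa
        # A span may only take an earlier span as antecedent (j < i).
        mask = torch.tril(torch.ones(n, n, dtype=torch.bool), diagonal=-1)
        return scores.masked_fill(~mask, float("-inf"))


# Usage: score 5 candidate spans with 768-dim embeddings.
scorer = CorefScorer(hidden=768)
antecedent_scores = scorer(torch.randn(5, 768))  # shape (5, 5)
```

The extensions discussed above (higher-order inference, reinforcement-learning objectives) refine or replace parts of this scoring pipeline; the baseline proposed here instead keeps the pipeline simple and relies on stronger pre-trained representations.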