Previously, neural methods in grammatical error correction (GEC) did not reach state-of-the-art results compared to phrase-based statistical machine translation (SMT) baselines. We demonstrate parallels between neural GEC and low-resource neural MT and successfully adapt several methods from low-resource MT to neural GEC. We further establish guidelines for trustworthy results in neural GEC and propose a set of model-independent methods for neural GEC that can be easily applied in most GEC settings. The proposed methods include adding source-side noise, domain-adaptation techniques, a GEC-specific training objective, transfer learning with monolingual data, and ensembling of independently trained GEC models and language models. Combined, these methods yield neural GEC models that set a new state of the art, outperforming the previously best neural GEC systems by more than 10% M$^2$ on the CoNLL-2014 benchmark and by 5.9% on the JFLEG test set. Non-neural state-of-the-art systems are outperformed by more than 2% on the CoNLL-2014 benchmark and by 4% on JFLEG.
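As a minimal sketch of the source-side noising idea mentioned above: training sources can be perturbed with random word-level corruptions so the model sees more diverse errorful input. The specific operations (token dropout and adjacent swaps) and their rates here are illustrative assumptions, not the paper's exact noise recipe.

```python
import random

def add_source_noise(tokens, drop_p=0.1, swap_p=0.1, seed=None):
    """Perturb a source-side token sequence with random noise.

    drop_p: probability of deleting each token.
    swap_p: probability of swapping each adjacent token pair.
    Rates and operations are illustrative, not the paper's exact scheme.
    """
    rng = random.Random(seed)
    # Token dropout: keep each token with probability (1 - drop_p).
    out = [t for t in tokens if rng.random() > drop_p]
    # Adjacent swaps: walk left to right, swapping pairs at random.
    i = 0
    while i < len(out) - 1:
        if rng.random() < swap_p:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return out

# Example: noising an already-errorful learner sentence.
src = "He go to school every days".split()
noised = add_source_noise(src, seed=0)
```

In practice such noise would be applied on the fly during training, with the target side left untouched so the correction signal is preserved.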