Due to the lack of parallel data for the current Grammatical Error Correction (GEC) task, models based on the sequence-to-sequence framework cannot be adequately trained to achieve high performance. We propose two data synthesis methods that control the error rate and the ratio of error types in the synthetic data. The first approach corrupts each word in a monolingual corpus with a fixed probability, using replacement, insertion, and deletion operations. The second approach trains error generation models and then filters their decoding results. Experiments on different synthetic data show that an error rate of 40% and an even ratio of error types yield the largest improvement in model performance. Finally, we synthesize about 100 million training examples and achieve performance comparable to the state of the art, which uses twice as much data as we do.
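The first synthesis approach described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the uniform choice among the three operations, and the use of a flat replacement vocabulary are all assumptions made for the sketch.

```python
import random

def corrupt_sentence(words, vocab, error_rate=0.4, seed=None):
    """Corrupt each word with probability `error_rate`, choosing uniformly
    among replacement, insertion, and deletion (hypothetical helper; the
    paper's actual corruption procedure may weight operations differently)."""
    rng = random.Random(seed)
    out = []
    for w in words:
        if rng.random() < error_rate:
            op = rng.choice(["replace", "insert", "delete"])
            if op == "replace":
                # Substitute the word with a random vocabulary item.
                out.append(rng.choice(vocab))
            elif op == "insert":
                # Keep the word and insert a random word after it.
                out.append(w)
                out.append(rng.choice(vocab))
            # "delete": drop the word entirely (append nothing).
        else:
            out.append(w)
    return out
```

With `error_rate=0.0` the sentence passes through unchanged; raising the rate toward the paper's reported optimum of 40% corrupts roughly that fraction of tokens, and the clean/corrupted pair serves as a synthetic (target, source) training example.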