Document-level MT models are still far from satisfactory. Existing work extends the translation unit from a single sentence to multiple sentences. However, studies show that when the translation unit is further enlarged to a whole document, supervised training of Transformer can fail. In this paper, we find that such failure is not caused by overfitting, but by the model sticking around local minima during training. Our analysis shows that the increased complexity of target-to-source attention is a reason for the failure. As a solution, we propose G-Transformer, which introduces a locality assumption as an inductive bias into Transformer, reducing the hypothesis space of the attention from target to source. Experiments show that G-Transformer converges faster and more stably than Transformer, achieving new state-of-the-art BLEU scores in both non-pretraining and pretraining settings on three benchmark datasets.
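To make the locality assumption concrete, the sketch below restricts target-to-source cross-attention so that each target token only attends to source tokens of its aligned sentence group, which shrinks the attention hypothesis space as described above. This is only an illustrative NumPy sketch under assumed names (group_mask, grouped_cross_attention) and a simplified setting, not the paper's implementation.

```python
# Minimal sketch of locality-constrained target-to-source attention.
# Assumption: each token carries the index of the sentence (group) it belongs to,
# and a target token may only attend to source tokens of the same sentence index.
import numpy as np

def group_mask(tgt_groups, src_groups):
    """Boolean mask [tgt_len, src_len]: True where target and source tokens
    belong to the same sentence group."""
    return tgt_groups[:, None] == src_groups[None, :]

def grouped_cross_attention(Q, K, V, tgt_groups, src_groups):
    """Scaled dot-product attention with out-of-group source positions masked out."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                                   # [tgt_len, src_len]
    scores = np.where(group_mask(tgt_groups, src_groups), scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))   # softmax over source
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                              # [tgt_len, d]

# Toy usage: a 2-sentence document with 5 source and 4 target tokens.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
src_groups = np.array([0, 0, 0, 1, 1])   # sentence index of each source token
tgt_groups = np.array([0, 0, 1, 1])      # sentence index of each target token
print(grouped_cross_attention(Q, K, V, tgt_groups, src_groups).shape)  # (4, 8)
```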