One of the most popular methods for context-aware machine translation (MT) is to use separate encoders for the source sentence and the context, treating them as multiple sources for one target sentence. Recent work has cast doubt on whether these models actually learn useful signals from the context, or whether the improvements in automatic evaluation metrics are merely a side effect. We show that multi-source transformer models improve MT over standard transformer-base models even when empty lines are provided as context, but that translation quality improves significantly (1.51-2.65 BLEU) when a sufficient amount of correct context is provided. We also show that although randomly shuffled in-domain context can also improve over the baselines, correct context further improves translation quality, while random out-of-domain context degrades it.