Recently, mT5 - a massively multilingual version of T5 - leveraged a unified text-to-text format to attain state-of-the-art results on a wide variety of multilingual NLP tasks. In this paper, we investigate the impact of incorporating parallel data into mT5 pre-training. We find that multi-tasking language modeling with objectives such as machine translation during pre-training is a straightforward way to improve performance on downstream multilingual and cross-lingual tasks. However, the gains start to diminish as the model capacity increases, suggesting that parallel data might not be as essential for larger models. At the same time, even at larger model sizes, we find that pre-training with parallel data still provides benefits in the limited labelled data regime.