English pretrained language models, which make up the backbone of many modern NLP systems, require huge amounts of unlabeled training data. These models are generally presented as being trained only on English text, yet they have been found to transfer surprisingly well to other languages. We investigate this phenomenon and find that common English pretraining corpora actually contain significant amounts of non-English text: even when less than 1% of the data is non-English (well within the error rate of strong language classifiers), this still amounts to hundreds of millions of foreign-language tokens in large-scale datasets. We then demonstrate that even these small percentages of non-English data facilitate cross-lingual transfer for models trained on them, with target-language performance strongly correlated with the amount of in-language data seen during pretraining. In light of these findings, we argue that no model is truly monolingual when pretrained at scale, and that this should be taken into account when evaluating cross-lingual transfer.
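To make the scale of this contamination concrete, below is a minimal sketch of how one might estimate the non-English fraction of a pretraining corpus with an off-the-shelf language identifier (here the fastText lid.176 model). The file name, one-document-per-line sampling, whitespace token counting, and the round 200B-token corpus size are illustrative assumptions, not the paper's exact measurement pipeline.

```python
# Minimal sketch: estimate non-English contamination in a pretraining corpus
# sample with an off-the-shelf language identifier. The file path, sampling,
# and whitespace tokenization are illustrative assumptions only.
import fasttext

# Pretrained language-ID model from
# https://fasttext.cc/docs/en/language-identification.html
lid_model = fasttext.load_model("lid.176.bin")

total_tokens = 0
non_english_tokens = 0

# "corpus_sample.txt" is a hypothetical one-document-per-line sample of the corpus.
with open("corpus_sample.txt", encoding="utf-8") as f:
    for line in f:
        text = line.strip()
        if not text:
            continue
        n_tokens = len(text.split())  # crude whitespace token count
        (label,), _ = lid_model.predict(text.replace("\n", " "), k=1)
        total_tokens += n_tokens
        if label != "__label__en":
            non_english_tokens += n_tokens

fraction = non_english_tokens / max(total_tokens, 1)
print(f"non-English fraction of sample: {fraction:.4%}")

# Even a fraction below 1% of a large-scale corpus implies a very large
# absolute number of non-English tokens (200B tokens is an assumed size
# chosen purely for illustration).
print(f"implied non-English tokens in a 200B-token corpus: {fraction * 200e9:,.0f}")
```

The classifier itself has a nontrivial error rate at this scale, which is the abstract's point: contamination at or below 1% is hard to filter out reliably, yet still leaves hundreds of millions of non-English tokens in the training data.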