Low-resource languages, such as the Baltic languages, benefit from large multilingual language models (LMs) that possess remarkable cross-lingual transfer capabilities. This work is an interpretability and analysis study of cross-lingual representations in multilingual LMs. Previous works hypothesized that these LMs internally project representations of different languages into a shared cross-lingual space. However, the literature has produced contradictory results. In this paper, we revisit the prior work claiming that "BERT is not an Interlingua" and show that, with a different choice of pooling strategy or similarity index, different languages do converge to a shared space in such language models. We then perform a cross-lingual representational analysis of the two most popular multilingual LMs, employing 378 pairwise language comparisons. We discover that while most languages share a joint cross-lingual space, some do not. However, we observe that the Baltic languages do belong to that shared space.
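To make the methodological point concrete, the comparison of representations across languages can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: it assumes mean pooling over token embeddings and linear Centered Kernel Alignment (CKA) as the similarity index; the function names and shapes are illustrative.

```python
import numpy as np

def mean_pool(hidden_states, attention_mask):
    # Average token embeddings over non-padding positions.
    # hidden_states: (seq_len, dim); attention_mask: (seq_len,) of 0/1.
    mask = attention_mask[:, None].astype(hidden_states.dtype)
    return (hidden_states * mask).sum(axis=0) / mask.sum()

def linear_cka(X, Y):
    # Linear CKA between two (n_sentences, dim) representation
    # matrices, e.g. pooled embeddings of parallel sentences in
    # two languages. Returns a value in [0, 1]; higher means the
    # two representation spaces are more similar.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)
```

A full pairwise analysis would apply `linear_cka` to every pair of languages; with 28 languages this yields 28 × 27 / 2 = 378 comparisons, matching the count above. Linear CKA is invariant to orthogonal rotations of the embedding space, which makes it suitable for comparing representations that are only defined up to such transformations.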