Bilingual Word Embeddings (BWEs) are one of the cornerstones of cross-lingual transfer of NLP models. They can be built using only monolingual corpora without supervision, which has led to numerous works focusing on unsupervised BWEs. However, most current approaches to building unsupervised BWEs do not compare their results with methods based on easy-to-access cross-lingual signals. In this paper, we argue that such signals should always be considered when developing unsupervised BWE methods. The two approaches we find most effective are: 1) using identical words as seed lexicons (which unsupervised approaches incorrectly assume are unavailable for orthographically distinct language pairs) and 2) combining such lexicons with pairs extracted by matching romanized versions of words under an edit distance threshold. We experiment on thirteen non-Latin languages (and English) and show that such cheap signals work well, outperforming more complex unsupervised methods on distant language pairs involving Chinese, Japanese, Kannada, Tamil, and Thai. In addition, they are even competitive with the use of high-quality lexicons in supervised approaches. Our results show that these training signals should not be neglected when building BWEs, even for distant languages.
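The seed-extraction idea in the abstract can be sketched in a few lines: pair words whose romanized forms are identical or within a small edit distance. The snippet below is a minimal illustration, not the authors' code; the `ROMAN` table is a toy stand-in (real work would use a transliteration tool such as uroman), and `max_dist` is a hypothetical threshold parameter.

```python
# Sketch of cheap-signal seed extraction: identical words plus
# romanization + edit-distance matching (illustrative only).

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution / match
    return dp[-1]

# Toy romanization table (assumption: a real system covers whole scripts).
ROMAN = {"東京": "tokyo", "データ": "deta"}

def romanize(word: str) -> str:
    # Latin-script words pass through unchanged.
    return ROMAN.get(word, word)

def extract_seed_lexicon(src_vocab, tgt_vocab, max_dist=1):
    """Pair words whose romanized forms match within max_dist edits.
    Identical words are captured automatically at distance 0."""
    return [(s, t)
            for s in src_vocab
            for t in tgt_vocab
            if edit_distance(romanize(s), romanize(t)) <= max_dist]

seeds = extract_seed_lexicon(["tokyo", "data"], ["東京", "データ"])
```

The quadratic vocabulary scan is kept for clarity; at realistic vocabulary sizes one would restrict candidates (e.g. by first letter or length) before computing distances.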