Recent advances in cross-lingual word embeddings have primarily relied on mapping-based methods, which project pretrained word embeddings from different languages into a shared space through a linear transformation. However, these approaches assume that word embedding spaces are isomorphic across languages, an assumption shown not to hold in practice (Søgaard et al., 2018) that fundamentally limits their performance. This motivates joint learning methods, which overcome this impediment by simultaneously learning embeddings across languages via a cross-lingual term in the training objective. We propose a bilingual extension of the CBOW method that leverages sentence-aligned corpora to obtain robust cross-lingual word and sentence representations. Our approach significantly improves cross-lingual sentence retrieval performance over all other approaches while maintaining parity with current state-of-the-art methods on word translation. It also matches a deep RNN method on a zero-shot cross-lingual document classification task while requiring far fewer computational resources for training and inference. As an additional advantage, our bilingual method yields a much more pronounced improvement in the quality of monolingual word vectors than competing methods.
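To make the training objective concrete, here is a minimal numpy sketch of a CBOW update with an added cross-lingual term. It is an illustration under simplifying assumptions, not the paper's implementation: it uses a shared toy index space for both languages, negative sampling with fixed negatives, and hypothetical names (`cbow_step`, `bilingual_step`). The cross-lingual term follows the general idea stated in the abstract, using each aligned sentence as context for the words of the other.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, vocab = 16, 8  # toy sizes; one shared index space covers both languages

# Input (context) and output (target) embedding tables, as in word2vec CBOW.
W_in = rng.normal(scale=0.1, size=(vocab, dim))
W_out = rng.normal(scale=0.1, size=(vocab, dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_step(context_ids, target_id, negative_ids, lr=0.1):
    """One negative-sampling CBOW update: the averaged context vector is
    pushed toward the target word and away from the sampled negatives."""
    h = W_in[context_ids].mean(axis=0)
    grad_h = np.zeros(dim)
    for wid, label in [(target_id, 1.0)] + [(n, 0.0) for n in negative_ids]:
        g = sigmoid(W_out[wid] @ h) - label  # prediction error
        grad_h += g * W_out[wid]
        W_out[wid] -= lr * g * h
    W_in[context_ids] -= lr * grad_h / len(context_ids)

def bilingual_step(sent_l1, sent_l2, negative_ids):
    """Cross-lingual term (illustrative): each word of one sentence is
    predicted from the whole aligned sentence in the other language,
    which ties the two embedding spaces together during training."""
    for tgt in sent_l2:
        cbow_step(sent_l1, tgt, negative_ids)
    for tgt in sent_l1:
        cbow_step(sent_l2, tgt, negative_ids)
```

Repeated `bilingual_step` calls on an aligned sentence pair drive the context vectors of one language toward the output vectors of the words in the other, which is the mechanism that lets a single joint objective replace a post-hoc linear mapping.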