Cross-lingual word embeddings (CLWE) have proven useful in many cross-lingual tasks. However, most existing approaches to learning CLWE, including those based on contextual embeddings, are sense-agnostic. In this work, we propose a novel framework to align contextual embeddings at the sense level, leveraging cross-lingual signals from bilingual dictionaries alone. We operationalize our framework by first proposing a novel sense-aware cross entropy loss to model word senses explicitly. Monolingual ELMo and BERT models pretrained with our sense-aware cross entropy loss demonstrate significant performance improvements on word sense disambiguation tasks. We then propose a sense alignment objective on top of the sense-aware cross entropy loss for cross-lingual model pretraining, and pretrain cross-lingual models for several language pairs (English to German/Spanish/Japanese/Chinese). Compared with the best baseline results, our cross-lingual models achieve average performance improvements of 0.52%, 2.09%, and 1.29% on zero-shot cross-lingual NER, sentiment classification, and XNLI tasks, respectively.
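To make the idea of a sense-aware cross entropy loss concrete, the following is a minimal sketch, not the paper's actual formulation: it assumes each word has a small inventory of candidate sense embeddings and scores them against a contextual representation from an encoder such as ELMo or BERT. The names `sense_embeddings`, `candidate_ids`, and `gold_sense` are illustrative placeholders, not identifiers from the paper.

```python
import torch
import torch.nn.functional as F

def sense_aware_cross_entropy(context_vec, sense_embeddings,
                              candidate_ids, gold_sense):
    """Illustrative sketch of a sense-level cross entropy loss.

    context_vec:      (d,) contextual vector of the target word.
    sense_embeddings: (num_senses, d) table with one row per sense
                      in the global sense inventory.
    candidate_ids:    (k,) long tensor of indices for the target
                      word's candidate senses.
    gold_sense:       int position of the correct sense within
                      candidate_ids.
    """
    # Score each candidate sense by dot product with the context vector.
    candidates = sense_embeddings[candidate_ids]   # (k, d)
    logits = candidates @ context_vec              # (k,)
    # Cross entropy restricted to the word's candidate senses.
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.tensor([gold_sense]))
```

Restricting the softmax to a word's own sense inventory, rather than the full vocabulary, is one plausible way to model senses explicitly while keeping the loss cheap to compute; the paper's exact objective may differ.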