Large-scale models for learning fixed-dimensional cross-lingual sentence representations, such as LASER (Artetxe and Schwenk, 2019b), yield significant improvements on downstream tasks. However, further scaling up or modifying such large models is often impractical due to memory constraints. In this work, we introduce a lightweight dual-transformer architecture with just 2 layers for generating memory-efficient cross-lingual sentence representations. We explore different training tasks and observe that existing cross-lingual training objectives leave much to be desired for this shallow architecture. To address this, we propose a novel cross-lingual language modeling objective that combines the existing single-word masked language model with a newly proposed cross-lingual token-level reconstruction task. We further augment training with two computationally lightweight sentence-level contrastive learning tasks that enhance the alignment of the cross-lingual sentence representation space, compensating for the learning bottleneck of the lightweight transformer on generative tasks. Comparisons with competing models on cross-lingual sentence retrieval and multilingual document classification confirm the effectiveness of the proposed training tasks for a shallow model.
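To make the sentence-level contrastive alignment idea concrete, the sketch below shows a minimal in-batch contrastive loss that pulls a sentence embedding toward its translation and pushes it away from other sentences in the batch. It is only an illustration under assumed choices (cosine similarity, a temperature of 0.05, symmetric source/target cross-entropy); the paper's exact formulation of its two contrastive tasks may differ.

```python
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(src_emb: torch.Tensor,
                               tgt_emb: torch.Tensor,
                               temperature: float = 0.05) -> torch.Tensor:
    """In-batch contrastive loss aligning source and target sentence embeddings.

    src_emb, tgt_emb: (batch, dim) sentence embeddings of parallel sentences,
    where row i of src_emb is a translation of row i of tgt_emb.
    (Illustrative sketch only; hyperparameters are assumptions.)
    """
    # L2-normalize so dot products are cosine similarities.
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)

    # (batch, batch) similarity matrix; the diagonal holds true translation pairs.
    logits = src @ tgt.t() / temperature
    labels = torch.arange(src.size(0), device=src.device)

    # Symmetric cross-entropy: each sentence must retrieve its own translation
    # in both directions, with other in-batch sentences acting as negatives.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```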