The cornerstone of multilingual neural translation is shared representations across languages. Given the theoretically unbounded representational power of neural networks, semantically identical sentences are likely to be represented differently. While representing sentences in a continuous latent space ensures expressiveness, it introduces the risk of capturing irrelevant features, which hinders the learning of a common representation. In this work, we discretize the encoder output latent space of multilingual models by assigning encoder states to entries in a codebook, which in effect represents source sentences in a new artificial language. This discretization not only offers a new way to interpret otherwise black-box model representations but, more importantly, has the potential to increase robustness under unseen testing conditions. We validate our approach in large-scale experiments with realistic data volumes and domains. When tested in zero-shot conditions, our approach is competitive with two strong alternatives from the literature. We also use the learned artificial language to analyze model behavior, and find that using a similar bridge language increases knowledge sharing among the remaining languages.
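The codebook assignment described above can be sketched as a nearest-neighbor lookup, in the style of standard vector quantization. This is a minimal illustration with assumed shapes and names (the paper's actual codebook size, state dimensionality, and training procedure are not specified here): each encoder state is replaced by its closest codebook entry, so a source sentence becomes a sequence of discrete code indices, i.e. a sentence in the new artificial language.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a codebook of 64 entries, 16-dimensional encoder states,
# and one source sentence of 10 token states.
codebook = rng.normal(size=(64, 16))
encoder_states = rng.normal(size=(10, 16))

def quantize(states, codebook):
    """Assign each encoder state to its nearest codebook entry."""
    # Squared Euclidean distance from every state to every codebook entry,
    # computed via broadcasting: (10, 1, 16) - (1, 64, 16) -> (10, 64, 16).
    dists = ((states[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)       # discrete "artificial language" tokens
    return indices, codebook[indices]    # indices and the quantized states

indices, quantized = quantize(encoder_states, codebook)
print(indices.shape, quantized.shape)    # (10,) (10, 16)
```

In this view, the sequence `indices` is what the decoder effectively conditions on, which is also what makes the representation inspectable for the kind of analysis the abstract mentions.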