Current end-to-end approaches to Spoken Language Translation (SLT) rely on limited training resources, especially in multilingual settings. Multilingual Neural Machine Translation (MultiNMT) approaches, on the other hand, rely on higher-quality and much larger data sets. Our proposed method extends a MultiNMT architecture based on language-specific encoders-decoders to the task of Multilingual SLT (MultiSLT). Our method entirely eliminates the dependency on MultiSLT data: it can translate while training only on ASR and MultiNMT data. Our experiments on four different languages show that coupling the speech encoder to the MultiNMT architecture produces translations of quality comparable to a bilingual baseline ($\pm 0.2$ BLEU) while effectively enabling zero-shot MultiSLT. Additionally, we propose an Adapter module for coupling the speech inputs. This Adapter module yields consistent improvements of up to +6 BLEU points over the proposed architecture and +1 BLEU point over the end-to-end baseline.
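To make the coupling idea concrete, the following is a minimal sketch of one common Adapter design (a bottleneck projection with a residual connection) applied to speech-encoder states before they are fed to the translation model. This is an illustrative assumption in NumPy, not the paper's exact module: the dimensions, the ReLU nonlinearity, and the residual design are all hypothetical choices here.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_adapter(d_model, d_bottleneck, rng):
    """Initialize down- and up-projection weights for a bottleneck Adapter.

    Hypothetical sketch: a real implementation would learn these weights
    jointly with (or on top of) the frozen speech encoder and MultiNMT model.
    """
    return {
        "W_down": rng.standard_normal((d_model, d_bottleneck)) * 0.02,
        "W_up": rng.standard_normal((d_bottleneck, d_model)) * 0.02,
    }

def adapter(h, params):
    """Map speech-encoder states h (seq_len, d_model) into the space
    expected by the MultiNMT decoder: down-project, nonlinearity,
    up-project, then add a residual connection."""
    z = np.maximum(h @ params["W_down"], 0.0)  # down-projection + ReLU
    return h + z @ params["W_up"]              # up-projection + residual

# Dummy usage: 10 frames of 512-dimensional speech-encoder output.
d_model, d_bottleneck = 512, 64
params = make_adapter(d_model, d_bottleneck, rng)
speech_states = rng.standard_normal((10, d_model))
adapted = adapter(speech_states, params)
print(adapted.shape)  # same shape as the input, ready for the decoder
```

Because the Adapter preserves the sequence length and model dimension, it can be slotted between any pretrained speech encoder and text-translation stack without changing either component's interface.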