Simultaneous machine translation (SiMT) is usually trained via sequence-level knowledge distillation (Seq-KD) from a full-sentence neural machine translation (NMT) model. However, a significant performance gap remains between NMT and SiMT. In this work, we propose to leverage monolingual data to improve SiMT: a SiMT student is trained on the combination of bilingual data and external monolingual data distilled by Seq-KD. Preliminary experiments on En-Zh and En-Ja news-domain corpora demonstrate that monolingual data can significantly improve translation quality (e.g., +3.15 BLEU on En-Zh). Inspired by the behavior of human simultaneous interpreters, we propose a novel monolingual sampling strategy for SiMT that considers both chunk length and monotonicity. Experimental results show that our sampling strategy consistently outperforms random sampling (and other sampling strategies conventionally used for NMT) by avoiding hallucination, the key problem in SiMT, and scales better: we achieve a +0.72 BLEU improvement on average over random sampling on En-Zh and En-Ja. Data and code can be found at https://github.com/hexuandeng/Mono4SiMT.
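To make the selection criterion concrete, below is a minimal Python sketch of an alignment-based sampling score in this spirit: it splits a word alignment into monotone chunks and combines monotonicity with chunk length to rank distilled monolingual pairs. The helper names and the exact scoring formula are illustrative assumptions, not the implementation from the repository, and alignments are assumed to come from an external aligner such as fast_align.

```python
from typing import List, Tuple

def monotonic_chunks(alignment: List[Tuple[int, int]]) -> List[int]:
    """Split a word alignment into maximal monotone chunks and return
    the number of links in each chunk. (Illustrative helper.)"""
    links = sorted(alignment)          # order links by source position
    chunks, size, prev_tgt = [], 0, -1
    for _, tgt in links:
        if tgt >= prev_tgt:            # still monotone: extend the chunk
            size += 1
        else:                          # reordering point: close the chunk
            chunks.append(size)
            size = 1
        prev_tgt = tgt
    if size:
        chunks.append(size)
    return chunks

def sampling_score(alignment: List[Tuple[int, int]]) -> float:
    """Higher = more SiMT-friendly: mostly monotone word order with few
    reordering points that would force a SiMT model to wait or guess.
    The product below is an assumed combination, not the paper's formula."""
    if not alignment:
        return 0.0
    chunks = monotonic_chunks(alignment)
    monotonicity = max(chunks) / len(alignment)   # share of links in the longest monotone run
    avg_chunk = sum(chunks) / len(chunks)         # average monotone chunk length
    return monotonicity * avg_chunk

# Usage: rank distilled monolingual pairs and keep the most monotone ones.
pairs = [
    ("s1", [(0, 0), (1, 1), (2, 2)]),  # fully monotone pair
    ("s2", [(0, 2), (1, 0), (2, 1)]),  # heavily reordered pair
]
ranked = sorted(pairs, key=lambda p: sampling_score(p[1]), reverse=True)
print([pid for pid, _ in ranked])      # -> ['s1', 's2']
```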