Neural machine translation (NMT) is sensitive to domain shift. In this paper, we address this problem in an active learning setting where we can spend a given budget on translating in-domain data, and gradually fine-tune a pre-trained out-of-domain NMT model on the newly translated data. Existing active learning methods for NMT usually select sentences based on uncertainty scores, but these methods require costly translation of full sentences even when only one or two key phrases within the sentence are informative. To address this limitation, we re-examine previous work from the phrase-based machine translation (PBMT) era that selected not full sentences, but rather individual phrases. However, while incorporating these phrases into PBMT systems was relatively simple, it is less trivial for NMT systems, which need to be trained on full sequences to capture larger structural properties of sentences unique to the new domain. To overcome these hurdles, we propose to select both full sentences and individual phrases from unlabelled data in the new domain for routing to human translators. In a German-English translation task, our active learning approach achieves consistent improvements over uncertainty-based sentence selection methods, yielding gains of up to 1.2 BLEU over strong active learning baselines.