Domain adaptation is widely used in practical applications of neural machine translation, where the goal is to achieve good performance on both general-domain and in-domain text. However, existing domain adaptation methods usually suffer from catastrophic forgetting, domain divergence, and model explosion. To address these three problems, we propose a "divide and conquer" method based on the importance of neurons or parameters in the translation model. We first prune the model, keeping only the important neurons or parameters and making them responsible for both general-domain and in-domain translation. We then further train the pruned model under the supervision of the original unpruned model via knowledge distillation. Finally, we expand the model back to its original size and fine-tune the added parameters on the in-domain data. Experiments on different language pairs and domains show that our method achieves significant improvements over several strong baselines.
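To make the three stages concrete, here is a minimal PyTorch-style sketch. It assumes magnitude-based importance scores (a simple stand-in; the paper's actual importance criterion may differ), a hypothetical seq2seq forward signature `model(src, tgt)` returning per-token logits, and gradient masking to simulate pruning and expansion; it is an illustration of the general recipe, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def importance_masks(model, keep_ratio=0.7):
    """Step 1: build per-parameter binary masks that keep the top
    `keep_ratio` fraction of weights by absolute magnitude (an assumed
    stand-in for the paper's importance criterion)."""
    masks = {}
    for name, p in model.named_parameters():
        k = max(1, int(keep_ratio * p.numel()))
        # k-th largest magnitude == (numel - k + 1)-th smallest
        thresh = p.detach().abs().flatten().kthvalue(p.numel() - k + 1).values
        masks[name] = (p.detach().abs() >= thresh).float()
    return masks

def prune(model, masks):
    """Zero out the unimportant parameters in place."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.mul_(masks[name])

def distill_step(student, teacher, src, tgt, optimizer, masks,
                 alpha=0.5, temperature=2.0):
    """Step 2: train the pruned student with gold cross-entropy plus a
    KL term distilled from the unpruned teacher (standard KD)."""
    logits = student(src, tgt)              # hypothetical forward signature
    with torch.no_grad():
        t_logits = teacher(src, tgt)
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), tgt.view(-1))
    kd = F.kl_div(F.log_softmax(logits / temperature, dim=-1),
                  F.softmax(t_logits / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2
    loss = alpha * ce + (1.0 - alpha) * kd
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():                   # keep pruned slots at zero
        for name, p in student.named_parameters():
            if p.grad is not None:
                p.grad.mul_(masks[name])
    optimizer.step()
    return loss.item()

def in_domain_step(model, src, tgt, optimizer, masks):
    """Step 3: after expanding back to full size, update only the
    re-added (previously pruned) parameters on in-domain data; the kept
    general-domain parameters stay frozen via inverted gradient masks."""
    logits = model(src, tgt)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), tgt.view(-1))
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None:
                p.grad.mul_(1.0 - masks[name])
    optimizer.step()
    return loss.item()
```

Masking gradients rather than physically removing weights keeps the architecture fixed, which is what lets the "expand" stage simply resume training the zeroed slots without changing the model size.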