Translation quality estimation is critical to reducing post-editing effort in machine translation and to cross-lingual corpus cleaning. As a research problem, quality estimation (QE) aims to directly estimate the quality of a translation given a pair of source and target sentences, and to highlight the words that need correction, without reference to gold-standard translations. In this paper, we propose Verdi, a novel framework for word-level and sentence-level post-editing effort estimation on bilingual corpora. Verdi adopts two word predictors that extract diverse features from a sentence pair for subsequent quality estimation: a transformer-based neural machine translation (NMT) model and a pre-trained cross-lingual language model (XLM). We exploit the symmetric nature of bilingual corpora and apply model-level dual learning in the NMT predictor, which handles a primal task and a dual task simultaneously with weight sharing, leading to stronger context prediction ability than single-direction NMT models. Taking advantage of the dual learning scheme, we further design a novel feature that directly encodes the translated target information without relying on the source context. Extensive experiments on WMT20 QE tasks demonstrate that our method beats the winner of the competition and outperforms other baseline methods by a large margin. We further use the sentence-level scores produced by Verdi to clean a parallel corpus and observe benefits in both model performance and training efficiency.