Neural Machine Translation (NMT) is an open vocabulary problem. As a result, dealing with words that do not occur during training, known as out-of-vocabulary (OOV) words, has long been a fundamental challenge for NMT systems. The predominant method to tackle this problem is Byte Pair Encoding (BPE), which splits words, including OOV words, into sub-word segments. BPE has achieved impressive results for a wide range of translation tasks in terms of automatic evaluation metrics. While it is often assumed that by using BPE, NMT systems are capable of handling OOV words, the effectiveness of BPE in translating OOV words has not been explicitly measured. In this paper, we study to what extent BPE is successful in translating OOV words at the word level. We analyze the translation quality of OOV words based on word type, number of segments, cross-attention weights, and the frequency of segment n-grams in the training data. Our experiments show that while careful BPE settings seem to be fairly useful in translating OOV words across datasets, a considerable percentage of OOV words are translated incorrectly. Furthermore, we highlight the slightly higher effectiveness of BPE in translating OOV words for special cases, such as named entities and when the languages involved are linguistically close to each other.
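For readers unfamiliar with how BPE handles OOV words, the minimal sketch below shows a BPE model learned from a toy corpus segmenting a word it never saw during training into known sub-word units. It uses the HuggingFace `tokenizers` library purely for illustration; the corpus, vocabulary size, and example word are placeholders and do not reflect the experimental settings of this paper.

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# Toy corpus standing in for the source side of a parallel training set.
corpus = [
    "the new model translates low frequency words",
    "subword segmentation splits rare words into smaller units",
    "translation quality depends on the training data",
]

# Learn a small BPE vocabulary from the corpus.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=200, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(corpus, trainer)

# Segment a word that never occurred in training (an OOV word): it is
# broken into sub-word segments the model has seen, so the NMT system
# never receives a completely unknown token.
print(tokenizer.encode("untranslatable").tokens)
```

Whether the segments produced this way actually lead to a correct translation of the OOV word is exactly the question the paper measures.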