This study investigates how tokenization methods affect the training of machine translation models. Alphabet tokenization, morpheme tokenization, and BPE tokenization were applied to Korean as the source language and to English as the target language, giving nine model configurations; each was trained for 50,000 epochs with the Transformer architecture for a comparative experiment. Measuring the BLEU scores of the resulting models, the configuration that applied BPE tokenization to Korean and morpheme tokenization to English performed best, scoring 35.73.
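As background for the BPE tokenization mentioned above, the following is a minimal sketch of the standard BPE merge-learning procedure (iteratively merging the most frequent adjacent symbol pair); it is an illustration in plain Python, not the implementation used in the experiments.

```python
from collections import Counter

def get_pair_counts(corpus):
    """Count adjacent symbol pairs over a corpus of (word-as-tuple -> frequency)."""
    pairs = Counter()
    for word, freq in corpus.items():
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(corpus, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    new_sym = "".join(pair)
    merged = {}
    for word, freq in corpus.items():
        out, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and (word[i], word[i + 1]) == pair:
                out.append(new_sym)
                i += 2
            else:
                out.append(word[i])
                i += 1
        key = tuple(out)
        merged[key] = merged.get(key, 0) + freq
    return merged

def learn_bpe(words, num_merges):
    """Learn `num_merges` BPE merge rules from a list of words."""
    corpus = dict(Counter(tuple(w) for w in words))
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(corpus)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        corpus = merge_pair(corpus, best)
        merges.append(best)
    return merges, corpus
```

In practice, toolkits such as SentencePiece implement this (with additional normalization and vocabulary handling) for both Korean and English text.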