Language-agnostic sentence embeddings generated by pre-trained models such as LASER and LaBSE are attractive options for mining large datasets to produce parallel corpora for low-resource machine translation. We test LASER and LaBSE at extracting bitext for two related low-resource African languages: Luhya and Swahili. For this work, we created a new parallel set of nearly 8000 Luhya-English sentences, which enables a new zero-shot test of LASER and LaBSE. We find that LaBSE significantly outperforms LASER on both languages. However, both LASER and LaBSE perform poorly at zero-shot alignment on Luhya, achieving just 1.5% and 22.0% successful alignments respectively (P@1 score). We fine-tune the embeddings on a small set of parallel Luhya sentences and show significant gains, improving the LaBSE alignment accuracy to 53.3%. Further, restricting the dataset to sentence-embedding pairs with cosine similarity above 0.7 yields alignments with over 85% accuracy.
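The alignment procedure evaluated above (nearest-neighbour retrieval by cosine similarity, scored as P@1, with a similarity threshold to filter candidate pairs) can be sketched as follows. This is a minimal NumPy illustration, assuming embeddings have already been computed by an encoder such as LaBSE; the random "embeddings", the function name `align_bitext`, and the toy data are illustrative, not the authors' actual pipeline.

```python
import numpy as np

def align_bitext(src_emb, tgt_emb, threshold=0.7):
    """Align each source sentence to its nearest target by cosine similarity.

    Returns (pairs, kept): all (src_idx, tgt_idx, score) candidates,
    and the subset whose score clears the threshold.
    """
    # L2-normalize rows so the dot product equals cosine similarity
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                      # (n_src, n_tgt) cosine matrix
    best = sim.argmax(axis=1)              # nearest target per source (P@1 candidate)
    scores = sim[np.arange(len(best)), best]
    pairs = list(zip(range(len(best)), best.tolist(), scores.tolist()))
    kept = [p for p in pairs if p[2] >= threshold]
    return pairs, kept

# Toy demonstration: targets plus lightly perturbed copies as "sources",
# so the correct alignment is src i -> tgt i
rng = np.random.default_rng(0)
tgt = rng.normal(size=(5, 16))
src = tgt + 0.05 * rng.normal(size=(5, 16))
pairs, kept = align_bitext(src, tgt, threshold=0.7)

# P@1: fraction of sources whose nearest target is the true pair
p_at_1 = float(np.mean([i == j for i, j, _ in pairs]))
```

In a real run, `src_emb` and `tgt_emb` would hold LaBSE (or fine-tuned LaBSE) embeddings of the Luhya and English sides, and `kept` would be the high-confidence bitext retained by the 0.7 cosine-similarity cut.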