In this paper we share findings from our effort to build practical machine translation (MT) systems capable of translating across over one thousand languages. We describe results in three research domains: (i) Building clean, web-mined datasets for 1500+ languages by leveraging semi-supervised pre-training for language identification and developing data-driven filtering techniques; (ii) Developing practical MT models for under-served languages by leveraging massively multilingual models trained with supervised parallel data for over 100 high-resource languages and monolingual datasets for an additional 1000+ languages; and (iii) Studying the limitations of evaluation metrics for these languages and conducting qualitative analysis of the outputs from our MT models, highlighting several frequent error modes of these types of models. We hope that our work provides useful insights to practitioners working towards building MT systems for currently understudied languages, and highlights research directions that can complement the weaknesses of massively multilingual models in data-sparse settings.