Building ASR models across many languages is a challenging multi-task learning problem due to large variations and heavily unbalanced data. Existing work has shown positive transfer from high-resource to low-resource languages. However, degradations on high-resource languages are commonly observed due to interference from the heterogeneous multilingual data and reduction in per-language capacity. We conduct a capacity study on a 15-language task, with the amount of data per language varying from 7.6K to 53.5K hours. We adopt GShard [1] to efficiently scale up to 10B parameters. Empirically, we find that (1) scaling the number of model parameters is an effective way to solve the capacity bottleneck - our 500M-param model already outperforms monolingual baselines, and scaling it to 1B and 10B brings further quality gains; (2) larger models are not only more data efficient, but also more efficient in terms of training cost as measured in TPU days - the 1B-param model reaches the same accuracy in 34% of the training time of the 500M-param model; (3) given a fixed capacity budget, adding depth works better than adding width, and large encoders do better than large decoders; (4) with continuous training, large models can be adapted to new languages and domains.
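The capacity scaling described above relies on GShard, which annotates large tensors with sharding specifications and lets the XLA SPMD partitioner split them across accelerators. As a minimal illustration of that idea only - not the paper's actual implementation - the JAX sketch below shards one hypothetical feed-forward weight matrix over a one-dimensional device mesh; the layer sizes, axis name, and mesh layout are assumptions chosen for illustration.

    # Minimal JAX sketch of GShard-style weight sharding.
    # Assumption: sizes and names below are illustrative, not taken from the paper.
    import jax
    import jax.numpy as jnp
    import numpy as np
    from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

    # Build a 1-D device mesh over whatever accelerators are available
    # (a TPU slice in the paper's setting, or a single CPU locally).
    devices = np.array(jax.devices())
    mesh = Mesh(devices, axis_names=("model",))

    # One hypothetical feed-forward weight matrix from a large encoder layer.
    d_model, d_ff = 1024, 4096
    w_ff = jnp.zeros((d_model, d_ff))

    # Annotate the tensor with a sharding spec: the d_ff dimension is split
    # across the "model" axis, so each device stores only a slice of the
    # matrix instead of a full replica.
    w_ff_sharded = jax.device_put(w_ff, NamedSharding(mesh, P(None, "model")))
    print(w_ff_sharded.sharding)

With per-tensor annotations like this, parameter count can grow with the number of devices while each training step still runs as a single SPMD program, which is the property GShard-style scaling relies on.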