Recent advancements in NLP have given us models like mBERT and XLMR that can serve over 100 languages. However, these models are evaluated on only a small number of languages, and it is unlikely that evaluation datasets will ever cover all the languages they support. Potential solutions to the costly problem of dataset creation are to translate existing datasets into new languages or to use template-filling techniques to create them. This paper proposes an alternative solution for evaluating a model across languages, one that makes use of the model's existing performance scores on the languages for which a particular task has test sets. We train a predictor on these performance scores and use it to predict the model's performance in different evaluation settings. Our results show that our method is effective in filling the gaps in the evaluation of an existing set of languages, but may require additional improvements if we want it to generalize to unseen languages.
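As a rough illustration of the idea (not the paper's actual implementation), a performance predictor can be framed as a regression problem over existing evaluation results: each observed (task, language) score becomes a training example described by simple language and data features, and a held-out language plays the role of the missing test set. The feature choices, the toy numbers, and the gradient-boosted regressor below are all illustrative assumptions.

```python
# Minimal sketch: predict a model's score on a language from scores it already
# has on other languages. Features and values are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import LeaveOneOut

# Hypothetical records: one row per evaluated language for a fixed task, with
# simple features such as pretraining tokens, same-script flag, subword overlap.
X = np.array([
    [14.2e9, 1, 0.62],
    [2.1e9,  1, 0.55],
    [0.3e9,  0, 0.31],
    [5.7e9,  0, 0.48],
    [9.9e9,  1, 0.70],
])
y = np.array([81.3, 74.9, 58.2, 66.0, 79.5])  # illustrative task scores

# Leave-one-language-out evaluation of the predictor itself: train on the
# languages that have test sets, predict the one that is held out.
errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    reg = GradientBoostingRegressor(random_state=0)
    reg.fit(X[train_idx], y[train_idx])
    pred = reg.predict(X[test_idx])[0]
    errors.append(abs(pred - y[test_idx][0]))

print(f"Mean absolute error across held-out languages: {np.mean(errors):.2f}")
```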