Due to their crucial role in all NLP, several benchmarks have been proposed to evaluate pretrained language models. In spite of these efforts, no public benchmark of a diverse nature currently exists for the evaluation of Arabic. This makes it challenging to measure progress for both Arabic and multilingual language models. The challenge is compounded by the fact that any benchmark targeting Arabic needs to account for Arabic being not a single language but a collection of languages and varieties. In this work, we introduce ORCA, a publicly available benchmark for Arabic language understanding evaluation. ORCA is carefully constructed to cover diverse Arabic varieties and a wide range of challenging Arabic understanding tasks, exploiting 60 different datasets across seven NLU task clusters. To measure current progress in Arabic NLU, we use ORCA to offer a comprehensive comparison of 18 multilingual and Arabic language models. We also provide a public leaderboard with a unified single-number evaluation metric (the ORCA score) to facilitate future research.
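As a minimal sketch of how a unified single-number score over task clusters might be computed, the Python snippet below macro-averages per-cluster scores, where each cluster's score is the mean over its datasets. The equal-weight averaging, the cluster names, and the dataset names are illustrative assumptions, not the paper's exact specification of the ORCA score.

```python
# Illustrative sketch (assumed, not the paper's exact definition): an
# ORCA-style single-number score computed as a macro-average. Each task
# cluster's score is the mean of its datasets' metrics, and the overall
# score is the mean over clusters, so every cluster contributes equally
# regardless of how many datasets it contains.
from statistics import mean

def orca_style_score(results: dict[str, dict[str, float]]) -> float:
    """results maps cluster name -> {dataset name -> metric in [0, 100]}."""
    cluster_scores = [mean(datasets.values()) for datasets in results.values()]
    return mean(cluster_scores)

# Hypothetical example with two of the seven task clusters:
results = {
    "sentence-classification": {"sentiment": 86.2, "dialect-id": 78.5},
    "structured-prediction": {"ner": 81.0, "pos": 95.3},
}
print(f"ORCA-style score: {orca_style_score(results):.2f}")
```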