Lately, propelled by phenomenal advances around the transformer architecture, the legal NLP field has enjoyed spectacular growth. To measure progress, well-curated and challenging benchmarks are crucial. However, most benchmarks are English-only, and in legal NLP specifically there is no multilingual benchmark available yet. Additionally, many benchmarks are saturated, with the best models clearly outperforming the best humans and achieving near-perfect scores. We survey the legal NLP literature and select 11 datasets covering 24 languages, creating LEXTREME. To provide a fair comparison, we propose two aggregate scores, one based on the datasets and one on the languages. The best baseline (XLM-R large) achieves both a dataset aggregate score and a language aggregate score of 61.3. This indicates that LEXTREME is still very challenging and leaves ample room for improvement. To make it easy for researchers and practitioners to use, we release LEXTREME on huggingface together with all the code required to evaluate models and a public Weights and Biases project with all the runs.
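To make the two aggregate scores concrete, here is a minimal sketch assuming each aggregate is a plain macro-average: per-dataset scores averaged over languages, and per-language scores averaged over datasets. The aggregation scheme, dictionary layout, and dataset/language names below are illustrative assumptions, not taken from the LEXTREME codebase.

```python
# Sketch of dataset- and language-based aggregate scores (assumed macro-averages).
from statistics import mean

# scores[dataset][language] = a model's score on that dataset/language pair
# (hypothetical numbers for illustration only).
scores = {
    "brazilian_court_decisions": {"pt": 0.62},
    "swiss_judgment_prediction": {"de": 0.68, "fr": 0.59, "it": 0.55},
}

# Dataset aggregate: average each dataset over its languages, then average the datasets.
dataset_aggregate = mean(mean(by_lang.values()) for by_lang in scores.values())

# Language aggregate: average each language over the datasets covering it,
# then average the languages.
by_language = {}
for by_lang in scores.values():
    for lang, score in by_lang.items():
        by_language.setdefault(lang, []).append(score)
language_aggregate = mean(mean(vals) for vals in by_language.values())

print(f"dataset aggregate: {dataset_aggregate:.3f}")
print(f"language aggregate: {language_aggregate:.3f}")
```

Aggregating along both axes keeps a model from looking strong just because it excels on a few large datasets or a few high-resource languages.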
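Since the benchmark is released on huggingface, a single task can be pulled with the `datasets` library. A minimal sketch follows; the hub repository id and config name are assumptions for illustration and may differ from the actual release.

```python
# Sketch: loading one LEXTREME task from the Hugging Face hub.
from datasets import load_dataset

# "joelito/lextreme" and "swiss_judgment_prediction" are assumed identifiers.
dataset = load_dataset("joelito/lextreme", "swiss_judgment_prediction")
print(dataset["train"][0])  # inspect one training example
```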