Generative AI models have shown impressive performance on many Natural Language Processing tasks, including language understanding, reasoning, and language generation. One of the most pressing questions facing the AI community today concerns the capabilities and limits of these models, and evaluating generative AI remains highly challenging. Most studies of generative Large Language Models (LLMs) are restricted to English, and it is unclear how capable these models are at understanding and generating text in other languages. We present MEGA, the first comprehensive benchmarking of generative LLMs, which evaluates models on standard NLP benchmarks covering 8 diverse tasks and 33 typologically diverse languages. We also compare generative LLMs to State-of-the-Art (SOTA) non-autoregressive models on these tasks to determine how well generative models perform relative to the previous generation of LLMs. We present a thorough analysis of model performance across languages and discuss some of the reasons why generative LLMs are currently not optimal for all languages. We create a framework for evaluating generative LLMs in the multilingual setting and provide directions for future progress in the field.
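As a minimal sketch of what such a multilingual evaluation framework might look like, the following Python snippet aggregates a generative model's scores per (task, language) cell; this is hypothetical illustration only, not the MEGA implementation, and the `query_model` callable, the dataset fields, and the exact-match metric are all assumptions.

```python
# Hypothetical sketch of a multilingual evaluation loop (not the MEGA codebase).
# `query_model` stands in for any generative-LLM API call; the tasks, languages,
# and exact-match metric are illustrative assumptions.
from typing import Callable


def exact_match(prediction: str, reference: str) -> float:
    """Score 1.0 if the normalized prediction equals the reference."""
    return float(prediction.strip().lower() == reference.strip().lower())


def evaluate(query_model: Callable[[str], str],
             dataset: list[dict]) -> dict[tuple[str, str], float]:
    """Average exact-match score per (task, language) over examples shaped
    like {"task", "language", "prompt", "reference"}."""
    scores: dict[tuple[str, str], list[float]] = {}
    for ex in dataset:
        key = (ex["task"], ex["language"])
        score = exact_match(query_model(ex["prompt"]), ex["reference"])
        scores.setdefault(key, []).append(score)
    return {key: sum(vals) / len(vals) for key, vals in scores.items()}


if __name__ == "__main__":
    # Toy model and data showing the reporting shape: one score per
    # (task, language) cell, mirroring a tasks-by-languages benchmark grid.
    toy_data = [
        {"task": "XNLI", "language": "sw", "prompt": "...", "reference": "entailment"},
        {"task": "XNLI", "language": "en", "prompt": "...", "reference": "entailment"},
    ]
    print(evaluate(lambda prompt: "entailment", toy_data))
```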