We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. However, due to this moving target, new models are often still evaluated on divergent Anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. To address this limitation, GEM provides an environment in which models can easily be applied to a wide set of corpora and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and will evolve the challenge alongside models. This paper serves as the description of the initial release, for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate.