In this work, we present recommendations for evaluating state-of-the-art generative models on constrained generation tasks. Progress on generative models has been rapid in recent years, and these large-scale models have had three impacts. First, the fluency of generation in both the language and vision modalities has rendered common average-case evaluation metrics far less useful for diagnosing system errors. Second, the same substrate models now underpin a wide range of applications, driven both by the utility of their representations and by phenomena such as in-context learning, which raise the abstraction level at which we interact with these models. Third, user expectations around these models and their highly publicized releases have made the technical challenge of out-of-domain generalization much less excusable in practice. Yet our evaluation methodologies have not adapted to these changes: while the utility of generative models and the methods for interacting with them have expanded, evaluation practices have not expanded in kind. In this paper, we argue that the scale of generative models can be exploited to raise the abstraction level at which evaluation itself is conducted, and we provide concrete recommendations toward this end. Our recommendations leverage specifications as a powerful instrument for evaluating generation quality and are readily applicable to a variety of tasks.