Text-to-image generation has seen rapid progress, to the point that many recent models can create realistic, high-resolution images for a wide variety of prompts. However, current text-to-image methods, and the broader body of research in vision-language understanding, still struggle with intricate text prompts that contain many objects with multiple attributes and relationships. We introduce a new multi-task text-to-image benchmark comprising a suite of thirty-two tasks over multiple applications that capture a model's ability to handle different features of a text prompt: for example, asking a model to generate a varying number of the same object to measure its ability to count, or providing a prompt with several objects that each have a different attribute to test whether the model matches objects and attributes correctly. Rather than subjectively evaluating text-to-image results on an ad hoc set of prompts, our benchmark pairs challenge tasks at three difficulty levels (easy, medium, and hard) with human ratings for each generated image. Using this benchmark, we perform a human evaluation comparing the most common open-source model (Stable Diffusion) and commercial model (DALL-E 2). Twenty computer science AI graduate students evaluated the two models on three tasks, at three difficulty levels, across ten prompts each, providing 3,600 ratings in total.
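For readers checking the arithmetic, the total rating count follows directly from the evaluation design stated above; this worked tally is our own addition, not part of the original abstract:

\[
\underbrace{20}_{\text{raters}} \times \underbrace{2}_{\text{models}} \times \underbrace{3}_{\text{tasks}} \times \underbrace{3}_{\text{difficulty levels}} \times \underbrace{10}_{\text{prompts}} = 3{,}600 \ \text{ratings},
\]

i.e., each rater contributed \(2 \times 3 \times 3 \times 10 = 180\) ratings.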