The impressive capacity shown by recent text-to-image diffusion models to generate high-quality pictures from textual input prompts has fueled the debate about the very definition of art. Nonetheless, these models have been trained on text data collected through content-based labelling protocols, which focus on describing the items and actions in an image but neglect any subjective appraisal. Consequently, these automatic systems require rigorous descriptions of the elements and the pictorial style of the image to be generated, and fail to deliver otherwise. As potential indicators of the actual artistic capabilities of current generative models, we characterise the sentimentality, objectiveness and degree of abstraction of the publicly available text data used to train current text-to-image diffusion models. Considering the sharp difference observed between their language style and that typically employed in artistic contexts, we suggest that generative models should incorporate additional sources of subjective information in their training in order to overcome (or at least alleviate) some of their current limitations, thus effectively unleashing a truly artistic and creative generation.