Recent advances in text-to-image diffusion models have enabled the generation of diverse and high-quality images. However, generated images often fall short of depicting subtle details and are susceptible to errors due to ambiguity in the input text. One way of alleviating these issues is to train diffusion models on class-labeled datasets. This, however, comes with a downside: it limits their expressive power, because (i) supervised datasets are generally small compared to the large-scale scraped text-image datasets on which text-to-image models are trained, so the quality and diversity of generated images are severely affected, or (ii) the input is a hard-coded label, as opposed to free-form text, which limits control over the generated images. In this work, we propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text while achieving high accuracy through discriminative signals from a pretrained classifier, which guides the generation. This is done by iteratively modifying the embedding of a single input token of a text-to-image diffusion model, using the classifier to steer generated images toward a given target class. Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images or the retraining of a noise-tolerant classifier. We evaluate our method extensively, showing that the generated images: (i) are more accurate and of higher quality than those of standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier. The code is available at \url{https://github.com/idansc/discriminative_class_tokens}
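To make the token-optimization step concrete, the following is a minimal sketch, not the authors' released implementation (see the repository above). It assumes a differentiable generate_image stand-in for the diffusion pipeline and a frozen pretrained classifier, and it treats only the added token's embedding as trainable:

\begin{verbatim}
import torch
import torch.nn.functional as F

# Conceptual sketch of discriminative class-token optimization.
# Assumptions (not taken from the paper's code): generate_image(prompt_embeds)
# is a differentiable stand-in for the text-to-image pipeline, and
# classifier is a frozen, pretrained image classifier.
def optimize_class_token(token_embedding, prompt_embeddings,
                         generate_image, classifier, target_class,
                         steps=50, lr=1e-3):
    token = token_embedding.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([token], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        # Insert the learnable token into the otherwise frozen prompt embeddings.
        prompt = torch.cat([prompt_embeddings, token[None]], dim=0)
        image = generate_image(prompt)       # generate an image (assumed differentiable)
        logits = classifier(image[None])     # score it with the pretrained classifier
        loss = F.cross_entropy(logits, target)
        optimizer.zero_grad()
        loss.backward()                      # gradients reach only the token embedding
        optimizer.step()
    return token.detach()
\end{verbatim}

At generation time, the optimized embedding is inserted back into the text encoder's vocabulary so that free-form prompts containing the new token produce images steered toward the target class.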