While generative models produce high-quality images of concepts learned from a large-scale database, a user often wishes to synthesize instantiations of their own concepts (for example, their family, pets, or items). Can we teach a model to quickly acquire a new concept, given a few examples? Furthermore, can we compose multiple new concepts together? We propose Custom Diffusion, an efficient method for augmenting existing text-to-image models. We find that optimizing only a few parameters in the text-to-image conditioning mechanism is sufficiently powerful to represent new concepts while enabling fast tuning (~6 minutes). Additionally, we can jointly train for multiple concepts or combine multiple fine-tuned models into one via closed-form constrained optimization. Our fine-tuned model generates variations of multiple new concepts and seamlessly composes them with existing concepts in novel settings. Our method outperforms several baselines and concurrent works in both qualitative and quantitative evaluations, while being memory and computationally efficient.
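The "few parameters in the text-to-image conditioning mechanism" are the key and value projections of the cross-attention layers. Below is a minimal sketch of that parameter selection, assuming a diffusers-style Stable Diffusion UNet; the checkpoint id, the `attn2.to_k` / `attn2.to_v` name filter, and the learning rate are illustrative assumptions, not the authors' released training code.

```python
# Minimal sketch (assumptions noted above): freeze a pretrained text-to-image
# UNet and fine-tune only the cross-attention key/value projections, i.e. the
# text-to-image conditioning mechanism.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"  # example checkpoint
)

# Freeze all weights, then re-enable gradients only for cross-attention K/V.
trainable_params = []
for name, param in unet.named_parameters():
    param.requires_grad_(False)
    if "attn2.to_k" in name or "attn2.to_v" in name:  # diffusers naming convention
        param.requires_grad_(True)
        trainable_params.append(param)

optimizer = torch.optim.AdamW(trainable_params, lr=1e-5)
n_trainable = sum(p.numel() for p in trainable_params)
n_total = sum(p.numel() for p in unet.parameters())
print(f"training {n_trainable:,} of {n_total:,} parameters")
```

Restricting training to this small subset is what keeps tuning fast and the per-concept storage footprint small.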
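The closed-form constrained optimization used to merge fine-tuned models can be illustrated, in its simplest unregularized form, as a minimum-change update to the conditioning weights: keep the merged weight matrix $W$ as close as possible to the pretrained $W_0$ while forcing it to map each target concept's text features to the fine-tuned outputs. This simplified objective is an illustrative assumption (the paper's formulation additionally constrains behavior on regularization captions); under it, Lagrange multipliers give the closed form

$$\hat{W} \;=\; \arg\min_{W} \|W - W_0\|_F^2 \;\;\text{s.t.}\;\; W C^\top = V, \qquad \hat{W} \;=\; W_0 + \bigl(V - W_0 C^\top\bigr)\bigl(C C^\top\bigr)^{-1} C,$$

where the rows of $C$ are the text embeddings of the target concept tokens and the columns of $V$ are the corresponding key/value outputs of the individually fine-tuned models.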