Large-scale diffusion models such as Stable Diffusion are powerful and have found a variety of real-world applications, but customizing such models via full fine-tuning is inefficient in both memory and time. Motivated by recent progress in natural language processing, we investigate parameter-efficient tuning of large diffusion models by inserting small learnable modules (termed adapters). In particular, we decompose the design space of adapters into orthogonal factors -- the input position, the output position, and the function form -- and perform Analysis of Variance (ANOVA), a classical statistical approach for analyzing the correlation between discrete variables (design options) and continuous variables (evaluation metrics). Our analysis suggests that the input position of an adapter is the critical factor influencing downstream-task performance. We then carefully study the choice of input position and find that placing it after the cross-attention block yields the best performance, a finding validated by additional visualization analyses. Finally, we provide a recipe for parameter-efficient tuning in diffusion models that is comparable, if not superior, to the fully fine-tuned baseline (e.g., DreamBooth) with only 0.75% extra parameters, across various customization tasks.
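To make the adapter design concrete, the sketch below shows a generic bottleneck adapter (down-projection, nonlinearity, up-projection, residual output) whose input is taken right after the cross-attention sub-block, the input position the analysis identifies as best. This is a minimal numpy illustration under our own naming (`Adapter`, `block_with_adapter` are hypothetical), not the paper's exact module or the Stable Diffusion UNet code.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


class Adapter:
    """Bottleneck adapter sketch: down-project, nonlinearity, up-project.

    The up-projection is zero-initialized so the adapter starts as an
    identity map -- a common trick for stable fine-tuning (an assumption
    here, not a detail stated in the abstract).
    """

    def __init__(self, dim, bottleneck=8, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W_down = rng.normal(0.0, 0.02, size=(dim, bottleneck))
        self.W_up = np.zeros((bottleneck, dim))
        self.scale = scale

    def __call__(self, h):
        # Residual branch only; the caller adds it back to the hidden states.
        return self.scale * relu(h @ self.W_down) @ self.W_up


def block_with_adapter(h, cross_attn, ffn, adapter):
    """Hypothetical transformer sub-block of a diffusion UNet.

    The adapter reads the hidden states immediately after the
    cross-attention sub-block, matching the best input position
    reported in the abstract.
    """
    h = h + cross_attn(h)   # cross-attention sub-block
    h = h + adapter(h)      # adapter input taken after cross-attention
    h = h + ffn(h)          # feed-forward sub-block
    return h
```

Because only `W_down` and `W_up` are trained, the extra parameter count per block is `dim * bottleneck * 2`, a small fraction of the block's full weight count, which is what keeps the method parameter-efficient.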