Large-scale diffusion models such as Stable Diffusion are powerful and serve a wide range of real-world applications, but customizing such models by full fine-tuning is inefficient in both memory and time. Motivated by recent progress in natural language processing, we investigate parameter-efficient tuning of large diffusion models by inserting small learnable modules (termed adapters). In particular, we decompose the design space of adapters into orthogonal factors -- the input position, the output position, and the function form -- and perform Analysis of Variance (ANOVA), a classical statistical approach for analyzing the correlation between discrete variables (design options) and continuous variables (evaluation metrics). Our analysis suggests that the input position of adapters is the critical factor influencing downstream-task performance. We then study the choice of input position in detail and find that placing it after the cross-attention block yields the best performance, a finding validated by additional visualization analyses. Finally, we provide a recipe for parameter-efficient tuning in diffusion models that is comparable, if not superior, to the fully fine-tuned baseline (e.g., DreamBooth) with only 0.75\% extra parameters, across various customized tasks.
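To make the adapter placement concrete, the following PyTorch sketch illustrates the idea under stated assumptions: it uses a generic bottleneck adapter (the abstract does not fix the function form, so the down-project/GELU/up-project shape is illustrative), a toy cross-attention block as a stand-in for a frozen Stable Diffusion transformer block, and the adapter input taken from the cross-attention output, i.e., the placement the abstract reports as best. All module and parameter names here are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter (illustrative function form). The zero-initialized
    up-projection keeps the adapted model identical to the pretrained one
    at the start of tuning."""

    def __init__(self, dim: int, bottleneck: int = 64, scale: float = 1.0):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        self.scale = scale
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * self.up(self.act(self.down(x)))


class CrossAttnBlockWithAdapter(nn.Module):
    """Toy stand-in for a frozen diffusion transformer block. The adapter's
    *input* is the cross-attention output -- the input position the abstract
    identifies as most important for downstream performance."""

    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.adapter = Adapter(dim)
        # Freeze everything except the adapter (parameter-efficient tuning).
        for name, p in self.named_parameters():
            p.requires_grad_("adapter" in name)

    def forward(self, x: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # Latent image tokens attend to text-encoder states.
        attn_out, _ = self.cross_attn(self.norm(x), text_emb, text_emb)
        # The adapter reads the cross-attention output and adds its
        # correction back onto the residual stream.
        return x + attn_out + self.adapter(attn_out)


# Usage: only the adapter parameters are trainable.
block = CrossAttnBlockWithAdapter(dim=320)
x = torch.randn(2, 64, 320)       # latent image tokens
text = torch.randn(2, 77, 320)    # text-encoder hidden states
out = block(x, text)
trainable = sum(p.numel() for p in block.parameters() if p.requires_grad)
total = sum(p.numel() for p in block.parameters())
print(f"trainable fraction: {trainable / total:.4f}")
```

In a full model, one such adapter would be attached to each (frozen) cross-attention block of the diffusion U-Net, keeping the fraction of trainable parameters small, in the spirit of the 0.75\% figure reported above.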