Today's generative models are capable of synthesizing high-fidelity images, but each model specializes in a specific target domain. This raises the need for model merging: combining two or more pretrained generative models into a single unified one. In this work we tackle the problem of model merging under two constraints that often arise in practice: (1) no access to the original training data, and (2) no increase in the size of the neural network. To the best of our knowledge, model merging under these constraints has not been studied thus far. We propose a novel, two-stage solution. In the first stage, we transform the weights of all the models into the same parameter space by a technique we term model rooting. In the second stage, we merge the rooted models by averaging their weights and fine-tuning them for each specific domain, using only data generated by the original trained models. We demonstrate that our approach is superior to baseline methods and to existing transfer learning techniques, and investigate several applications.
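To make the second stage concrete, the following is a minimal PyTorch sketch, assuming the models have already been rooted into a shared parameter space so their state dictionaries align key-for-key. The function names, the toy generator architecture, the MSE distillation objective, and all hyperparameters here are illustrative assumptions for exposition, not the paper's actual training setup.

```python
import copy
import torch
import torch.nn as nn


def average_weights(model_a: nn.Module, model_b: nn.Module) -> nn.Module:
    """Average the parameters of two models that share a parameter space
    (i.e. after both have been rooted to a common ancestor)."""
    merged = copy.deepcopy(model_a)
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    merged.load_state_dict({k: 0.5 * (state_a[k] + state_b[k]) for k in state_a})
    return merged


def finetune_on_generated_data(merged: nn.Module, teacher: nn.Module,
                               steps: int = 100, batch_size: int = 16,
                               latent_dim: int = 64, lr: float = 1e-4) -> nn.Module:
    """Fine-tune the merged model toward one domain using only samples produced
    by that domain's original pretrained generator -- no real training data."""
    teacher.eval()
    optimizer = torch.optim.Adam(merged.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # stand-in objective: match the teacher's outputs
    for _ in range(steps):
        z = torch.randn(batch_size, latent_dim)
        with torch.no_grad():
            target = teacher(z)  # data generated by the original trained model
        loss = loss_fn(merged(z), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return merged


if __name__ == "__main__":
    # Toy generators standing in for two rooted, architecture-aligned models.
    def make_gen() -> nn.Module:
        return nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 3 * 8 * 8))

    gen_domain_a, gen_domain_b = make_gen(), make_gen()
    merged = average_weights(gen_domain_a, gen_domain_b)
    merged_for_a = finetune_on_generated_data(copy.deepcopy(merged), gen_domain_a)
```

The key design point illustrated is that the merged model stays the same size as each original, and the fine-tuning stage consumes only teacher-generated samples, so neither constraint (no original data, no network growth) is violated.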