Can one inject new concepts into an already trained generative model, while respecting its existing structure and knowledge? We propose a new task - domain expansion - to address this. Given a pretrained generator and novel (but related) domains, we expand the generator to jointly model all domains, old and new, harmoniously. First, we note the generator contains a meaningful, pretrained latent space. Is it possible to minimally perturb this hard-earned representation, while maximally representing the new domains? Interestingly, we find that the latent space offers unused, "dormant" directions, which do not affect the output. This provides an opportunity: By "repurposing" these directions, we can represent new domains without perturbing the original representation. In fact, we find that pretrained generators have the capacity to add several - even hundreds - of new domains! Using our expansion method, one "expanded" model can supersede numerous domain-specific models, without expanding the model size. Additionally, a single expanded generator natively supports smooth transitions between domains, as well as composition of domains. Code and project page available at https://yotamnitzan.github.io/domain-expansion/.
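The abstract's key mechanism is that some latent directions are "dormant": moving along them barely changes the generated image, so they can be repurposed for new domains. The sketch below is only an illustration of that idea, not the authors' actual procedure; it ranks candidate latent directions of a placeholder generator (here a toy `ToyGenerator`, standing in for a pretrained model such as StyleGAN) by how strongly perturbing them changes the output, so the lowest-scoring ones would be the "dormant" candidates. The canonical-basis directions and the pixel-space L2 distance are simplifying assumptions; a perceptual metric or a PCA-derived basis could be substituted.

```python
# Illustrative sketch (not the paper's exact method): score latent directions
# by how much perturbing them changes the generator's output. Directions with
# near-zero effect are "dormant" and could, in principle, be repurposed to
# encode new domains without disturbing the original representation.
import torch


class ToyGenerator(torch.nn.Module):
    """Placeholder for a pretrained generator G: latent code -> image."""

    def __init__(self, z_dim=512, out_dim=3 * 64 * 64):
        super().__init__()
        self.fc = torch.nn.Linear(z_dim, out_dim)

    def forward(self, z):
        return self.fc(z)


@torch.no_grad()
def rank_directions(G, z_dim=512, n_samples=64, step=3.0, device="cpu"):
    """Score each canonical-basis direction by the mean output change it
    induces when added to sampled latents; small scores suggest dormancy."""
    z = torch.randn(n_samples, z_dim, device=device)
    base = G(z)
    scores = []
    for i in range(z_dim):
        d = torch.zeros(z_dim, device=device)
        d[i] = step
        delta = (G(z + d) - base).flatten(1).norm(dim=1).mean()
        scores.append(delta.item())
    order = sorted(range(z_dim), key=lambda i: scores[i])
    return order, scores  # order[0] is the most dormant direction


if __name__ == "__main__":
    G = ToyGenerator()
    order, scores = rank_directions(G)
    print("5 most dormant directions:", order[:5])
```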