Can one inject new concepts into an already trained generative model, while respecting its existing structure and knowledge? We propose a new task - domain expansion - to address this. Given a pretrained generator and novel (but related) domains, we expand the generator to jointly model all domains, old and new, harmoniously. First, we note that the generator contains a meaningful, pretrained latent space. Is it possible to minimally perturb this hard-earned representation, while maximally representing the new domains? Interestingly, we find that the latent space offers unused, "dormant" directions, which do not affect the output. This provides an opportunity: by "repurposing" these directions, we can represent new domains without perturbing the original representation. In fact, we find that pretrained generators have the capacity to add several new domains - even hundreds of them! Using our expansion method, one "expanded" model can supersede numerous domain-specific models, without increasing the model size. Additionally, a single expanded generator natively supports smooth transitions between domains, as well as composition of domains. Code and project page are available at https://yotamnitzan.github.io/domain-expansion/.
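To make the notion of "dormant" directions concrete, the following is a minimal sketch (not the authors' released code) of how one might locate latent directions that do not affect a StyleGAN-style generator's output. It assumes the StyleGAN2-ADA PyTorch interface (`G.mapping`, `G.synthesis`, `G.z_dim`, `G.num_ws`); the PCA-over-W strategy and the variance threshold below are illustrative assumptions, not a definitive implementation.

```python
import torch

@torch.no_grad()
def find_dormant_directions(G, num_samples=10_000, device="cuda"):
    """Return candidate 'dormant' directions in W space: directions along
    which sampled latent codes show (near-)zero variance, and which the
    synthesis network therefore effectively ignores."""
    z = torch.randn(num_samples, G.z_dim, device=device)
    w = G.mapping(z, None)[:, 0, :]              # one w code per sample, shape (N, w_dim)

    # PCA over sampled w codes: low-variance principal directions are unused.
    w_mean = w.mean(dim=0, keepdim=True)
    _, S, Vt = torch.linalg.svd(w - w_mean, full_matrices=False)
    variance = S ** 2 / (num_samples - 1)
    dormant_mask = variance < 1e-3 * variance.max()   # threshold is illustrative
    return Vt[dormant_mask], w_mean.squeeze(0)

@torch.no_grad()
def direction_effect(G, w_mean, direction, scale=5.0):
    """Mean absolute pixel change when shifting along a direction;
    for a truly dormant direction this should be near zero."""
    w = w_mean.unsqueeze(0)
    base = G.synthesis(w.unsqueeze(1).repeat(1, G.num_ws, 1))
    shifted = G.synthesis((w + scale * direction).unsqueeze(1).repeat(1, G.num_ws, 1))
    return (base - shifted).abs().mean().item()
```

Under this sketch, "repurposing" a dormant direction would mean fine-tuning the generator so that latent codes shifted along that direction produce the new domain, while codes in the original subspace are left untouched; the exact training objective is beyond the scope of the abstract.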