Deep generative models make visual content creation more accessible to novice users by automating the synthesis of diverse, realistic content based on a collected dataset. However, current machine learning approaches miss a key element of the creative process -- the ability to synthesize things that go far beyond the data distribution and everyday experience. To begin to address this issue, we enable a user to "warp" a given model by editing just a handful of original model outputs with desired geometric changes. Our method applies a low-rank update to a single model layer to reconstruct the edited examples. Furthermore, to combat overfitting, we propose a latent space augmentation method based on style-mixing. Our method allows a user to create a model that synthesizes endless objects with defined geometric changes, enabling the creation of a new generative model without the burden of curating a large-scale dataset. We also demonstrate that edited models can be composed to achieve aggregated effects, and we present an interactive interface that lets users create new models through composition. Empirical measurements on multiple test cases suggest the advantage of our method over recent GAN fine-tuning methods. Finally, we showcase several applications of the edited models, including latent space interpolation and image editing.
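To make the core mechanism concrete, below is a minimal PyTorch sketch of a low-rank update applied to a single frozen layer. The class name `LowRankUpdate`, the rank, the layer shapes, and the plain L2 reconstruction loss are all illustrative assumptions, not the paper's exact implementation; in practice the wrapped layer would sit inside the generator and the targets would come from the user-edited outputs.

```python
import torch
import torch.nn as nn

class LowRankUpdate(nn.Module):
    """Adds a trainable low-rank residual U @ V to a frozen layer's weight."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                     # freeze original weights
        out_dim, in_dim = base.weight.shape
        self.U = nn.Parameter(torch.zeros(out_dim, rank))        # zero-init: no change at start
        self.V = nn.Parameter(torch.randn(rank, in_dim) * 0.01)

    def forward(self, x):
        # y = (W + U V) x + b, computed as the frozen layer plus the low-rank term
        return self.base(x) + x @ (self.V.t() @ self.U.t())

# Hypothetical usage: optimize only U and V so the wrapped layer reproduces the
# handful of user-edited examples (stand-in tensors shown for brevity).
layer = LowRankUpdate(nn.Linear(512, 512), rank=4)
opt = torch.optim.Adam([layer.U, layer.V], lr=1e-3)
latents = torch.randn(5, 512)   # latents of the edited examples
targets = torch.randn(5, 512)   # features reconstructing the edits (stand-in)
for _ in range(200):
    opt.zero_grad()
    loss = ((layer(latents) - targets) ** 2).mean()
    loss.backward()
    opt.step()
```

Because only the rank-4 factors are trained, the update touches a tiny fraction of the model's parameters, which is one plausible way to fit a handful of edits without collapsing the rest of the generator.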
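The style-mixing augmentation can be sketched in the same spirit: StyleGAN-family generators consume a separate copy of the latent code at every layer, so the latent of an edited example can be recombined with freshly sampled codes to produce many varied training targets from only a few edits. The cutoff index, layer count, and tensor shapes below are assumptions for illustration.

```python
import torch

def style_mixing_augment(w_edit: torch.Tensor,
                         w_rand: torch.Tensor,
                         cutoff: int) -> torch.Tensor:
    """Mix per-layer latents: keep the edited example's codes for layers
    before `cutoff`, and substitute freshly sampled codes afterwards.

    w_edit, w_rand: (num_layers, latent_dim) per-layer latent codes.
    """
    w_mixed = w_edit.clone()
    w_mixed[cutoff:] = w_rand[cutoff:]   # swap the later (finer-scale) styles
    return w_mixed

# Hypothetical usage with 14 generator layers and 512-dim latents:
w_edit = torch.randn(14, 512)   # latent of a user-edited example
w_rand = torch.randn(14, 512)   # freshly sampled latent
w_aug = style_mixing_augment(w_edit, w_rand, cutoff=8)
```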