Can a user create a deep generative model by sketching a single example? Traditionally, creating a GAN model has required the collection of a large-scale dataset of exemplars and specialized knowledge in deep learning. In contrast, sketching is possibly the most universally accessible way to convey a visual concept. In this work, we present a method, GAN Sketching, for rewriting GANs with one or more sketches, to make GAN training easier for novice users. In particular, we change the weights of an original GAN model according to user sketches. We encourage the model's output to match the user sketches through a cross-domain adversarial loss. Furthermore, we explore different regularization methods to preserve the original model's diversity and image quality. Experiments show that our method can mold GANs to match shapes and poses specified by sketches while maintaining realism and diversity. Finally, we demonstrate a few applications of the resulting GAN, including latent space interpolation and image editing.
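The core idea above, fine-tuning generator weights under a sketch-matching loss plus a regularizer that preserves the original model, can be sketched schematically. The snippet below is a toy illustration only: the "generator" is a single scalar weight, and quadratic stand-ins replace the paper's cross-domain adversarial loss and regularization terms, so the structure of the update is visible without any deep-learning framework.

```python
def total_loss(w, w_orig, sketch_target, lam=0.1):
    # Toy "sketch-matching" term: pulls the generator toward the user
    # sketch (stand-in for the cross-domain adversarial loss).
    sketch_term = (w - sketch_target) ** 2
    # Toy regularizer: keeps weights near the original pretrained model,
    # standing in for the diversity/quality-preserving regularization.
    reg_term = lam * (w - w_orig) ** 2
    return sketch_term + reg_term

def finetune(w_orig, sketch_target, lr=0.1, steps=200, lam=0.1):
    w = w_orig
    eps = 1e-5
    for _ in range(steps):
        # Numerical gradient of the toy loss (central difference).
        g = (total_loss(w + eps, w_orig, sketch_target, lam)
             - total_loss(w - eps, w_orig, sketch_target, lam)) / (2 * eps)
        w -= lr * g  # gradient-descent weight update
    return w

w_new = finetune(w_orig=0.0, sketch_target=1.0)
```

Because of the regularizer, the tuned weight settles between the original model (0.0) and the sketch target (1.0) rather than matching the sketch exactly, mirroring the trade-off between matching user sketches and preserving the original model's behavior.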