We present SeamlessGAN, a method capable of automatically generating tileable texture maps from a single input exemplar. In contrast to most existing methods, which focus solely on the synthesis problem, our work tackles both synthesis and tileability simultaneously. Our key idea is the observation that tiling the latent space of a generative network trained with adversarial expansion techniques produces outputs that are continuous at the seam intersections, and these outputs can then be turned into tileable images by cropping the central area. Since not every latent value yields a high-quality output, we leverage the discriminator as a perceptual error metric capable of identifying artifact-free textures during a sampling process. Further, in contrast to previous work on deep texture synthesis, our model is designed and optimized to work with multi-layered texture representations, enabling textures composed of multiple maps such as albedo, normals, etc. We extensively test our design choices for the network architecture, loss function, and sampling parameters. We show, qualitatively and quantitatively, that our approach outperforms previous methods and works for textures of different types.
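To make the latent-tiling and discriminator-guided sampling idea concrete, the following is a minimal sketch in PyTorch. It assumes a fully convolutional generator `G` and a discriminator `D`; all names, shapes, and the `sample_tileable` helper are hypothetical illustrations of the idea described above, not the actual SeamlessGAN implementation.

```python
# Hypothetical sketch: tile a spatial latent, generate, crop the center,
# and keep the candidate the discriminator scores highest.
import torch

def sample_tileable(G, D, n_candidates=16, latent_ch=64, latent_res=8):
    best_score, best_crop = float("-inf"), None
    for _ in range(n_candidates):
        # Draw a spatial latent and tile it 2x2 so the generator sees the
        # same content wrapping around both axes.
        z = torch.randn(1, latent_ch, latent_res, latent_res)
        z_tiled = z.repeat(1, 1, 2, 2)  # (1, C, 2H, 2W)

        x = G(z_tiled)  # synthesized texture with continuity at the seams
        _, _, H, W = x.shape
        # Crop the central region: its borders come from equivalent latent
        # positions, so opposite edges match when the crop is tiled.
        crop = x[:, :, H // 4: 3 * H // 4, W // 4: 3 * W // 4]

        # Use the discriminator score as a perceptual quality proxy to
        # reject candidates with visible artifacts.
        score = D(crop).mean().item()
        if score > best_score:
            best_score, best_crop = score, crop
    return best_crop
```

In this sketch the discriminator acts only as a ranking signal over candidate crops; the sampling loop simply keeps the best-scoring one, mirroring the paper's use of the discriminator as a perceptual error metric.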