Synthesizing novel photo-realistic textures is an important task for generating new scenes, including asset generation for 3D simulations. However, to date, these methods predominantly generate textured objects in 2D space. If we rely on 2D object generation, then we need to make a computationally expensive forward pass each time the camera viewpoint or lighting changes. Recent work that can generate textures in 3D requires 3D component segmentation that is expensive to acquire. In this work, we present a novel conditional generative architecture, which we call a graph generative adversarial network (GGAN), that can generate textures in 3D by learning object component information in an unsupervised way. In this framework, we do not need an expensive forward pass whenever the camera viewpoint or lighting changes, and we do not need expensive 3D part information for training, yet the model can generalize to unseen 3D meshes and generate appropriate novel 3D textures. We compare this approach against state-of-the-art texture generation methods and demonstrate that the GGAN obtains significantly better texture generation quality (according to Fréchet inception distance). We release our model source code as open source.