Novel texture synthesis for existing 3D mesh models is an important step towards photorealistic asset generation for existing simulators. However, existing methods inherently work in 2D image space, which is the projection of the 3D scene from a given camera perspective. These methods take the camera angle, 3D model information, and lighting information as input and generate a photorealistic 2D image; to produce an image from another perspective or under different lighting, a computationally expensive forward pass must be repeated each time a parameter changes. It is also hard for such methods to generate images for a simulator that satisfy its temporal constraints, where successive frames must remain consistent while only the viewpoint or lighting changes as desired. Moreover, their output cannot be directly integrated into existing tools such as Blender and Unreal Engine, and a manual solution is expensive and time consuming. We therefore present a new system, a graph generative adversarial network (GGAN), that generates textures which can be directly applied to a given 3D mesh model in tools such as Blender and Unreal Engine and then simulated from any perspective and lighting condition with ease.