2D images are observations of the 3D physical world, depicted through its geometry, material, and illumination components. Recovering these underlying intrinsic components from 2D images, also known as inverse rendering, usually requires a supervised setting with paired images collected from multiple viewpoints and lighting conditions, which is resource-demanding. In this work, we present GAN2X, a new method for unsupervised inverse rendering that uses only unpaired images for training. Unlike previous Shape-from-GAN approaches that mainly focus on 3D shapes, we make the first attempt to also recover non-Lambertian material properties by exploiting the pseudo-paired data generated by a GAN. To achieve precise inverse rendering, we devise a specularity-aware neural surface representation that continuously models the geometry and material properties. A shading-based refinement technique is adopted to further distill information from the target image and recover finer details. Experiments demonstrate that GAN2X can accurately decompose 2D images into 3D shape, albedo, and specular properties for different object categories, and achieves state-of-the-art performance for unsupervised single-view 3D face reconstruction. We also show its applications in downstream tasks, including real image editing and lifting 2D GANs to decomposed 3D GANs.
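To make the decomposition concrete, the sketch below shows how a pixel color can be composed from the intrinsic components the abstract names (shape via the surface normal, diffuse albedo, and non-Lambertian specular terms). It uses a classic Phong shading model purely for illustration; the function name `phong_shade` and the specific parameters (`ks`, `shininess`, `ambient`) are assumptions for this example, not the paper's actual rendering equation.

```python
import numpy as np

def phong_shade(albedo, normal, light_dir, view_dir,
                ks=0.5, shininess=20.0, ambient=0.1):
    """Illustrative Phong model: color = diffuse (Lambertian) + specular term.

    albedo    : (3,) RGB diffuse reflectance
    normal    : (3,) unit surface normal (the recovered geometry)
    light_dir : (3,) unit vector toward the light
    view_dir  : (3,) unit vector toward the camera
    ks, shininess : non-Lambertian specular strength and sharpness
    """
    # Lambertian diffuse term: clamp n·l at zero for back-facing light.
    diffuse = max(float(normal @ light_dir), 0.0)
    # Mirror the light direction about the normal for the specular lobe.
    reflect = 2.0 * float(normal @ light_dir) * normal - light_dir
    spec = max(float(reflect @ view_dir), 0.0) ** shininess
    return albedo * (ambient + diffuse) + ks * spec

# Frontal light and camera: both diffuse and specular terms are maximal.
n = np.array([0.0, 0.0, 1.0])
color = phong_shade(np.array([0.8, 0.6, 0.5]), n, n, n)
```

Inverse rendering runs this composition backwards: given only the observed `color`, recover `albedo`, `normal`, and the specular parameters, which is why paired multi-view, multi-lighting supervision is normally needed to disambiguate them.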