In this paper we present SurfaceNet, an approach for estimating spatially-varying bidirectional reflectance distribution function (SVBRDF) material properties from a single image. We pose the problem as an image translation task and propose a novel patch-based generative adversarial network (GAN) that produces high-quality, high-resolution surface reflectance maps. Adopting the GAN paradigm serves a twofold objective: 1) it allows the model to recover finer details than standard translation models; 2) it reduces the domain shift between synthetic and real data distributions in an unsupervised way. An extensive evaluation, carried out on a public benchmark of synthetic and real images under different illumination conditions, shows that SurfaceNet largely outperforms existing SVBRDF reconstruction methods, both quantitatively and qualitatively. Furthermore, SurfaceNet exhibits a remarkable ability to generate high-quality maps from real samples without any supervision at training time.
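To make the two ingredients mentioned in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of an image-to-image translation generator that maps a single photograph to SVBRDF reflectance maps, paired with a patch-based discriminator that judges realism locally, per patch, rather than over the whole image. The layer sizes, channel splits (diffuse, normals, roughness, specular), and class names are illustrative assumptions and do not reproduce the actual SurfaceNet architecture.

```python
# Hypothetical sketch: translation generator + patch-based discriminator.
# All hyperparameters and the map layout are assumptions for illustration.
import torch
import torch.nn as nn


class SVBRDFGenerator(nn.Module):
    """Toy encoder-decoder that translates an RGB image into four
    reflectance maps: diffuse (3), normals (3), roughness (1), specular (3)."""

    def __init__(self, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, 3 + 3 + 1 + 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        maps = self.decoder(self.encoder(x))
        # Split the output tensor into the individual reflectance maps.
        diffuse, normals, roughness, specular = torch.split(maps, [3, 3, 1, 3], dim=1)
        return diffuse, normals, roughness, specular


class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic: the output is a grid of logits, one per
    receptive-field patch, so realism is enforced on local detail."""

    def __init__(self, in_ch=3 + 10, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, image, maps):
        # Condition the critic on the input photo concatenated with the maps.
        return self.net(torch.cat([image, maps], dim=1))


if __name__ == "__main__":
    photo = torch.randn(1, 3, 256, 256)  # single input image
    gen, disc = SVBRDFGenerator(), PatchDiscriminator()
    d, n, r, s = gen(photo)
    patch_logits = disc(photo, torch.cat([d, n, r, s], dim=1))
    print(d.shape, n.shape, r.shape, s.shape, patch_logits.shape)
```

Because the discriminator scores a grid of patches instead of emitting a single image-level score, its adversarial signal pushes the generator toward sharper local detail; applied to unlabeled real photographs, the same adversarial signal can also be used to narrow the synthetic-to-real domain gap without SVBRDF supervision, which is the twofold objective stated above.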