Recent works have shown exciting results in unsupervised image de-rendering -- learning to decompose 3D shape, appearance, and lighting from single-image collections without explicit supervision. However, many of these assume simplistic material and lighting models. We propose a method, termed RADAR, that can recover environment illumination and surface materials from real single-image collections, relying neither on explicit 3D supervision nor on multi-view or multi-light images. Specifically, we focus on rotationally symmetric artefacts, such as vases, that exhibit challenging surface properties including specular reflections. We introduce a novel self-supervised albedo discriminator, which allows the model to recover plausible albedo without requiring any ground truth during training. In conjunction with a shape reconstruction module exploiting rotational symmetry, we present an end-to-end learning framework that is able to de-render the world's revolutionary artefacts. We conduct experiments on a real vase dataset and demonstrate compelling decomposition results, allowing for applications including free-viewpoint rendering and relighting.
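The rotational-symmetry prior underlying the shape reconstruction module can be pictured as a surface of revolution: a 1D radial profile swept around a vertical axis. The sketch below is a minimal, hypothetical illustration of that geometric idea (not the paper's implementation); the function name, sampling resolution, and the example profile are assumptions made purely for demonstration.

```python
import numpy as np

def surface_of_revolution(radii, num_angles=64):
    """Build a 3D point grid for a surface of revolution.

    radii: 1D array of radial distances sampled along the vertical axis
           (e.g. a profile curve of a vase-like object).
    Returns an (H, num_angles, 3) array obtained by sweeping the profile
    a full 360 degrees around the y-axis.
    """
    radii = np.asarray(radii, dtype=np.float32)
    heights = np.linspace(0.0, 1.0, len(radii), dtype=np.float32)
    thetas = np.linspace(0.0, 2.0 * np.pi, num_angles, endpoint=False)

    # Each row of the profile becomes one ring of the swept surface.
    x = radii[:, None] * np.cos(thetas)[None, :]
    z = radii[:, None] * np.sin(thetas)[None, :]
    y = np.repeat(heights[:, None], num_angles, axis=1)
    return np.stack([x, y, z], axis=-1)

# Example: a crude vase-like profile (bulging body, narrow neck).
profile = np.array([0.30, 0.45, 0.50, 0.45, 0.30, 0.20, 0.25])
points = surface_of_revolution(profile)
print(points.shape)  # (7, 64, 3)
```

Because the full 3D surface is determined by a single profile curve, reconstructing shape from one image reduces to estimating that 1D profile, which is what makes the single-image setting tractable for this object class.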