We present SOLID-Net, a neural network for spatially-varying outdoor lighting estimation at any 2D pixel location from a single outdoor image. Previous work has used a unified sky environment map to represent outdoor lighting. Instead, we generate spatially-varying local lighting environment maps by combining a global sky environment map with warped image information, guided by geometric cues estimated from intrinsic decomposition. As no outdoor dataset with paired images and local lighting ground truth is readily available, we introduce the SOLID-Img dataset of physically-based rendered images with their corresponding intrinsic and lighting information. We train a deep neural network to regress intrinsic cues under physically-based constraints and use them to estimate both global and local lighting. Experiments on both synthetic and real datasets show that SOLID-Net significantly outperforms previous methods.