Inverse rendering is an ill-posed problem. Previous work has sought to resolve this by focusing on priors for object or scene shape or appearance. In this work, we instead focus on a prior for natural illumination. Current methods rely on spherical harmonic lighting or other generic representations and, at best, a simplistic prior on the parameters. We propose a conditional neural field representation based on a variational auto-decoder with a SIREN network and, extending Vector Neurons, build equivariance directly into the network. Using this, we develop a rotation-equivariant, high dynamic range (HDR) neural illumination model that is compact and able to express complex, high-frequency features of natural environment maps. Training our model on a curated dataset of 1.6K HDR environment maps of natural scenes, we compare it against traditional representations, demonstrate its applicability to an inverse rendering task, and show environment map completion from partial observations. A PyTorch implementation, our dataset and trained models can be found at jadgardner.github.io/RENI.
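To make the core idea concrete, below is a minimal, hypothetical sketch of a SIREN-based conditional neural field for illumination: a direction on the sphere is concatenated with a latent code (the auto-decoded conditioning variable) and mapped to HDR radiance. This is an illustrative toy only, not the paper's actual architecture; the layer sizes, the 16-dimensional latent, and the `omega_0` value follow the standard SIREN initialization scheme rather than anything specified in the abstract, and it omits the equivariance machinery built from Vector Neurons.

```python
import torch
import torch.nn as nn


class SirenLayer(nn.Module):
    """One SIREN layer: a linear map followed by a sine activation.

    omega_0 scales the pre-activation and controls how high-frequency
    the learned signal can be; the uniform weight initialization follows
    the scheme proposed in the original SIREN paper.
    """

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = (6.0 / in_features) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))


# Toy conditional field: (unit direction, latent code) -> HDR RGB radiance.
# Latent dimension and widths are arbitrary choices for this sketch.
LATENT_DIM = 16
field = nn.Sequential(
    SirenLayer(3 + LATENT_DIM, 64, is_first=True),
    SirenLayer(64, 64),
    nn.Linear(64, 3),  # unbounded output, suitable for HDR values
)

# Query a batch of 8 random directions under one hypothetical latent code.
dirs = torch.randn(8, 3)
dirs = dirs / dirs.norm(dim=-1, keepdim=True)
z = torch.randn(1, LATENT_DIM).expand(8, -1)
rgb = field(torch.cat([dirs, z], dim=-1))
print(rgb.shape)
```

In an auto-decoder setup, the latent code `z` would itself be optimized per environment map during training, which is what allows test-time fitting to partial observations (environment map completion).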