Human relighting is a highly desirable yet challenging task. Existing works either require expensive one-light-at-a-time (OLAT) data captured with a light stage or cannot freely change the viewpoint of the rendered body. In this work, we propose a principled framework, Relighting4D, that enables free-viewpoint relighting from only human videos under unknown illuminations. Our key insight is that the space-time varying geometry and reflectance of the human body can be decomposed into a set of neural fields of normal, occlusion, diffuse, and specular maps. These neural fields are further integrated into reflectance-aware physically based rendering, where each vertex in the neural field absorbs and reflects light from the environment. The whole framework can be learned from videos in a self-supervised manner, with physically informed priors designed for regularization. Extensive experiments on both real and synthetic datasets demonstrate that our framework is capable of relighting dynamic human actors from free viewpoints.
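The physically based rendering step described above — each surface point absorbing and reflecting environment light, modulated by its predicted normal, occlusion, diffuse, and specular values — can be sketched with a generic shading sum. This is a minimal illustration assuming a Lambertian diffuse term plus a simple Blinn-Phong-style specular lobe, not the paper's exact BRDF; all function and parameter names here are illustrative.

```python
import numpy as np

def render_radiance(normal, albedo, specular, visibility, view_dir,
                    light_dirs, light_intensity):
    """Illustrative sketch: shade one surface point by summing
    contributions from sampled environment light directions.
    Each direction's contribution is a diffuse (Lambertian) term
    plus a Blinn-Phong-style specular term, scaled by the cosine
    foreshortening factor and masked by per-direction visibility
    (the occlusion field)."""
    radiance = np.zeros(3)
    for wi, Li, vis in zip(light_dirs, light_intensity, visibility):
        cos_theta = max(np.dot(normal, wi), 0.0)
        if cos_theta == 0.0 or vis == 0.0:
            continue  # light arrives from behind or is occluded
        # Half-vector for the specular highlight
        half = wi + view_dir
        half /= np.linalg.norm(half)
        diffuse = albedo / np.pi  # energy-conserving Lambertian BRDF
        spec = specular * max(np.dot(normal, half), 0.0) ** 32
        radiance += vis * Li * (diffuse + spec) * cos_theta
    return radiance
```

In the full framework, `normal`, `albedo`, `specular`, and `visibility` would each be queried from a neural field at the point's space-time coordinate, and the environment light would itself be an optimizable variable, so the whole sum stays differentiable for self-supervised training.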