Reflectance Transformation Imaging (RTI) is a popular technique that allows the recovery of per-pixel reflectance information by capturing an object under different light conditions. This information can later be used to reveal surface details and interactively relight the subject. Such a process, however, typically requires dedicated hardware setups to recover the light direction from multiple locations, making it tedious to perform outside the lab. We propose a novel RTI method that can be carried out by recording videos with two ordinary smartphones. The flash LED of one device illuminates the subject while the other captures the reflectance. Since the LED is mounted close to the camera lens, we can infer the light direction for thousands of images by freely moving the illuminating device while observing a fiducial marker surrounding the subject. To deal with such an amount of data, we propose a neural relighting model that reconstructs object appearance for arbitrary light directions from extremely compact reflectance distribution data compressed via Principal Component Analysis (PCA). Experiments show that the proposed technique can easily be performed in the field, yielding an RTI model that can outperform state-of-the-art approaches involving dedicated hardware setups.
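To make the PCA-plus-neural-relighting idea concrete, the following minimal sketch (plain Python with NumPy and scikit-learn, not the authors' implementation) compresses per-pixel reflectance observations with PCA and fits a small MLP that maps a light direction and a pixel's PCA coefficients to a relit intensity. All array shapes, layer sizes, and variable names are illustrative assumptions, and the toy data stands in for the intensities recovered from the smartphone videos.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy stand-ins for the captured data: each pixel is observed under many
# light directions (random values here instead of real video frames).
n_pixels, n_lights, n_components = 200, 100, 8
observations = rng.random((n_pixels, n_lights))          # per-pixel intensities
light_dirs = rng.uniform(-1.0, 1.0, size=(n_lights, 2))  # (lx, ly) per frame

# Compress each pixel's reflectance distribution to a few PCA coefficients.
pca = PCA(n_components=n_components)
coeffs = pca.fit_transform(observations)                 # (n_pixels, n_components)

# Training pairs: input = [light direction, pixel PCA coefficients],
# target = observed intensity of that pixel under that light.
X = np.concatenate([np.repeat(light_dirs, n_pixels, axis=0),
                    np.tile(coeffs, (n_lights, 1))], axis=1)
y = observations.T.reshape(-1)                           # same (light, pixel) ordering

relight_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
relight_net.fit(X, y)

# Relight every pixel under a new, unseen light direction.
new_dir = np.array([0.3, -0.2])
X_new = np.concatenate([np.tile(new_dir, (n_pixels, 1)), coeffs], axis=1)
relit = relight_net.predict(X_new)                       # one predicted value per pixel

Here the PCA coefficients stand in for the paper's compact per-pixel reflectance encoding, so only the small network and a few coefficients per pixel would need to be stored for relighting; the actual architecture and training procedure in the paper may differ.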