We propose the first approach for the decomposition of a monocular color video into direct and indirect illumination components in real time. We retrieve, in separate layers, the contribution made to the scene appearance by the scene reflectance, the light sources and the reflections from various coherent scene regions to one another. Existing techniques that invert global light transport require image capture under multiplexed controlled lighting, or only enable the decomposition of a single image at slow off-line frame rates. In contrast, our approach works for regular videos and produces temporally coherent decomposition layers at real-time frame rates. At the core of our approach are several sparsity priors that enable the estimation of the per-pixel direct and indirect illumination layers based on a small set of jointly estimated base reflectance colors. The resulting variational decomposition problem uses a new formulation based on sparse and dense sets of non-linear equations that we solve efficiently using a novel alternating data-parallel optimization strategy. We evaluate our approach qualitatively and quantitatively, and show improvements over the state of the art in this field, in both quality and runtime. In addition, we demonstrate various real-time appearance editing applications for videos with consistent illumination.
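To make the layer decomposition concrete, the following is a minimal sketch of a per-pixel image formation model of the kind described above; the symbols and the exact form of the priors are illustrative assumptions, not the paper's actual formulation:
\[
  I_p \;\approx\; R_p \odot \Big( L^{\mathrm{dir}}_p \;+\; \sum_{k=1}^{K} \beta_{k,p}\, b_k \Big),
  \qquad
  R_p \;=\; \sum_{k=1}^{K} \alpha_{k,p}\, b_k ,
\]
where $I_p$ is the observed color at pixel $p$, $b_1,\dots,b_K$ are the jointly estimated base reflectance colors, $\alpha_{k,p}$ are sparse per-pixel reflectance weights, $L^{\mathrm{dir}}_p$ is the direct illumination layer, and $\beta_{k,p}$ collects indirect light bounced from scene regions of base color $b_k$. A data term of this form, combined with sparsity priors on $\alpha$ and $\beta$, could then be minimized by alternating between global updates of the base colors and data-parallel per-pixel updates of the layer variables, in the spirit of the alternating optimization strategy described above.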