While current multi-frame restoration methods combine information from multiple input images using 2D alignment techniques, recent advances in novel view synthesis are paving the way for a new paradigm relying on volumetric scene representations. In this work, we introduce the first 3D-based multi-frame denoising method that significantly outperforms its 2D-based counterparts while requiring less computation. Our method extends the multiplane image (MPI) framework for novel view synthesis by introducing a learnable encoder-renderer pair that manipulates multiplane representations in feature space. The encoder fuses information across views and operates in a depth-wise manner, while the renderer fuses information across depths and operates in a view-wise manner. The two modules are trained end-to-end and learn to separate depths in an unsupervised way, giving rise to Multiplane Feature (MPF) representations. Experiments on the Spaces and Real Forward-Facing datasets as well as on raw burst data validate our approach for view synthesis, multi-frame denoising, and view synthesis under noisy conditions.
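The abstract's division of labor can be sketched in a few lines: the encoder fuses across views independently at each depth plane, and the renderer composites across depths independently for each target view. This is a minimal toy sketch under stated assumptions, not the paper's implementation: the function names are hypothetical, mean pooling stands in for the learned encoder, and standard back-to-front alpha compositing stands in for the learned renderer.

```python
import numpy as np

def encode_mpf(view_planes):
    # view_planes: (V, D, C, H, W) per-view multiplane features.
    # Depth-wise encoder: fuse information across the V views
    # independently at each depth plane. A simple mean stands in
    # for the learned fusion described in the paper.
    return view_planes.mean(axis=0)  # -> (D, C, H, W)

def render_view(mpf, alphas):
    # mpf:    (D, C, H, W) multiplane features, ordered back-to-front.
    # alphas: (D, 1, H, W) per-plane opacities in [0, 1].
    # View-wise renderer: fuse information across the D depths via
    # standard over-compositing (a stand-in for the learned renderer).
    out = np.zeros_like(mpf[0])
    for feat, alpha in zip(mpf, alphas):
        out = feat * alpha + out * (1.0 - alpha)
    return out  # -> (C, H, W)

# Toy shapes: 3 input views, 4 depth planes, 2 feature channels, 5x5 pixels.
views = np.ones((3, 4, 2, 5, 5))
mpf = encode_mpf(views)          # multiplane feature volume, (4, 2, 5, 5)
image = render_view(mpf, np.full((4, 1, 5, 5), 0.5))  # (2, 5, 5)
```

In the actual method both modules are learned networks trained end-to-end, and denoising falls out of the cross-view fusion: averaging aligned features across noisy input views suppresses noise before rendering.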