In this paper, a novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views. We show that the reconstruction can be efficiently modeled as angular restoration on an epipolar plane image (EPI). The main problem in direct reconstruction on the EPI is an information asymmetry between the spatial and angular dimensions, where the detail in the angular dimension is damaged by undersampling. Directly upsampling or super-resolving the light field in the angular dimension causes ghosting effects. To suppress these ghosting effects, we contribute a novel "blur-restoration-deblur" framework. First, the "blur" step extracts the low-frequency components of the light field in the spatial dimensions by convolving each EPI slice with a selected blur kernel. Then, the "restoration" step is implemented by a CNN trained to restore the angular details of the EPI. Finally, a non-blind "deblur" operation recovers the spatial high frequencies suppressed by the EPI blur. We evaluate our approach on several datasets, including synthetic scenes, real-world scenes, and challenging microscope light field data, and demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms. We further show extended applications, including depth enhancement and interpolation for unstructured input. More importantly, a novel rendering approach is presented by combining the proposed framework with depth information to handle large disparities.
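To make the three steps concrete, the following is a minimal sketch of the "blur-restoration-deblur" pipeline applied to a single 2-D EPI of shape (angular, spatial). The Gaussian kernel width, the cubic interpolation standing in for the trained CNN, and the Wiener regularizer are illustrative assumptions, not the exact choices used in the paper.

```python
# Minimal sketch of the blur-restoration-deblur pipeline on one EPI slice.
# Assumptions: EPI is a 2-D array (angular, spatial); a Gaussian kernel is
# used as the spatial blur; cubic interpolation is a stand-in for the CNN.
import numpy as np
from scipy.ndimage import gaussian_filter1d, zoom

def blur_epi(epi, sigma=1.5):
    """Step 1 ("blur"): suppress spatial high frequencies along the spatial axis."""
    return gaussian_filter1d(epi, sigma=sigma, axis=1)

def restore_angular(epi_blurred, factor=4):
    """Step 2 ("restoration"): restore angular detail. Cubic interpolation
    stands in here for the trained CNN that super-resolves the angular axis."""
    return zoom(epi_blurred, zoom=(factor, 1), order=3)

def deblur_epi(epi_blurred, sigma=1.5, eps=1e-2):
    """Step 3 ("deblur"): non-blind 1-D Wiener deconvolution along the spatial
    axis, recovering the high frequencies removed in step 1 (kernel is known)."""
    n = epi_blurred.shape[1]
    x = np.arange(n) - n // 2
    psf = np.exp(-0.5 * (x / sigma) ** 2)
    psf /= psf.sum()
    H = np.fft.fft(np.fft.ifftshift(psf))      # spectrum of the known blur kernel
    W = np.conj(H) / (np.abs(H) ** 2 + eps)    # regularized Wiener filter
    return np.real(np.fft.ifft(np.fft.fft(epi_blurred, axis=1) * W, axis=1))

# Example: a sparsely sampled EPI with 3 views and 64 spatial samples.
sparse_epi = np.random.rand(3, 64)
dense_epi = deblur_epi(restore_angular(blur_epi(sparse_epi)))
print(dense_epi.shape)  # (12, 64): angularly upsampled EPI
```

In the full framework, the interpolation step is replaced by the trained CNN, and the pipeline is applied to every EPI slice of the light field in both angular directions.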