Light field (LF) cameras can record scenes from multiple perspectives, and thus introduce beneficial angular information for image super-resolution (SR). However, it is challenging to incorporate angular information due to the disparities among LF images. In this paper, we propose a deformable convolution network (namely, LF-DFnet) to handle the disparity problem for LF image SR. Specifically, we design an angular deformable alignment module (ADAM) for feature-level alignment. Based on ADAM, we further propose a collect-and-distribute approach to perform bidirectional alignment between the center-view feature and each side-view feature. Using our approach, angular information can be well incorporated and encoded into the features of each view, which benefits the SR reconstruction of all LF images. Moreover, we develop a baseline-adjustable LF dataset to evaluate SR performance under different disparity variations. Experiments on both public and our self-developed datasets have demonstrated the superiority of our method. Our LF-DFnet can generate high-resolution images with more faithful details and achieves state-of-the-art reconstruction accuracy. In addition, our LF-DFnet is more robust to disparity variations, an issue that has not been well addressed in the literature.
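The sketch below illustrates the core idea behind deformable-convolution-based feature alignment as described above: offsets are predicted from the concatenated center-view and side-view features, and the side-view feature is then deformably sampled so that it becomes aligned with the center view. This is not the authors' released implementation; the module name, channel sizes, and offset-prediction design are illustrative assumptions, and only the general alignment pattern follows the abstract.

```python
# Minimal sketch of deformable feature alignment (illustrative, not the official LF-DFnet code).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableAlign(nn.Module):
    """Aligns a side-view feature to a reference (center-view) feature."""

    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Offsets are predicted from both views so the sampling grid can
        # compensate for the disparity between them (2 offsets per kernel tap).
        self.offset_conv = nn.Conv2d(2 * channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, side_feat, center_feat):
        # side_feat, center_feat: (N, C, H, W) features of one side view and the center view.
        offset = self.offset_conv(torch.cat([side_feat, center_feat], dim=1))
        return self.deform_conv(side_feat, offset)


if __name__ == "__main__":
    align = DeformableAlign(channels=64)
    center = torch.randn(1, 64, 32, 32)
    side = torch.randn(1, 64, 32, 32)
    collected = align(side, center)    # side-view feature aligned to the center view ("collect")
    distributed = align(center, side)  # center-view feature aligned to a side view ("distribute")
    print(collected.shape, distributed.shape)
```

In this reading, the bidirectional collect-and-distribute scheme amounts to applying such an alignment step in both directions, so that angular information gathered at the center view is also propagated back to every side view.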