We present a novel method to reconstruct a spectral central view and its aligned disparity map from spatio-spectrally coded light fields. Since we do not reconstruct an intermediate full light field from the coded measurement, we refer to this as a principal reconstruction. The coded light fields correspond to those captured by a light field camera in the unfocused design with a spectrally coded microlens array. In this application, the spectrally coded light field camera can be interpreted as a single-shot spectral depth camera. We investigate several multi-task deep learning methods and propose a new auxiliary loss-based training strategy to enhance the reconstruction performance. The results are evaluated on a synthetic dataset as well as a new real-world spectral light field dataset that we captured using a custom-built camera, and are compared to state-of-the-art compressed sensing reconstruction and disparity estimation. We achieve a high reconstruction quality for both synthetic and real-world coded light fields. The disparity estimation quality is on par with, or even outperforms, state-of-the-art disparity estimation from uncoded RGB light fields.