The abundant spatial and angular information in light fields has enabled the development of multiple disparity estimation approaches. However, light field acquisition entails high storage and processing costs, limiting the use of this technology in practical applications. To overcome these drawbacks, compressive sensing (CS) theory has enabled the development of optical architectures that acquire a single coded light field measurement. This measurement is decoded using an optimization algorithm or a deep neural network, which entails a high computational cost. The traditional approach to disparity estimation from compressed light fields first recovers the entire light field and then applies a post-processing step, thus requiring long processing times. In contrast, this work proposes fast disparity estimation from a single compressed measurement by omitting the recovery step required in traditional approaches. Specifically, we propose to jointly optimize an optical architecture for acquiring a single coded light field snapshot and a convolutional neural network (CNN) for estimating the disparity maps. Experimentally, the proposed method estimates disparity maps comparable to those obtained from light fields reconstructed with deep learning approaches. Furthermore, the proposed method is 20 times faster in training and inference than the best method that estimates disparity from reconstructed light fields.
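To illustrate the idea of jointly optimizing the coding optics and the disparity estimator, the following is a minimal sketch, not the authors' actual architecture: a trainable per-view coding pattern collapses the light field into a single 2D snapshot, and a small CNN regresses the disparity map directly from that snapshot. The class name, layer sizes, tensor shapes, and the sigmoid-relaxed mask are illustrative assumptions.

```python
# Hypothetical sketch of end-to-end coded-snapshot disparity estimation (PyTorch).
import torch
import torch.nn as nn

class CodedSnapshotDisparityNet(nn.Module):
    def __init__(self, num_views=25, height=64, width=64):
        super().__init__()
        # Learnable per-view, per-pixel coding pattern (stand-in for the optical layer).
        self.code_logits = nn.Parameter(torch.randn(num_views, height, width))
        # Decoder CNN: single coded 2D measurement -> disparity map.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, light_field):
        # light_field: (B, num_views, H, W) grayscale sub-aperture images.
        code = torch.sigmoid(self.code_logits)                     # relax the mask to (0, 1)
        snapshot = (light_field * code).sum(dim=1, keepdim=True)   # single coded measurement
        return self.cnn(snapshot)                                  # disparity map, (B, 1, H, W)

# Toy usage: one gradient step jointly updates the coding pattern and the CNN weights.
model = CodedSnapshotDisparityNet()
lf = torch.rand(2, 25, 64, 64)            # synthetic light field batch
gt_disparity = torch.rand(2, 1, 64, 64)   # synthetic ground-truth disparity
loss = nn.functional.l1_loss(model(lf), gt_disparity)
loss.backward()
```

Because the coding pattern is an `nn.Parameter` of the same module, backpropagation through the snapshot formation updates the sensing code and the CNN together, which is the essence of the joint optical/network optimization described above; the light field recovery step is never performed.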