Light fields are a type of image data that capture both the spatial and angular information of a scene by recording light rays arriving from different directions. In this context, spatial information refers to features that remain constant across viewpoints, while angular information refers to features that vary between viewpoints. We propose a novel neural network that, by design, separates the angular and spatial information of a light field. The network represents spatial information with spatial kernels shared among all Sub-Aperture Images (SAIs), and angular information with a separate set of angular kernels for each SAI. To further improve the representational capacity of the network without increasing its parameter count, we also introduce angular kernel allocation and kernel tensor decomposition mechanisms. Extensive experiments demonstrate the benefits of this information separation: on the compression task, our network outperforms other state-of-the-art methods by a large margin. Moreover, the angular information can readily be transferred to other scenes to render dense views, demonstrating the success of the separation and a potential use case in view synthesis. We plan to release the code upon acceptance of the paper to encourage further research on this topic.
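The kernel-sharing scheme described above can be illustrated with a minimal sketch. The layer below is a hypothetical, simplified stand-in for the proposed design (the function name, shapes, and the use of 1x1 per-SAI kernels are our assumptions, not the paper's exact architecture): one spatial kernel set is shared across all SAIs, while each SAI gets its own angular kernels.

```python
import numpy as np

def sep_light_field_layer(lf, spatial_k, angular_k):
    """One hypothetical "separated" layer for a light field (illustrative only).

    lf:        (A, C, H, W)  stack of A sub-aperture images (SAIs)
    spatial_k: (C, C, 3, 3)  ONE 3x3 spatial kernel set, shared by every SAI
    angular_k: (A, C, C)     per-SAI 1x1 "angular" kernels
    """
    A, C, H, W = lf.shape
    pad = np.pad(lf, ((0, 0), (0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(lf)
    # Shared spatial convolution: identical weights for every SAI, so
    # view-independent (spatial) structure is modelled only once.
    for a in range(A):
        for co in range(C):
            for ci in range(C):
                for dy in range(3):
                    for dx in range(3):
                        out[a, co] += (spatial_k[co, ci, dy, dx]
                                       * pad[a, ci, dy:dy + H, dx:dx + W])
    # Per-SAI angular mixing: each view has its own channel-mixing
    # weights, capturing view-dependent (angular) variation.
    return np.einsum('aoc,achw->aohw', angular_k, out)
```

Note the parameter budget: the spatial kernels cost C*C*3*3 weights regardless of the number of views, and only the lightweight A*C*C angular kernels grow with the angular resolution.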