In this paper, we present a practical and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a continuous scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color. Adopting a 4D parameterization of the light field, we formulate it as a 4D function that maps 4D ray coordinates to their corresponding color values. We train a deep fully connected network to optimize this function, and the resulting scene-specific model is then used to synthesize novel views. Previous light field approaches usually require dense view sampling to reliably render high-quality novel views. Our method renders a novel view by sampling its rays and querying the color of each ray directly from the network, thus enabling fast light field rendering from a very sparse set of input images. Our method achieves state-of-the-art novel view synthesis results while maintaining an interactive frame rate.
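To make the described pipeline concrete, below is a minimal sketch, assuming a two-plane (u, v, s, t) parameterization, a PyTorch implementation, and hypothetical hyperparameters (hidden width, layer count, learning rate, batch size); it is an illustration of the idea of fitting a per-scene MLP from 4D ray coordinates to RGB, not the authors' exact network.

```python
# Sketch: a per-scene fully connected network that maps a 4D ray
# parameterization (u, v, s, t) to an RGB color, fit with an L2 loss
# against ground-truth ray colors sampled from the input views.
# All layer sizes and training settings here are illustrative assumptions.
import torch
import torch.nn as nn

class LightFieldMLP(nn.Module):
    def __init__(self, hidden_dim: int = 256, num_layers: int = 8):
        super().__init__()
        layers, in_dim = [], 4  # input: 4D ray coordinates (u, v, s, t)
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU(inplace=True)]
            in_dim = hidden_dim
        layers += [nn.Linear(in_dim, 3), nn.Sigmoid()]  # output: RGB in [0, 1]
        self.net = nn.Sequential(*layers)

    def forward(self, rays_4d: torch.Tensor) -> torch.Tensor:
        # rays_4d: (N, 4) batch of ray coordinates; returns (N, 3) colors.
        return self.net(rays_4d)

# Per-scene optimization over randomly sampled training rays.
model = LightFieldMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
for step in range(1000):
    rays = torch.rand(4096, 4)        # placeholder: sampled (u, v, s, t)
    target_rgb = torch.rand(4096, 3)  # placeholder: colors from input images
    loss = ((model(rays) - target_rgb) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Rendering a novel view then reduces to one forward pass per pixel ray,
# which is what allows interactive frame rates without dense view sampling.
```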