We explore a new strategy for few-shot novel view synthesis based on a neural light field representation. Given a target camera pose, an implicit neural network maps each ray directly to the color of its target pixel. The network is conditioned on local ray features generated by coarse volumetric rendering from an explicit 3D feature volume, which is built from the input images by a 3D ConvNet. Our method achieves competitive performance on synthetic and real MVS data with respect to state-of-the-art methods based on neural radiance fields, while rendering about 100 times faster.
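For illustration, here is a minimal PyTorch sketch of this pipeline under simplifying assumptions: the feature volume is taken as precomputed (a single `Conv3d` stands in for the 3D ConvNet), ray coordinates are normalized to the volume's [-1, 1] cube, and per-ray samples are aggregated by plain mean pooling. All module names, dimensions, and the aggregation scheme are illustrative choices, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightFieldHead(nn.Module):
    """Sketch of the described pipeline: features sampled along each ray
    from an explicit 3D feature volume condition an implicit network that
    maps the ray directly to a pixel color."""

    def __init__(self, feat_dim=32, hidden=256):
        super().__init__()
        # stand-in for the 3D ConvNet that produces/refines the feature volume
        self.volume_net = nn.Conv3d(feat_dim, feat_dim, 3, padding=1)
        # implicit light-field network: (ray, aggregated feature) -> RGB
        self.mlp = nn.Sequential(
            nn.Linear(6 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, volume, rays_o, rays_d, n_samples=16):
        # volume: (1, C, D, H, W); rays_o, rays_d: (R, 3) in [-1, 1] coords
        volume = self.volume_net(volume)
        # coarse sampling: points along each ray from origin toward direction
        t = torch.linspace(0.0, 1.0, n_samples, device=rays_o.device)
        pts = rays_o[:, None] + t[None, :, None] * rays_d[:, None]  # (R, S, 3)
        # trilinear lookup of local features at the sampled points
        grid = pts.view(1, -1, 1, 1, 3)
        feats = F.grid_sample(volume, grid, align_corners=True)
        feats = feats.view(volume.shape[1], rays_o.shape[0], n_samples)
        # aggregate samples into one ray feature (mean pooling, illustrative)
        ray_feat = feats.mean(dim=-1).t()                    # (R, C)
        ray = torch.cat([rays_o, rays_d], dim=-1)            # 6-D ray encoding
        return self.mlp(torch.cat([ray, ray_feat], dim=-1))  # (R, 3) colors
```

In this formulation each ray requires only a single forward pass through the light-field network to produce its final color, rather than a dense volumetric integration at render time, which is consistent with the large rendering speedup the abstract reports.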