Neural volumetric representations have become a widely adopted model for radiance fields in 3D scenes. These representations are fully implicit or hybrid function approximators of the instantaneous volumetric radiance in a scene, typically learned from multi-view captures of the scene. We investigate the new task of neural volume super-resolution: rendering high-resolution views corresponding to a scene captured at low resolution. To this end, we propose a neural super-resolution network that operates directly on the volumetric representation of the scene. This approach lets us exploit a key advantage of operating in the volumetric domain, namely the ability to guarantee consistent super-resolution across different viewing directions. To realize our method, we devise a novel 3D representation that hinges on multiple 2D feature planes. This allows us to super-resolve the 3D scene representation by applying 2D convolutional networks to the 2D feature planes. We validate the proposed method quantitatively and qualitatively on a diverse set of unseen 3D scenes, showing that it renders multi-view-consistent super-resolved views with a significant advantage over existing approaches.
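To make the plane-based design concrete, the sketch below shows how a hybrid scene representation built from three axis-aligned 2D feature planes can be super-resolved with an ordinary 2D convolutional network. This is a minimal illustration under our own assumptions, not the paper's implementation; all names (`PlanarRadianceField`, `PlaneSuperResolver`, `feature_dim`, `scale`) and architectural details are hypothetical.

```python
# Minimal sketch (not the authors' code): a scene is represented by three
# axis-aligned 2D feature planes plus a small MLP decoder, so that 3D
# super-resolution reduces to 2D convolutions on the planes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PlanarRadianceField(nn.Module):
    """Hybrid representation: xy/xz/yz feature planes + MLP decoder."""

    def __init__(self, resolution=128, feature_dim=32):
        super().__init__()
        # One learned feature plane per pair of axes.
        self.planes = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(1, feature_dim, resolution, resolution))
             for _ in range(3)]
        )
        self.decoder = nn.Sequential(
            nn.Linear(3 * feature_dim, 64), nn.ReLU(),
            nn.Linear(64, 4),  # RGB + density
        )

    def forward(self, xyz):
        # xyz: (N, 3) points in [-1, 1]^3.
        feats = []
        for plane, axes in zip(self.planes, ((0, 1), (0, 2), (1, 2))):
            # Project each 3D point onto the plane and sample bilinearly.
            uv = xyz[:, axes].view(1, -1, 1, 2)
            f = F.grid_sample(plane, uv, align_corners=True)
            feats.append(f.view(plane.shape[1], -1).t())  # (N, feature_dim)
        return self.decoder(torch.cat(feats, dim=-1))


class PlaneSuperResolver(nn.Module):
    """2D CNN that upsamples a feature plane; applied once per plane,
    so all viewing directions share the same super-resolved features."""

    def __init__(self, feature_dim=32, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feature_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=True),
            nn.Conv2d(64, feature_dim, 3, padding=1),
        )

    def forward(self, plane):
        return self.net(plane)


# Usage: super-resolve the planes once, then render from any viewpoint.
field = PlanarRadianceField()
sr = PlaneSuperResolver()
hi_res_planes = [sr(p) for p in field.planes]     # shared by all views
rgb_sigma = field(torch.rand(1024, 3) * 2 - 1)    # query the low-res field
```

Note the design point this illustrates: because upsampling is applied once to the shared planes rather than per rendered image, every camera ray samples the same super-resolved features, which is what makes consistency across viewing directions possible by construction.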