We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects. Our method can render 800x800 images at more than 150 FPS, which is over 3000 times faster than conventional NeRFs. We do so without sacrificing quality while preserving the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects. Real-time performance is achieved by pre-tabulating the NeRF into a PlenOctree. In order to preserve view-dependent effects such as specularities, we factorize the appearance via closed-form spherical basis functions. Specifically, we show that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network. Furthermore, we show that PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods. Moreover, this octree optimization step can be used to reduce the training time, as we no longer need to wait for the NeRF training to converge fully. Our real-time neural rendering approach may potentially enable new applications such as 6-DOF industrial and product visualizations, as well as next generation AR/VR systems. PlenOctrees are amenable to in-browser rendering as well; please visit the project page for the interactive online demo, as well as video and code: https://alexyu.net/plenoctrees
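The spherical-harmonic factorization above can be sketched concretely: instead of querying a network with a view direction, each point stores SH coefficients, and view-dependent color is recovered in closed form by evaluating the real SH basis at the ray direction. This is a minimal illustration, not the paper's implementation; the degree-2 basis constants are the standard real SH values, and the sigmoid squashing follows the NeRF-SH formulation described in the paper, while the function names are our own.

```python
import numpy as np

# Standard real spherical-harmonic basis constants up to degree 2.
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199
SH_C2 = (1.0925484305920792, 1.0925484305920792,
         0.31539156525252005, 1.0925484305920792,
         0.5462742152960396)

def sh_basis(d):
    """Evaluate the 9 real SH basis functions at unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        SH_C0,
        SH_C1 * y, SH_C1 * z, SH_C1 * x,
        SH_C2[0] * x * y, SH_C2[1] * y * z,
        SH_C2[2] * (3.0 * z * z - 1.0),
        SH_C2[3] * x * z, SH_C2[4] * (x * x - y * y),
    ])

def sh_to_rgb(k, d):
    """View-dependent RGB from per-point SH coefficients k (shape 3x9),
    evaluated at unit view direction d and squashed with a sigmoid."""
    raw = k @ sh_basis(d)              # per-channel radiance before squashing
    return 1.0 / (1.0 + np.exp(-raw))  # sigmoid keeps color in (0, 1)
```

With only the degree-0 (DC) coefficient set, the color is the same from every direction; higher-order coefficients add view-dependent effects such as specularities, which is what lets the octree leaves store appearance without any network query at render time.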