NeRF synthesizes novel views of a scene with unprecedented quality by fitting a neural radiance field to RGB images. However, NeRF requires querying a deep Multi-Layer Perceptron (MLP) millions of times, leading to slow rendering times, even on modern GPUs. In this paper, we demonstrate that real-time rendering is possible by utilizing thousands of tiny MLPs instead of a single large MLP. In our setting, each individual MLP only needs to represent a part of the scene, so smaller and faster-to-evaluate MLPs can be used. By combining this divide-and-conquer strategy with further optimizations, rendering is accelerated by three orders of magnitude compared to the original NeRF model without incurring high storage costs. Further, using teacher-student distillation for training, we show that this speed-up can be achieved without sacrificing visual quality.
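The divide-and-conquer idea above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the scene's bounding box is partitioned into a coarse 3D grid (a hypothetical 4×4×4 here), each cell owns its own tiny MLP, and a query point is routed to the MLP of the cell containing it, so only a small network is evaluated per sample. All sizes, names, and the single-hidden-layer architecture are assumptions for illustration.

```python
import numpy as np

GRID = 4      # hypothetical 4x4x4 grid of tiny MLPs
HIDDEN = 32   # hypothetical tiny hidden width
rng = np.random.default_rng(0)

def make_tiny_mlp():
    # 3-D position in, 4 values out (RGB + density), one hidden layer.
    return {
        "W1": rng.normal(0, 0.1, (3, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0, 0.1, (HIDDEN, 4)),
        "b2": np.zeros(4),
    }

# One tiny MLP per grid cell.
mlps = {(i, j, k): make_tiny_mlp()
        for i in range(GRID) for j in range(GRID) for k in range(GRID)}

def query(p):
    """Evaluate the field at a point p in [0, 1)^3 via its cell's tiny MLP."""
    # Route the point to the MLP owning its grid cell.
    cell = tuple(min(int(c * GRID), GRID - 1) for c in p)
    net = mlps[cell]
    h = np.maximum(net["W1"].T @ p + net["b1"], 0.0)  # ReLU hidden layer
    return net["W2"].T @ h + net["b2"]                # (r, g, b, sigma)

out = query(np.array([0.3, 0.7, 0.5]))
print(out.shape)  # (4,)
```

Because each tiny MLP only has to fit a small region, it can be orders of magnitude cheaper to evaluate than one large network covering the whole scene, which is the source of the speed-up claimed above.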