Learning radiance fields has shown remarkable results for novel view synthesis. However, the learning procedure is usually time-consuming, which has motivated recent methods to speed it up by learning without neural networks or by using more efficient data structures. These specially designed approaches, however, do not generalize to most radiance-field-based methods. To resolve this issue, we introduce a general strategy to accelerate the learning procedure of almost all radiance-field-based methods. Our key idea is to reduce redundancy by shooting far fewer rays in the multi-view volume rendering procedure that underlies almost all radiance-field-based methods. We find that shooting rays at pixels with dramatic color change not only significantly reduces the training burden but also barely affects the accuracy of the learned radiance fields. In addition, we adaptively subdivide each view into a quadtree according to the average rendering error in each node of the tree, which lets us dynamically shoot more rays into more complex regions with larger rendering error. We evaluate our method with different radiance-field-based methods on widely used benchmarks. Experimental results show that our method achieves accuracy comparable to the state of the art while training much faster.
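To make the first idea concrete, the following is a minimal sketch, not the authors' code, of gradient-driven ray sampling: pixels with stronger local color change are sampled with higher probability, so most training rays land on edges and textured regions while flat regions receive few rays. The function name, the finite-difference gradient measure, and the small probability floor are all illustrative assumptions.

```python
import numpy as np

def sample_ray_pixels(image, num_rays, rng=None):
    """image: (H, W, 3) float array in [0, 1]; returns (num_rays, 2) pixel coords (y, x).

    Illustrative sketch: sample ray pixels in proportion to local color change.
    """
    rng = rng or np.random.default_rng()
    gray = image.mean(axis=-1)
    # Finite-difference gradient magnitude as a simple "color change" measure.
    gy, gx = np.gradient(gray)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    # Keep a small floor so smooth regions still receive a few rays.
    prob = grad + 1e-3
    prob /= prob.sum()
    flat_idx = rng.choice(prob.size, size=num_rays, p=prob.ravel())
    ys, xs = np.unravel_index(flat_idx, gray.shape)
    return np.stack([ys, xs], axis=-1)
```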
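For the second idea, here is a minimal sketch, under our own assumptions rather than the authors' implementation, of error-driven quadtree subdivision: a node whose mean rendering error exceeds a threshold is split into four children, so later iterations can allocate more rays to harder image regions. The `QuadNode` structure, the threshold, and the minimum node size are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class QuadNode:
    # Pixel bounds of this node: [x0, x1) x [y0, y1).
    x0: int
    y0: int
    x1: int
    y1: int
    children: list = field(default_factory=list)

def subdivide(node, error_map, threshold, min_size=16):
    """error_map: (H, W) per-pixel rendering error accumulated during training."""
    region = error_map[node.y0:node.y1, node.x0:node.x1]
    w, h = node.x1 - node.x0, node.y1 - node.y0
    # Stop splitting when the region is already well reconstructed or too small.
    if region.mean() < threshold or min(w, h) <= min_size:
        return
    mx, my = node.x0 + w // 2, node.y0 + h // 2
    node.children = [QuadNode(node.x0, node.y0, mx, my),
                     QuadNode(mx, node.y0, node.x1, my),
                     QuadNode(node.x0, my, mx, node.y1),
                     QuadNode(mx, my, node.x1, node.y1)]
    for child in node.children:
        subdivide(child, error_map, threshold, min_size)
```

In this sketch, rays for the next training iterations would be drawn per leaf node, so leaves produced by repeated splits in high-error regions naturally receive a larger share of the ray budget.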