Recent methods for neural surface representation and rendering, such as NeuS, have demonstrated remarkably high-quality reconstruction of static scenes. However, training NeuS takes an extremely long time (8 hours), which makes it nearly impossible to apply to dynamic scenes with thousands of frames. We propose a fast neural surface reconstruction approach, called NeuS2, which achieves a two-orders-of-magnitude speed-up without compromising reconstruction quality. To accelerate the training process, we integrate multi-resolution hash encodings into a neural surface representation and implement the whole algorithm in CUDA. We also present a lightweight calculation of second-order derivatives tailored to our networks (i.e., ReLU-based MLPs), which yields a factor-of-two speed-up. To further stabilize training, we propose a progressive learning strategy that optimizes the multi-resolution hash encodings from coarse to fine. In addition, we extend our method to reconstruct dynamic scenes with an incremental training strategy. Our experiments on various datasets demonstrate that NeuS2 significantly outperforms the state of the art in both surface reconstruction accuracy and training speed. The video is available at https://vcai.mpi-inf.mpg.de/projects/NeuS2/ .
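The coarse-to-fine optimization of the hash encodings can be illustrated with a minimal sketch, shown below. It progressively unmasks finer hash-grid levels as training proceeds; the function name, parameter names, and schedule values here are illustrative assumptions, not the exact settings used in NeuS2.

```python
import numpy as np

def level_mask(step, n_levels=16, warmup_steps=0, steps_per_level=500):
    """Hypothetical coarse-to-fine schedule over hash-encoding levels.

    Only the first k (coarsest) levels are active at a given training
    step; one additional finer level is enabled every `steps_per_level`
    steps. The values are assumptions for illustration.
    """
    active = 1 + max(0, step - warmup_steps) // steps_per_level
    k = min(n_levels, active)
    mask = np.zeros(n_levels, dtype=np.float32)
    mask[:k] = 1.0  # coarse levels on, fine levels zeroed out
    return mask

# Usage: multiply each level's feature vector by its mask entry before
# concatenating the encoding and feeding it to the MLP.
print(level_mask(step=1200))  # first 3 of 16 levels active
```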