Building on the representational power of neural networks, neural radiance fields (NeRF) have recently emerged as one of the most promising and widely applicable methods for 3D object and scene representation. However, NeRF faces challenges in practical applications, such as large-scale scenes and edge devices with limited memory, where data must be processed sequentially. Under such incremental learning scenarios, neural networks are known to suffer from catastrophic forgetting: they easily forget previously seen data after training on new data. We observe that previous incremental learning algorithms are limited by either low performance or poor memory scalability. As such, we develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF). MEIL-NeRF takes inspiration from NeRF itself: a neural network can serve as a memory that provides pixel RGB values when queried with rays. Based on this motivation, our framework learns which rays to query NeRF with so as to extract previously seen pixel values. The extracted pixel values are then used to train NeRF in a self-distillation manner to prevent catastrophic forgetting. As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
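The self-distillation idea described above can be illustrated with a minimal sketch. The Python snippet below is not the authors' implementation: the tiny MLP standing in for NeRF, the squared-error losses, the hyperparameters, and the reuse of stored past rays (the paper instead learns which rays to query) are all simplifying assumptions made only to show how a frozen snapshot of the network supplies "remembered" pixel values that supervise continued training.

```python
# Minimal sketch of network-as-memory self-distillation for incremental training.
# Everything here (architecture, losses, ray handling) is illustrative, not MEIL-NeRF's code.

import copy
import torch
import torch.nn as nn


class TinyRadianceField(nn.Module):
    """Stand-in for a NeRF-style network: maps a ray encoding to an RGB value."""

    def __init__(self, ray_dim=6, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ray_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, rays):
        return self.net(rays)


def train_task(model, new_rays, new_rgb, past_rays=None, teacher=None,
               distill_weight=1.0, steps=500, lr=5e-4):
    """Train on the current task; if a frozen snapshot (teacher) is given,
    distill its predictions on past rays to counter catastrophic forgetting."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Supervised loss on the newly arriving data.
        loss = ((model(new_rays) - new_rgb) ** 2).mean()
        if teacher is not None and past_rays is not None:
            # Query the frozen snapshot for previously learned pixel values
            # and match them, instead of replaying stored images.
            with torch.no_grad():
                past_rgb = teacher(past_rays)
            loss = loss + distill_weight * ((model(past_rays) - past_rgb) ** 2).mean()
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyRadianceField()

    # Task 1: learn from the first batch of rays and pixel colors.
    rays1, rgb1 = torch.randn(512, 6), torch.rand(512, 3)
    train_task(model, rays1, rgb1)

    # Snapshot the network before task 2; it becomes the memory of task 1.
    teacher = copy.deepcopy(model).eval()

    # Task 2: new data plus self-distillation on rays revisiting task 1.
    rays2, rgb2 = torch.randn(512, 6), torch.rand(512, 3)
    train_task(model, rays2, rgb2, past_rays=rays1, teacher=teacher)
```

Memory stays constant in this scheme because only the network weights and the queried rays are needed to recover past supervision, rather than a growing buffer of previously seen images.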