The recently proposed neural radiance field (NeRF) representation has achieved great success in novel view synthesis and 3D reconstruction. However, NeRF models suffer from catastrophic forgetting when continuously learning from streaming data without revisiting previous training data. This limitation prohibits applying existing NeRF models to scenarios where images arrive sequentially. In view of this, we explore the task of incremental learning for the neural radiance field representation. We first propose a student-teacher pipeline to mitigate catastrophic forgetting: at the end of each incremental step, the student becomes the teacher, and the teacher then guides the training of the student in the next step. In this way, the student network learns new information from the streaming data while retaining old knowledge from the teacher network. Since the teacher is trained only on old data, not all of its information is helpful; we therefore introduce a random inquirer and an uncertainty-based filter to select useful information. We conduct experiments on the NeRF-synthetic360 and NeRF-real360 datasets, where our approach significantly outperforms the baselines by 7.3% and 25.2% in terms of PSNR. Furthermore, we show that our approach also applies to the large-scale, outward-facing-camera dataset ScanNet, where we surpass the baseline by 60.0% in PSNR.
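The overall loop described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: a 1-D linear model stands in for the NeRF, the query distribution, the uncertainty measure, and the threshold are all hypothetical placeholders, and the function names (`uncertainty_filter`, `incremental_step`) are invented for exposition.

```python
import numpy as np

def uncertainty_filter(teacher_preds, teacher_uncertainty, threshold=0.5):
    """Keep only teacher pseudo-labels whose uncertainty falls below a
    threshold (hypothetical criterion standing in for the paper's filter)."""
    mask = teacher_uncertainty < threshold
    return teacher_preds[mask], mask

def incremental_step(student_params, new_data, rng, n_queries=8, lr=0.1):
    """One incremental step: freeze a copy of the student as the teacher,
    query it at random inputs (the 'random inquirer'), filter the answers
    by uncertainty, and update the student on the new data plus the kept
    pseudo-labels. A toy 1-D linear 'scene' y = w*x + b stands in for NeRF."""
    teacher_params = student_params.copy()             # student becomes teacher
    queries = rng.uniform(-1.0, 1.0, size=n_queries)   # random inquirer
    teacher_preds = teacher_params[0] * queries + teacher_params[1]
    # Toy uncertainty: assume it grows with distance from the old data region.
    uncertainty = np.abs(queries)
    kept_preds, mask = uncertainty_filter(teacher_preds, uncertainty)
    # Combine new ground-truth data with the filtered teacher pseudo-labels,
    # so the student learns new content while distilling old knowledge.
    xs = np.concatenate([new_data[0], queries[mask]])
    ys = np.concatenate([new_data[1], kept_preds])
    # One gradient step on the squared error of the linear model.
    err = student_params[0] * xs + student_params[1] - ys
    grad_w = 2 * np.mean(err * xs)
    grad_b = 2 * np.mean(err)
    return np.array([student_params[0] - lr * grad_w,
                     student_params[1] - lr * grad_b])
```

Iterating `incremental_step` over successive chunks of streaming data mirrors the pipeline: the previous step's student supplies the teacher signal for the next step, and the uncertainty filter discards teacher answers that are likely unreliable outside its training region.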