Neural Radiance Fields (NeRFs) learn implicit representations of typically static environments from images. Our paper extends NeRFs to handle dynamic scenes in an online fashion. We propose ParticleNeRF, which adapts to changes in the geometry of the environment as they occur, learning a new, up-to-date representation every 350 ms. ParticleNeRF represents the current state of dynamic environments with much higher fidelity than other NeRF frameworks. To achieve this, we introduce a new particle-based parametric encoding, which allows the intermediate NeRF features, now coupled to particles in space, to move with the dynamic geometry. This is made possible by backpropagating the photometric reconstruction loss into the positions of the particles. The position gradients are interpreted as particle velocities and integrated into updated positions using a position-based dynamics (PBD) physics system. Introducing PBD into the NeRF formulation allows us to add collision constraints to the particle motion and opens future opportunities to add other movement priors, such as rigid and deformable body constraints. Videos can be found at https://sites.google.com/view/particlenerf.
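The core mechanism described above (backpropagating the photometric loss into particle positions and integrating the resulting gradients as velocities) can be sketched roughly as follows. This is a minimal, illustrative PyTorch sketch rather than the authors' implementation: the `ParticleEncoding` interpolation scheme, the `pbd_step` update, and all hyperparameters are assumptions, and the rendering and loss pipeline is replaced by a placeholder.

```python
import torch

class ParticleEncoding(torch.nn.Module):
    """Toy particle-based encoding: each particle carries a learnable position and feature."""
    def __init__(self, num_particles: int, feature_dim: int, bound: float = 1.0):
        super().__init__()
        self.bound = bound
        # Both positions and features receive gradients from the photometric loss.
        self.positions = torch.nn.Parameter(torch.rand(num_particles, 3) * 2 * bound - bound)
        self.features = torch.nn.Parameter(torch.randn(num_particles, feature_dim) * 0.01)

    def forward(self, query_points: torch.Tensor) -> torch.Tensor:
        # Toy interpolation: weight every particle's feature by (softmaxed) negative distance
        # to the query point. A practical encoding would only query a local neighbourhood.
        d = torch.cdist(query_points, self.positions)   # (Q, N)
        w = torch.softmax(-d, dim=-1)                   # (Q, N)
        return w @ self.features                        # (Q, F)

def pbd_step(encoding: ParticleEncoding, velocities: torch.Tensor,
             lr_pos: float = 1e-2, damping: float = 0.95) -> None:
    """Interpret position gradients as velocities and integrate them, PBD style.
    The clamp is a stand-in for the collision constraints mentioned in the abstract."""
    with torch.no_grad():
        velocities.mul_(damping).add_(encoding.positions.grad, alpha=-lr_pos)
        encoding.positions.add_(velocities)
        encoding.positions.clamp_(-encoding.bound, encoding.bound)
        encoding.positions.grad.zero_()

# Toy usage: one online training step (rendering and photometric loss are placeholders).
enc = ParticleEncoding(num_particles=4096, feature_dim=8)
velocities = torch.zeros_like(enc.positions)
feat_opt = torch.optim.Adam([enc.features], lr=1e-2)

query = torch.rand(1024, 3) * 2 - 1                    # stand-in for ray sample points
pred = enc(query).mean(dim=-1)                          # stand-in for rendered pixel values
loss = torch.nn.functional.mse_loss(pred, torch.zeros_like(pred))  # photometric loss placeholder
loss.backward()                                         # gradients reach features and positions
feat_opt.step(); feat_opt.zero_grad()
pbd_step(enc, velocities)                               # move particles with the dynamic geometry
```

In this sketch the features are updated by a standard optimizer while the positions are advanced by the velocity-style update, mirroring the separation the abstract describes between appearance learning and particle motion; constraint projection (here a simple box clamp) is where collision, rigid, or deformable priors would be enforced.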