While existing Neural Radiance Fields (NeRFs) for dynamic scenes are offline methods with an emphasis on visual fidelity, our paper addresses the online use case that prioritises real-time adaptability. We present ParticleNeRF, a new approach that dynamically adapts to changes in the scene geometry by learning an up-to-date representation online, every 200 ms. ParticleNeRF achieves this using a novel particle-based parametric encoding. We couple features to particles in space and backpropagate the photometric reconstruction loss to the particle positions; the resulting position gradients are interpreted as velocity vectors. A lightweight physics system handles collisions, allowing the features to move freely with the changing scene geometry. We demonstrate ParticleNeRF on various dynamic scenes containing translating, rotating, articulated, and deformable objects. ParticleNeRF is the first online dynamic NeRF and achieves fast adaptability with better visual fidelity than brute-force online InstantNGP and other baseline approaches on dynamic scenes with online constraints. Videos of our system can be found on the project website: https://sites.google.com/view/particlenerf.
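To make the gradient-as-velocity mechanism concrete, below is a minimal PyTorch sketch written under our own assumptions; it is illustrative, not the authors' implementation. The class name `ParticleEncoding`, the compact-support interpolation weighting, the placeholder loss, and the step sizes `dt` and `feature_lr` are all hypothetical stand-ins, and the paper's collision-handling physics step is only indicated by a comment.

```python
import torch

class ParticleEncoding(torch.nn.Module):
    """Hypothetical particle-based parametric encoding: learnable features
    attached to free-moving particle positions (illustrative sketch only)."""

    def __init__(self, num_particles: int, feature_dim: int, search_radius: float):
        super().__init__()
        self.positions = torch.nn.Parameter(torch.rand(num_particles, 3))
        self.features = torch.nn.Parameter(0.1 * torch.randn(num_particles, feature_dim))
        self.search_radius = search_radius

    def forward(self, query_points: torch.Tensor) -> torch.Tensor:
        # Interpolate particle features at query points with a compactly
        # supported distance weight; a real system would use a spatial
        # acceleration structure instead of a dense distance matrix.
        d = torch.cdist(query_points, self.positions)        # (Q, N)
        w = torch.relu(1.0 - d / self.search_radius)         # zero outside radius
        w = w / (w.sum(dim=1, keepdim=True) + 1e-8)
        return w @ self.features                             # (Q, F)

enc = ParticleEncoding(num_particles=1024, feature_dim=8, search_radius=0.2)
dt, feature_lr = 0.05, 1e-2   # illustrative step sizes, not tuned values

# One online update: backpropagate a stand-in photometric loss, then move
# the particles by interpreting their position gradients as velocities.
query = torch.rand(256, 3)
loss = enc(query).square().mean()   # placeholder for the NeRF photometric loss
loss.backward()

with torch.no_grad():
    velocity = -enc.positions.grad      # descent direction read as a velocity
    enc.positions += dt * velocity      # integrate positions; the paper's
                                        # lightweight physics step would also
                                        # resolve particle collisions here
    enc.features -= feature_lr * enc.features.grad
    enc.positions.grad = None
    enc.features.grad = None
```

In this sketch the features remain ordinary learnable parameters, while the positions are updated by integration rather than by an optimiser step, mirroring the abstract's description of position gradients acting as velocities.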