We present Neural Articulated Radiance Field (NARF), a novel deformable 3D representation for articulated objects learned from images. While recent advances in 3D implicit representation have made it possible to learn models of complex objects, learning pose-controllable representations of articulated objects remains a challenge, as current methods require 3D shape supervision and are unable to render appearance. In formulating an implicit representation of 3D articulated objects, our method considers only the rigid transformation of the most relevant object part in solving for the radiance field at each 3D location. In this way, the proposed method represents pose-dependent changes without significantly increasing the computational complexity. NARF is fully differentiable and can be trained from images with pose annotations. Moreover, through the use of an autoencoder, it can learn appearance variations over multiple instances of an object class. Experiments show that the proposed method is efficient and can generalize well to novel poses. We make the code, model, and demo available for research purposes at https://github.com/nogu-atsu/NARF.
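To make the core idea concrete, below is a minimal NumPy sketch of the per-part formulation described above: a query point is expressed in each part's canonical frame via the inverse of that part's rigid transformation, a selector picks the most relevant part, and the radiance field is evaluated only in that part's coordinates. This is not the authors' implementation; the distance-based selector and the dummy radiance function are hypothetical stand-ins for the learned networks.

```python
import numpy as np

# Sketch of the NARF idea under stated assumptions (not the official code):
# each part i has a rigid transform (R_i, t_i) from the pose annotation.

def local_coords(x, rotations, translations):
    """Express world point x in each part's canonical frame: R_i^T (x - t_i)."""
    return np.stack([R.T @ (x - t) for R, t in zip(rotations, translations)])

def select_part(x_locals):
    """Hypothetical selector: softmax over negative distances to part origins,
    standing in for the learned selector network."""
    scores = -np.linalg.norm(x_locals, axis=1)      # one score per part
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax over parts
    return int(np.argmax(probs)), probs

def radiance_field(x_local, part_id):
    """Dummy stand-in for the per-part radiance MLP: returns (density, rgb)."""
    h = np.tanh(x_local.sum() + part_id)
    return np.abs(h), np.full(3, 0.5 * (h + 1.0))

# Toy example: two parts with random rigid transforms (the pose annotation).
rng = np.random.default_rng(0)
rotations = [np.linalg.qr(rng.normal(size=(3, 3)))[0] for _ in range(2)]
translations = [rng.normal(size=3) for _ in range(2)]

x = np.array([0.1, -0.2, 0.3])                      # query 3D location
x_locals = local_coords(x, rotations, translations)
part, probs = select_part(x_locals)
density, rgb = radiance_field(x_locals[part], part)
print(f"part={part}, probs={probs.round(3)}, density={density:.3f}")
```

Because only the selected part's transformation enters the radiance-field query, the per-point cost stays close to that of a single rigid object rather than growing with a dense combination over all parts, which is the efficiency claim made in the abstract.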