Prior work for articulated 3D shape reconstruction often relies on specialized sensors (e.g., synchronized multi-camera systems) or pre-built 3D deformable models (e.g., SMAL or SMPL). Such methods do not scale to diverse sets of objects in the wild. We present BANMo, a method that requires neither a specialized sensor nor a pre-defined template shape. BANMo builds high-fidelity, articulated 3D models (including shape and animatable skinning weights) from many monocular casual videos in a differentiable rendering framework. While the use of many videos provides more coverage of camera views and object articulations, it introduces significant challenges in establishing correspondence across scenes with different backgrounds, illumination conditions, etc. Our key insight is to merge three schools of thought: (1) classic deformable shape models that make use of articulated bones and blend skinning, (2) volumetric neural radiance fields (NeRFs) that are amenable to gradient-based optimization, and (3) canonical embeddings that generate correspondences between pixels and an articulated model. We introduce neural blend skinning models that allow for differentiable and invertible articulated deformations. When combined with canonical embeddings, such models allow us to establish dense correspondences across videos that can be self-supervised with cycle consistency. On real and synthetic datasets, BANMo shows higher-fidelity 3D reconstructions than prior work on humans and animals, with the ability to render realistic images from novel viewpoints and poses. Project webpage: banmo-www.github.io.
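To make the blend-skinning idea in the abstract concrete, the sketch below implements a toy version in PyTorch: soft skinning weights derived from Gaussian-like bones, a forward warp that blends per-bone rigid transforms, and an approximate backward warp whose round-trip error is what a cycle-consistency loss would penalize. All names (`skinning_weights`, `blend_rigid`) and the isotropic bones are illustrative assumptions, not BANMo's actual implementation (which, per the paper, also learns per-point skinning corrections with an MLP and supervises forward/backward warps with cycle-consistency losses).

```python
# A minimal, illustrative sketch of blend skinning with soft Gaussian bones
# and a forward/backward warp round trip. NOT BANMo's actual code: the
# function names and isotropic-precision bones are simplifying assumptions.
import torch

def skinning_weights(pts, centers, precision):
    """Soft skinning weights from Gaussian-like bones.
    pts: (N, 3) query points; centers: (B, 3) bone centers;
    precision: (B,) per-bone isotropic precision. Returns (N, B) weights."""
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (N, B)
    return torch.softmax(-precision[None, :] * d2, dim=-1)

def blend_rigid(pts, weights, R, t):
    """Apply per-bone rigid transforms (R: (B,3,3), t: (B,3)) to pts (N,3)
    and blend the results with weights (N, B)."""
    pts_b = torch.einsum('bij,nj->nbi', R, pts) + t[None]         # (N, B, 3)
    return (weights[..., None] * pts_b).sum(dim=1)                # (N, 3)

# Toy setup: two bones with small random rotations.
torch.manual_seed(0)
pts_canonical = torch.randn(128, 3)
centers = torch.tensor([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
precision = torch.tensor([4.0, 4.0])
A = torch.randn(2, 3, 3)
R = torch.matrix_exp(0.2 * (A - A.transpose(1, 2)))               # proper rotations
t = 0.1 * torch.randn(2, 3)

# Forward warp: canonical -> deformed space.
w_fwd = skinning_weights(pts_canonical, centers, precision)
pts_deformed = blend_rigid(pts_canonical, w_fwd, R, t)

# Backward warp: blend the inverse transforms, with weights recomputed
# around the forward-warped bone centers. This inverse is only approximate,
# which is exactly why a cycle-consistency loss on the round trip is useful.
centers_def = torch.einsum('bij,bj->bi', R, centers) + t
R_inv = R.transpose(1, 2)
t_inv = -torch.einsum('bij,bj->bi', R_inv, t)
w_bwd = skinning_weights(pts_deformed, centers_def, precision)
pts_back = blend_rigid(pts_deformed, w_bwd, R_inv, t_inv)

# 3D cycle error: small when forward and backward skinning weights agree.
print("mean 3D cycle error:",
      (pts_back - pts_canonical).norm(dim=-1).mean().item())
```

Because the blended backward warp is only approximately the inverse of the forward one, a BANMo-style pipeline penalizes the round-trip error (deformed → canonical → deformed, and pixel ↔ canonical matches via the embeddings) as a self-supervised cycle-consistency loss.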