We present MultiBodySync, a novel, end-to-end trainable multi-body motion segmentation and rigid registration framework for multiple input 3D point clouds. The two non-trivial challenges posed by this multi-scan, multi-body setting that we investigate are: (i) guaranteeing correspondence and segmentation consistency across multiple input point clouds capturing different spatial arrangements of bodies or body parts; and (ii) obtaining robust motion-based rigid body segmentation applicable to novel object categories. We propose an approach to address these issues that incorporates spectral synchronization into an iterative deep declarative network, so as to simultaneously recover consistent correspondences as well as motion segmentation. At the same time, by explicitly disentangling the correspondence and motion segmentation estimation modules, we achieve strong generalizability across different object categories. Our extensive evaluations demonstrate that our method is effective on various datasets ranging from rigid parts in articulated objects to individually moving objects in a 3D scene, be it single-view or full point clouds.
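To make the spectral synchronization idea concrete, the following is a minimal NumPy sketch of permutation synchronization under simplifying assumptions: a symmetric block matrix of pairwise soft matches, a fixed number of points n per cloud, and a plain softmax projection in place of the learned and declarative layers used in the paper. All names and choices here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of spectral (permutation) synchronization, assuming the
# pairwise soft correspondence matrices P_ij between K point clouds have
# already been estimated by some matching front-end (hypothetical setup).
import numpy as np

def spectral_sync(P, n, K):
    """Recover cycle-consistent correspondences from noisy pairwise matches.

    P: (K*n, K*n) symmetric block matrix whose (i, j) block is the pairwise
       matching matrix P_ij (with identity diagonal blocks).
    Returns K (n x n) soft matrices mapping each cloud to the first one,
    so that P_ij is approximated by U_i @ U_j.T.
    """
    # The consistent matching lives in a rank-n component of P:
    # take the top-n eigenvectors (eigh returns ascending eigenvalues).
    vals, vecs = np.linalg.eigh(P)
    V = vecs[:, -n:] * np.sqrt(np.maximum(vals[-n:], 0.0))  # (K*n, n) factor
    U = [V[i * n:(i + 1) * n] for i in range(K)]            # per-cloud blocks

    # Express every cloud relative to the first one and re-project each
    # block to a soft permutation via a row-wise softmax (a stand-in for
    # Sinkhorn-style projection).
    synced = []
    for U_i in U:
        M = np.exp((U_i @ U[0].T) / 0.1)
        synced.append(M / M.sum(axis=1, keepdims=True))
    return synced
```

In the full method this synchronization step is applied inside an iterative deep declarative network, so that correspondence consistency and motion segmentation are refined jointly rather than computed once as in this standalone sketch.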