Recovering full 3D shapes from partial observations is a challenging task that has been extensively addressed in the computer vision community. Many deep learning methods tackle this problem by training 3D shape generation networks to learn a prior over full 3D shapes. In this training regime, the methods expect the inputs to be in a fixed canonical form, without which they fail to learn a valid prior over 3D shapes. We propose SCARP, a model that performs Shape Completion in ARbitrary Poses. Given a partial point cloud of an object, SCARP learns a disentangled feature representation of pose and shape by relying on rotationally equivariant pose features and geometric shape features trained using a multi-tasking objective. Unlike existing methods that depend on an external canonicalization step, SCARP performs canonicalization, pose estimation, and shape completion in a single network, improving performance by 45% over existing baselines. In this work, we use SCARP to improve grasp proposals on tabletop objects. By completing partial tabletop objects directly in their observed poses, SCARP enables a SOTA grasp proposal network to improve its proposals by 71.2% on partial shapes. Project page: https://bipashasen.github.io/scarp
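The core idea of disentangling pose from shape can be illustrated with a minimal sketch. This is not SCARP's actual network: it is a toy example showing why rotation-equivariant features (which transform with the object, useful for pose) and rotation-invariant features (which do not, useful for shape) carry complementary information. The function `pose_shape_split` and its feature choices are hypothetical.

```python
import numpy as np

def pose_shape_split(points):
    """Toy pose/shape disentanglement for a point cloud (N, 3).

    Hypothetical illustration, not SCARP's architecture:
    - "pose" features: principal axes, which rotate with the object
      (rotation-equivariant);
    - "shape" features: sorted distances to the centroid, which are
      unchanged by any rotation (rotation-invariant).
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Equivariant part: principal axes from SVD rotate with the input.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pose_axes = vt  # (3, 3), rows are principal directions
    # Invariant part: distances to the centroid ignore rotation entirely.
    shape_feat = np.sort(np.linalg.norm(centered, axis=1))
    return pose_axes, shape_feat

def random_rotation(rng):
    """Sample a random 3x3 rotation matrix via QR decomposition."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1  # ensure a proper rotation (det = +1)
    return q

rng = np.random.default_rng(0)
pts = rng.random((200, 3))
rot = random_rotation(rng)
_, shape_a = pose_shape_split(pts)
_, shape_b = pose_shape_split(pts @ rot.T)
# The shape feature is identical before and after rotating the cloud.
print(np.allclose(shape_a, shape_b))
```

A network trained in canonical poses implicitly conflates these two kinds of features, which is why rotated inputs break it; factoring them apart, as the abstract describes, lets completion operate on the invariant part while the equivariant part recovers the pose.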