Category-level 6D pose estimation aims to predict the poses and sizes of unseen objects from a specific category. Thanks to prior deformation, which explicitly adapts a category-specific 3D prior (i.e., a 3D template) to a given object instance, prior-based methods have attained great success and become a major research stream. However, obtaining category-specific priors requires collecting a large number of 3D models, which is labor-intensive and often impractical. This motivates us to investigate whether priors are necessary to make prior-based methods effective. Our empirical study shows that the 3D prior itself is not what accounts for the high performance. The key is actually the explicit deformation process, which aligns the camera and world coordinates under the supervision of world-space 3D models (also called the canonical space). Inspired by these observations, we introduce a simple prior-free implicit space transformation network, namely IST-Net, which transforms camera-space features into their world-space counterparts and builds the correspondence between them implicitly, without relying on 3D priors. Besides, we design camera- and world-space enhancers to enrich the features with pose-sensitive information and geometric constraints, respectively. Albeit simple, IST-Net is the first prior-free method to achieve state-of-the-art performance, with the top inference speed, on the REAL275 dataset. Our code and models will be publicly available.
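To make the implicit space transformation idea concrete, below is a minimal PyTorch sketch: a shared MLP maps camera-space point features to world-space features and is supervised at training time by features derived from the world-space (canonical) model, so no explicit 3D prior or deformation step is involved. All module names, dimensions, and the loss here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an implicit camera-to-world feature transformation.
# Everything below (names, dimensions, loss) is a hypothetical illustration
# of the idea, not the IST-Net architecture itself.
import torch
import torch.nn as nn

class ImplicitSpaceTransform(nn.Module):
    """Maps per-point camera-space features to world-space features
    with a shared MLP; no explicit 3D prior or deformation step."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, cam_feats: torch.Tensor) -> torch.Tensor:
        # cam_feats: (B, N, C) features extracted from the observed points
        return self.mlp(cam_feats)

# Training-time supervision: features produced from the ground-truth
# world-space (canonical) model serve as the target, so the transform
# learns the camera-to-world correspondence implicitly; at test time
# only the camera-space branch is needed.
B, N, C = 2, 1024, 128
transform = ImplicitSpaceTransform(feat_dim=C)
cam_feats = torch.randn(B, N, C)        # from a camera-space encoder
world_feats_gt = torch.randn(B, N, C)   # from a world-space encoder (train only)

pred = transform(cam_feats)
loss = nn.functional.mse_loss(pred, world_feats_gt)
loss.backward()
```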