Prior work on 6-DoF object pose estimation has largely focused on instance-level processing, in which a textured CAD model is available for each object being detected. Category-level 6-DoF pose estimation represents an important step toward developing robotic vision systems that operate in unstructured, real-world scenarios. In this work, we propose a single-stage, keypoint-based approach for category-level object pose estimation that operates on unknown object instances within a known category using a single RGB image as input. The proposed network performs 2D object detection, detects 2D keypoints, estimates 6-DoF pose, and regresses relative bounding cuboid dimensions. These quantities are estimated in a sequential fashion, leveraging the recent idea of convGRU for propagating information from easier tasks to those that are more difficult. We favor simplicity in our design choices: generic cuboid vertex coordinates, single-stage network, and monocular RGB input. We conduct extensive experiments on the challenging Objectron benchmark, outperforming state-of-the-art methods on the 3D IoU metric (27.6% higher than the MobilePose single-stage approach and 7.1% higher than the related two-stage approach).
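Since the abstract gives no implementation details, the following is a minimal, hypothetical sketch of how a 6-DoF pose can be recovered from the outputs the network is described as producing: detected 2D cuboid-vertex keypoints plus regressed relative cuboid dimensions, combined through a standard PnP solve. All function names and numeric values are illustrative, not the authors' code; note that monocular RGB leaves absolute scale ambiguous, so the translation is recovered only up to scale.

```python
# Hypothetical sketch: pose from 2D cuboid keypoints + relative dimensions.
import numpy as np
import cv2

def cuboid_vertices(rel_dims):
    """8 vertices of a canonical cuboid with relative (w, h, d) dimensions."""
    w, h, d = rel_dims
    return np.array([[sx * w / 2, sy * h / 2, sz * d / 2]
                     for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
                    dtype=np.float64)

def pose_from_keypoints(kps_2d, rel_dims, K):
    """Solve PnP between the canonical cuboid and detected 2D keypoints.

    kps_2d:   (8, 2) detected vertex coordinates in pixels
    rel_dims: (3,) regressed relative width/height/depth
    K:        (3, 3) camera intrinsics
    Returns rotation vector and translation (translation up to scale).
    """
    obj_pts = cuboid_vertices(rel_dims)
    dist = np.zeros(4)  # assume no lens distortion for the sketch
    ok, rvec, tvec = cv2.solvePnP(obj_pts, kps_2d.astype(np.float64), K, dist,
                                  flags=cv2.SOLVEPNP_EPNP)
    return rvec, tvec

# Synthetic self-check: project a known pose, then recover it.
K = np.array([[600., 0., 320.], [0., 600., 240.], [0., 0., 1.]])
rel_dims = np.array([1.0, 0.8, 0.5])            # illustrative relative dims
rvec_gt = np.array([[0.1], [0.2], [0.0]])
tvec_gt = np.array([[0.0], [0.0], [3.0]])
kps_2d, _ = cv2.projectPoints(cuboid_vertices(rel_dims), rvec_gt, tvec_gt,
                              K, np.zeros(4))
rvec, tvec = pose_from_keypoints(kps_2d.reshape(8, 2), rel_dims, K)
```

Working with generic cuboid vertices rather than instance-specific CAD models is what allows a single set of keypoint semantics to cover every instance in a category.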
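For readers unfamiliar with convGRU, the sketch below shows a generic convolutional GRU cell of the kind the abstract alludes to: convolutional update and reset gates let feature maps produced for an earlier, easier task condition the features used for a later, harder one. This is an assumption about the general mechanism, not the paper's exact architecture.

```python
# Generic ConvGRU cell in PyTorch (assumed mechanism, not the authors' code).
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        # One conv produces both update (z) and reset (r) gates.
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

    def forward(self, x, h):
        # x: features for the current (harder) task
        # h: hidden state carrying information from earlier (easier) tasks
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

# Usage: propagate the hidden state across the sequential task heads.
cell = ConvGRUCell(in_ch=64, hid_ch=64)
h = torch.zeros(1, 64, 32, 32)
x = torch.randn(1, 64, 32, 32)
h = cell(x, h)  # hidden state now mixes the earlier and current tasks
```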