Current monocular 6D object pose estimation methods generally achieve less competitive results than RGBD-based methods, mostly due to the lack of 3D information. To close this gap, this paper proposes a pose estimation method based on a 3D geometric volume under a short-baseline two-view setting. By constructing a geometric volume in 3D space, we aggregate the features from two adjacent images into the same 3D space. A network is then trained to learn the distribution of object keypoint positions within the volume, and a robust soft-RANSAC solver recovers the pose in closed form. To balance accuracy and cost, we propose a coarse-to-fine framework that improves performance iteratively. Experiments show that our method outperforms state-of-the-art monocular methods and is robust across different objects and scenes, especially under severe occlusion.
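The closed-form pose solve mentioned above can be illustrated with a weighted Kabsch/Umeyama alignment between predicted 3D keypoints and their model-frame counterparts, where per-keypoint weights play the role of the soft inlier scores. This is a minimal sketch under that assumption, not the paper's exact solver; the function name and weighting scheme are hypothetical.

```python
import numpy as np

def weighted_kabsch_pose(src, dst, weights=None):
    """Closed-form rigid transform (R, t) minimizing the weighted squared
    error ||R @ src_i + t - dst_i||^2, via SVD (Kabsch/Umeyama).

    src, dst: (N, 3) corresponding 3D keypoints (model frame -> camera frame).
    weights:  (N,) non-negative soft inlier scores; uniform if None.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    w = np.ones(len(src)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()

    # Weighted centroids of both point sets.
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)

    # Weighted cross-covariance of the centered points.
    H = ((src - mu_s) * w[:, None]).T @ (dst - mu_d)

    # SVD; the determinant correction keeps R a proper rotation (det = +1).
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a soft-RANSAC setting, the weights would come from the network's per-keypoint confidence rather than a hard inlier mask, so the solver stays differentiable and closed form.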