In this paper, we propose a method for coarse camera pose computation that is robust to viewing conditions and does not require a detailed model of the scene. This method meets the growing need for easy deployment of robotics or augmented reality applications in any environment, especially those for which no accurate 3D model or large amount of ground-truth data is available. It exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions. Previous works have also shown that abstracting the geometry of a scene of objects as a cloud of ellipsoids allows the camera pose to be computed accurately enough for various application needs. Though promising, these approaches use the ellipses fitted to the detection bounding boxes as an approximation of the imaged objects. In this paper, we go one step further and propose a learning-based method that detects improved elliptic approximations of objects, coherent with the 3D ellipsoids in terms of perspective projection. Experiments show that our method significantly increases the accuracy of the computed pose and makes it more robust to the variability of the detection-box boundaries. This is achieved with very little effort in terms of training-data acquisition -- a few hundred calibrated images, of which only three need manual object annotation. Code and models are released at https://github.com/zinsmatt/3D-Aware-Ellipses-for-Visual-Localization.
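As a minimal illustration of the baseline approximation the abstract refers to, the ellipse fitted to a detection bounding box is simply the axis-aligned ellipse inscribed in that box. The box format `(xmin, ymin, xmax, ymax)` and the function name below are assumptions for illustration, not part of the paper's released code:

```python
def ellipse_from_bbox(xmin, ymin, xmax, ymax):
    """Axis-aligned ellipse inscribed in a detection box.

    Returns (center_x, center_y, semi_axis_x, semi_axis_y).
    This is the crude object approximation that the paper's
    learned, projection-aware ellipse prediction improves upon.
    """
    cx = 0.5 * (xmin + xmax)          # ellipse center = box center
    cy = 0.5 * (ymin + ymax)
    ax = 0.5 * (xmax - xmin)          # semi-axes = half box extents
    ay = 0.5 * (ymax - ymin)
    return cx, cy, ax, ay
```

Because this construction depends directly on the box extents, any variability in the detector's box boundaries propagates to the ellipse and hence to the estimated pose, which motivates predicting projection-coherent ellipses instead.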