This paper proposes a new image-based localization framework that explicitly localizes the camera/robot by fusing a Convolutional Neural Network (CNN) with geometric constraints from sequential images. The camera is localized using a single observed image, or a few, together with training images annotated with 6-degree-of-freedom pose labels. A Siamese network structure is adopted to train an image descriptor network, and the most visually similar candidate image in the training set is retrieved to localize the test image geometrically. Meanwhile, a probabilistic motion model predicts the pose under a constant-velocity assumption. The two estimated poses are finally fused using their uncertainties to yield an accurate pose prediction. This method leverages the geometric uncertainty and is applicable in indoor scenarios dominated by diffuse illumination. Experiments on simulated and real data sets demonstrate the effectiveness of the proposed method. The results further show that combining the CNN-based framework with geometric constraints achieves better accuracy than CNN-only methods, especially when the training data set is small.
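The abstract does not spell out the fusion rule; a common way to combine two pose estimates that each carry a covariance is inverse-covariance (information-form) weighting. The sketch below illustrates that general idea for the translational part of the pose only, with hypothetical names (`fuse_estimates`, `Sigma_retrieval`, `Sigma_motion`) and made-up numbers; it is not the paper's exact formulation.

```python
import numpy as np

def fuse_estimates(x1, Sigma1, x2, Sigma2):
    """Fuse two independent Gaussian estimates of the same quantity.

    Inverse-covariance weighting: the fused covariance is
    (Sigma1^-1 + Sigma2^-1)^-1, and the fused mean weights each
    estimate by its information matrix. Hypothetical helper,
    not an API from the paper.
    """
    info1 = np.linalg.inv(Sigma1)
    info2 = np.linalg.inv(Sigma2)
    Sigma_fused = np.linalg.inv(info1 + info2)
    x_fused = Sigma_fused @ (info1 @ x1 + info2 @ x2)
    return x_fused, Sigma_fused

# Example: translation (x, y, z) from image retrieval vs. a
# constant-velocity motion model, each with an assumed covariance.
t_retrieval = np.array([1.02, 0.48, 0.00])      # geometric estimate
Sigma_retrieval = np.diag([0.04, 0.04, 0.09])   # less certain along z
t_motion = np.array([0.95, 0.52, 0.01])         # motion-model prediction
Sigma_motion = np.diag([0.02, 0.02, 0.02])

t_fused, Sigma_fused = fuse_estimates(t_retrieval, Sigma_retrieval,
                                      t_motion, Sigma_motion)
print(t_fused)      # lies between the two estimates, nearer the more certain one
print(Sigma_fused)  # smaller than either input covariance
```

The fused mean always falls between the two inputs and is pulled toward whichever estimate reports lower uncertainty, which matches the abstract's claim that the geometric and motion-model poses are weighted by their uncertainties.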