We present PICCOLO, a simple and efficient algorithm for omnidirectional localization. Given a colored point cloud and a 360° panorama image of a scene, our objective is to recover the camera pose at which the panorama image was taken. Our pipeline works in an off-the-shelf manner with a single image given as a query and does not require any training of neural networks or collecting ground-truth poses of images. Instead, we match each point cloud color to the holistic view of the panorama image with gradient-descent optimization to find the camera pose. Our loss function, called sampling loss, is point cloud-centric: it is evaluated at the projected location of every point in the point cloud. In contrast, conventional photometric loss is image-centric, comparing colors at each pixel location. With this simple change in the compared entities, sampling loss effectively overcomes the severe visual distortion of omnidirectional images and exploits the global context of the 360° view to handle challenging scenarios for visual localization. PICCOLO outperforms existing omnidirectional localization algorithms in both accuracy and stability when evaluated in various environments.
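The point cloud-centric idea can be illustrated with a minimal NumPy sketch: every 3D point is projected onto the equirectangular panorama, and the point's stored color is compared against the panorama color sampled at that projected pixel. The projection convention, nearest-neighbor sampling, and mean-squared-error form below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sampling_loss(points, point_colors, pano, R, t):
    """Sketch of a point cloud-centric sampling loss (assumed formulation).

    points: (N, 3) world-frame point cloud, point_colors: (N, 3) RGB,
    pano: (H, W, 3) equirectangular panorama, R/t: candidate camera pose.
    """
    H, W, _ = pano.shape
    # Transform points into the camera frame (t = camera center in world).
    cam = (points - t) @ R.T
    # Spherical angles: azimuth in [-pi, pi], elevation in [-pi/2, pi/2].
    az = np.arctan2(cam[:, 0], cam[:, 2])
    el = np.arcsin(cam[:, 1] / np.linalg.norm(cam, axis=1))
    # Equirectangular pixel coordinates (nearest-neighbor sampling).
    u = ((az + np.pi) / (2 * np.pi) * W).astype(int) % W
    v = ((el + np.pi / 2) / np.pi * H).astype(int).clip(0, H - 1)
    sampled = pano[v, u]  # panorama colors at the projected locations
    # Error is averaged over all points, not over all pixels.
    return np.mean((sampled - point_colors) ** 2)
```

Because the loss is a mean over points rather than pixels, every point contributes regardless of where it lands on the heavily distorted equirectangular grid; in practice the pose would be refined by gradient descent on `R` and `t` with a differentiable (e.g. bilinear) sampler in place of the integer indexing above.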