Camera pose estimation in known scenes is a 3D geometry task recently tackled by multiple learning algorithms. Many regress precise geometric quantities, like poses or 3D points, from an input image. This either fails to generalize to new viewpoints or ties the model parameters to a specific scene. In this paper, we go Back to the Feature: we argue that deep networks should focus on learning robust and invariant visual features, while the geometric estimation should be left to principled algorithms. We introduce PixLoc, a scene-agnostic neural network that estimates an accurate 6-DoF pose from an image and a 3D model. Our approach is based on the direct alignment of multiscale deep features, casting camera localization as metric learning. PixLoc learns strong data priors by end-to-end training from pixels to pose and exhibits exceptional generalization to new scenes by separating model parameters and scene geometry. The system can localize in large environments given coarse pose priors but also improve the accuracy of sparse feature matching by jointly refining keypoints and poses with little overhead. The code will be publicly available at https://github.com/cvg/pixloc.
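To make the "direct alignment of multiscale deep features" concrete, below is a minimal, self-contained sketch of one damped Gauss-Newton step of featuremetric alignment: 3D points are projected into the query image under the current pose, deep features are sampled there, and the pose is updated to reduce the difference to reference features. This is an illustrative simplification, not the PixLoc implementation: it uses a single feature level and a numerical Jacobian, whereas the actual system aligns multiscale features with a learned optimizer; all function names here are hypothetical.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def bilinear_sample(fmap, uv):
    """Sample a C x H x W feature map at sub-pixel locations uv (N x 2)."""
    C, H, W = fmap.shape
    u = np.clip(uv[:, 0], 0, W - 1.001)
    v = np.clip(uv[:, 1], 0, H - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    f00 = fmap[:, v0, u0];     f01 = fmap[:, v0, u0 + 1]
    f10 = fmap[:, v0 + 1, u0]; f11 = fmap[:, v0 + 1, u0 + 1]
    f = (f00 * (1 - du) * (1 - dv) + f01 * du * (1 - dv)
         + f10 * (1 - du) * dv + f11 * du * dv)
    return f.T  # N x C

def residuals(xi, R0, t0, pts3d, f_ref, fmap_q, K):
    """Featuremetric residuals for a pose perturbation xi = (rotation, translation)."""
    R = rodrigues(xi[:3]) @ R0
    t = t0 + xi[3:]
    p_cam = pts3d @ R.T + t              # 3D points in the query camera frame
    uv = p_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]          # perspective projection to pixels
    f_q = bilinear_sample(fmap_q, uv)    # query features at the projections
    return (f_q - f_ref).ravel()

def gauss_newton_step(R0, t0, pts3d, f_ref, fmap_q, K, damping=1e-3, eps=1e-4):
    """One damped Gauss-Newton step over the 6 pose parameters (numerical Jacobian)."""
    xi0 = np.zeros(6)
    r0 = residuals(xi0, R0, t0, pts3d, f_ref, fmap_q, K)
    J = np.zeros((r0.size, 6))
    for i in range(6):
        xi = xi0.copy()
        xi[i] += eps
        J[:, i] = (residuals(xi, R0, t0, pts3d, f_ref, fmap_q, K) - r0) / eps
    H = J.T @ J + damping * np.eye(6)    # damped normal equations
    delta = -np.linalg.solve(H, J.T @ r0)
    return rodrigues(delta[:3]) @ R0, t0 + delta[3:]
```

Iterating this step from a coarse initial pose, and doing so from coarse to fine feature levels, is the basic mechanism behind casting localization as feature alignment; in the paper the features themselves are trained end-to-end so that this optimization converges to the correct pose.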