Local feature matching is essential for many applications, such as localization and 3D reconstruction. However, accurately matching feature points across varying camera viewpoints and illumination conditions remains challenging. In this paper, we propose a framework that robustly extracts and describes salient local features regardless of changes in lighting and viewpoint. The framework suppresses illumination variations and emphasizes structural information, ignoring lighting-induced noise and focusing on edges. We classify the elements of the feature covariance matrix, which implicitly encodes feature-map information, into two components. Our model extracts feature points from salient regions, reducing incorrect matches. In our experiments, the proposed method achieved higher accuracy than state-of-the-art methods on public datasets such as HPatches, Aachen Day-Night, and ETH, which exhibit large variations in viewpoint and illumination.
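The abstract mentions classifying the elements of a feature covariance matrix into two components without specifying the split. As a minimal illustration only, not the paper's actual method, the sketch below computes the channel covariance of a CNN-style feature map and separates it into diagonal (per-channel variance) and off-diagonal (cross-channel correlation) parts; the function names, shapes, and the particular split are all assumptions for exposition.

```python
import numpy as np

def feature_covariance(fmap):
    """Channel covariance of a feature map.

    fmap: array of shape (C, H, W). Returns a (C, C) covariance matrix.
    Illustrative only; the paper's actual construction may differ.
    """
    C = fmap.shape[0]
    X = fmap.reshape(C, -1)                    # flatten spatial dims: (C, H*W)
    X = X - X.mean(axis=1, keepdims=True)      # center each channel
    return (X @ X.T) / X.shape[1]

def split_components(cov):
    """One plausible two-way split of covariance elements:
    diagonal (channel strength) vs. off-diagonal (channel correlation)."""
    diag = np.diag(np.diag(cov))
    return diag, cov - diag

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 16, 16))        # toy 8-channel feature map
cov = feature_covariance(fmap)
diag_part, offdiag_part = split_components(cov)
assert np.allclose(diag_part + offdiag_part, cov)
```

The split is exact by construction (the two components sum back to the full matrix), which is the property any such element-wise classification should preserve.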