Graph matching (GM) has been a building block in many areas, including computer vision and pattern recognition. Despite recent impressive progress, existing deep GM methods often have difficulty handling outliers in both graphs, which are ubiquitous in practice. We propose RGM, a deep reinforcement learning (RL) based approach to weighted graph matching, whose sequential node-matching scheme naturally fits the strategy of selectively matching inliers against outliers and also supports seeded graph matching. A revocable action scheme is devised to improve the agent's flexibility on this complex constrained matching task. Moreover, we propose a quadratic approximation technique to regularize the affinity matrix in the presence of outliers. As such, the RL agent can terminate inlier matching in time once the objective score stops growing; otherwise, an additional hyperparameter, i.e. the number of common inliers, would be needed to avoid matching outliers. In this paper, we focus on learning the back-end solver for the most general form of GM, Lawler's QAP, whose input is the affinity matrix. Our approach can also boost other solvers that take the affinity matrix as input. Experimental results on both synthetic and real-world datasets showcase its superior performance in both matching accuracy and robustness.
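For context on the objective the solver optimizes: Lawler's QAP scores a matching X between two graphs by the quadratic form vec(X)^T K vec(X), where K is the affinity matrix. Below is a minimal illustrative sketch (not the authors' implementation; the function name and layout convention are assumptions), which shows how an affinity matrix with node affinities on the diagonal scores a candidate assignment.

```python
import numpy as np

def qap_score(K, X):
    """Lawler's QAP objective: vec(X)^T K vec(X).

    K : (n1*n2, n1*n2) affinity matrix; diagonal entries hold node-to-node
        affinities, off-diagonal entries hold edge-to-edge affinities.
    X : (n1, n2) binary (partial) assignment matrix.
    Note: vec(X) here uses row-major flattening; K must follow the
    same index convention.
    """
    x = X.reshape(-1).astype(float)
    return float(x @ K @ x)

# Toy example: two graphs with 2 nodes each, and an affinity matrix
# that rewards matching node i to node i.
K = np.diag([2.0, 0.0, 0.0, 2.0])
X_good = np.eye(2)                       # identity matching
X_bad = np.array([[0.0, 1.0],
                  [1.0, 0.0]])           # swapped matching
# qap_score(K, X_good) == 4.0, qap_score(K, X_bad) == 0.0
```

The solver's task is to find the assignment X (subject to one-to-one matching constraints) that maximizes this score; with outliers present, a good partial assignment that leaves outliers unmatched can outscore a full one.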