Graph matching (GM) has been a building block in various areas including computer vision and pattern recognition. Despite recent impressive progress, existing deep GM methods often struggle to handle outliers, which are ubiquitous in practice. We propose RGM, a deep reinforcement learning based approach whose sequential node-matching scheme naturally fits the strategy of selectively matching inliers while rejecting outliers. A revocable action framework is devised to improve the agent's flexibility on the complex, constrained GM problem. Moreover, we propose a quadratic approximation technique to regularize the affinity score in the presence of outliers. As such, the agent can terminate inlier matching in time once the affinity score stops growing; otherwise, an additional parameter, i.e. the number of inliers, would be needed to avoid matching outliers. In this paper, we focus on learning the back-end solver under the most general form of GM, Lawler's QAP, whose input is the affinity matrix. In particular, our approach can also boost existing GM methods that use such input. Experiments on multiple real-world datasets demonstrate its competitive accuracy and robustness.
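For reference, a minimal sketch of Lawler's QAP formulation referred to above (standard notation, not taken from this abstract): with $\mathbf{K}$ the affinity matrix and $\mathbf{X}$ a (partial) assignment matrix, the solver maximizes

\[
\max_{\mathbf{X}} \ \operatorname{vec}(\mathbf{X})^{\top} \mathbf{K}\, \operatorname{vec}(\mathbf{X})
\quad \text{s.t.} \quad
\mathbf{X} \in \{0,1\}^{n_1 \times n_2},\ \
\mathbf{X}\mathbf{1} \le \mathbf{1},\ \
\mathbf{X}^{\top}\mathbf{1} \le \mathbf{1},
\]

where the inequality constraints allow nodes (e.g. outliers) to remain unmatched.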