We study explainable AI (XAI) for face recognition, focusing on the face verification task. Face verification has become a crucial task nowadays and has been deployed in plenty of applications, such as access control, surveillance, and automatic personal log-on for mobile devices. With the increasing amount of data, deep convolutional neural networks can achieve very high accuracy on the face verification task. Beyond this exceptional performance, deep face verification models need more interpretability so that we can trust the results they generate. In this paper, we propose a novel similarity metric, called explainable cosine ($xCos$), that comes with a learnable module which can be plugged into most verification models to provide meaningful explanations. With the help of $xCos$, we can see which parts of the two input faces are similar, where the model pays its attention, and how the local similarities are weighted to form the output $xCos$ score. We demonstrate the effectiveness of our proposed method on LFW and various competitive benchmarks: it not only provides novel and desirable model interpretability for face verification but also maintains accuracy when plugged into existing face recognition models.
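For concreteness, the weighting mechanism described above can be sketched as follows; the symbols (local feature vectors $f^{(1)}_{ij}, f^{(2)}_{ij}$ on an $H \times W$ spatial grid and attention weights $w_{ij}$) are illustrative assumptions rather than the paper's exact notation:
$$
xCos(I_1, I_2) \;=\; \sum_{i=1}^{H} \sum_{j=1}^{W} w_{ij}\, \cos\!\left(f^{(1)}_{ij},\, f^{(2)}_{ij}\right), \qquad \sum_{i,j} w_{ij} = 1,
$$
where $\cos(\cdot,\cdot)$ denotes the cosine similarity between corresponding local feature vectors of the two faces, and the $w_{ij}$ are produced by the learnable attention module, so the final score is an attention-weighted aggregation of local similarities.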