Recently, face recognition systems have demonstrated remarkable performance and have thus gained a vital role in our daily lives. They already surpass human performance in face verification in many scenarios. However, they lack explanations for their predictions. Compared to human operators, a typical face recognition system generates only a binary decision, without further explanation of or insight into that decision. This work focuses on explanations for face recognition systems, which are vital for developers and operators. First, we introduce a confidence score for these systems based on the facial feature distance between two input images and the distribution of distances across a dataset. Second, we establish a novel visualization approach that obtains more meaningful predictions from a face recognition system by mapping the distance deviation under systematic occlusion of the input images. The result is blended with the original images and highlights similar and dissimilar facial regions. Lastly, we compute confidence scores and explanation maps for several state-of-the-art face verification datasets and release the results on a web platform. We optimize the platform for user-friendly interaction and hope to further improve the understanding of machine learning decisions. The source code is available on GitHub, and the web platform is publicly available at http://explainable-face-verification.ey.r.appspot.com.
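The abstract does not give the exact formula for the confidence score, only that it combines the pair's feature distance with the distribution of distances over a dataset. The following is a minimal sketch of one plausible reading, assuming cosine distance between embeddings and empirical genuine/impostor distance distributions; the names `embed_fn`, `genuine_dists`, and `impostor_dists` are illustrative, not the paper's API.

```python
import numpy as np

def cos_dist(a, b):
    """Cosine distance between two face embeddings."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def confidence_score(d, genuine_dists, impostor_dists):
    """Map a pair distance d to a [0, 1] confidence using the empirical
    genuine/impostor distance distributions of a reference dataset.

    Assumption: the score reflects how strongly the dataset distributions
    support a 'same person' decision at distance d. p_genuine is the
    fraction of genuine pairs at least as far apart as d; p_impostor is
    the fraction of impostor pairs at least as close as d.
    """
    genuine_dists = np.asarray(genuine_dists)
    impostor_dists = np.asarray(impostor_dists)
    p_genuine = np.mean(genuine_dists >= d)
    p_impostor = np.mean(impostor_dists <= d)
    # Close pairs (small d) yield a score near 1, distant pairs near 0.
    return float(p_genuine / (p_genuine + p_impostor + 1e-12))
```

Under this reading, a pair whose distance is smaller than almost all impostor distances receives a confidence near 1, while a pair deep inside the impostor distribution receives a confidence near 0.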
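The visualization approach is described as mapping the distance deviation under systematic occlusion and blending the result with the original image. Below is a hedged sketch of that pipeline, reusing `cos_dist` from the sketch above; the patch size, stride, gray-box occlusion, and the `embed_fn` callable are assumptions, not details taken from the paper.

```python
import numpy as np

def occlusion_explanation_map(img, ref_emb, embed_fn, patch=16, stride=8):
    """Slide an occluding patch over `img`, re-embed each occluded copy,
    and record how its distance to `ref_emb` deviates from the
    unoccluded baseline. Regions whose occlusion increases the distance
    contribute to similarity; regions that decrease it are dissimilar.
    """
    base_d = cos_dist(embed_fn(img), ref_emb)
    h, w = img.shape[:2]
    heat = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = img.copy()
            occluded[y:y + patch, x:x + patch] = img.mean()  # gray-box occlusion
            d = cos_dist(embed_fn(occluded), ref_emb)
            heat[y:y + patch, x:x + patch] += d - base_d
            count[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(count, 1)  # average deviation per pixel

def blend_heatmap(img, heat, alpha=0.5):
    """Overlay the normalized deviation map on the original image,
    as in the blending step described in the abstract."""
    norm = (heat - heat.min()) / (np.ptp(heat) + 1e-12)
    return np.clip((1 - alpha) * img + alpha * norm[..., None] * 255.0, 0, 255)
```

A usage example under the same assumptions: compute the map for each image of a pair against the other image's embedding, then blend, so that both blended outputs highlight the facial regions driving the verification decision.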