The task of detecting morphed face images has become highly relevant in recent years to ensure the security of automatic verification systems based on facial images, e.g. automated border control gates. Detection methods based on Deep Neural Networks (DNNs) have proven very suitable for this purpose. However, they do not provide transparency in their decision making, and it is not clear how they distinguish between genuine and morphed face images. This is particularly relevant for systems intended to assist a human operator, who should be able to understand the reasoning. In this paper, we tackle this problem and present Focused Layer-wise Relevance Propagation (FLRP). This framework explains to a human inspector, at a precise pixel level, which image regions a Deep Neural Network uses to distinguish between a genuine and a morphed face image. Additionally, we propose a framework to objectively analyze the quality of our method and to compare FLRP to other DNN interpretability methods. This evaluation framework is based on removing the detected artifacts and analyzing the influence of these changes on the decision of the DNN. In particular, when the DNN is uncertain in its decision or even incorrect, FLRP performs much better than other methods at highlighting visible artifacts.
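The core idea behind Layer-wise Relevance Propagation, on which FLRP builds, is to redistribute a network's output score backwards through the layers so that every input pixel receives a relevance value. The following is a minimal sketch of the standard LRP epsilon rule for dense layers only, not the FLRP variant proposed in the paper; the toy network, its random weights, and the helper `lrp_dense` are purely illustrative:

```python
import numpy as np

def lrp_dense(a, w, b, relevance, eps=1e-6):
    """LRP epsilon rule for one dense layer:
    R_j = a_j * sum_k w_jk * R_k / (z_k + eps * sign(z_k)),
    so relevance flowing into each output unit is split among the
    inputs in proportion to their contribution a_j * w_jk."""
    z = a @ w + b                            # forward pre-activations, shape (out,)
    s = relevance / (z + eps * np.sign(z))   # stabilized per-output factors
    return a * (w @ s)                       # relevance per input unit, shape (in,)

# Tiny two-layer toy network with fixed random weights (hypothetical example).
rng = np.random.default_rng(0)
a0 = rng.random(4)                           # stand-in for input "pixels"
w1, b1 = rng.standard_normal((4, 3)), np.zeros(3)
w2, b2 = rng.standard_normal((3, 2)), np.zeros(2)

a1 = np.maximum(0.0, a0 @ w1 + b1)           # ReLU hidden layer
logits = a1 @ w2 + b2                        # two output classes

# Start from one output logit (e.g. the "morphed" class) and propagate back.
r2 = np.zeros(2)
r2[0] = logits[0]
r1 = lrp_dense(a1, w2, b2, r2)
r0 = lrp_dense(a0, w1, b1, r1)
# r0 assigns a relevance score to every input, i.e. a pixel-level heatmap.
```

Because the epsilon term is small relative to the pre-activations, the rule is approximately conservative: the relevance placed on the output logit is (up to the stabilizer) preserved as it flows back to the inputs, which is what lets the final scores be read as a heatmap over pixels.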