Why should we trust the detections that deep neural networks make on manipulated faces? Understanding the reasons behind these detections helps users improve the fairness, reliability, privacy, and trustworthiness of the detection models. In this work, we propose an interpretable face manipulation detection approach that achieves trustworthy and accurate inference. The approach makes the face manipulation detection process transparent by embedding a feature whitening module. This module whitens the internal working mechanism of the deep network through feature decorrelation and a feature constraint. The experimental results demonstrate that the proposed approach strikes a balance between detection accuracy and model interpretability.
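To make the feature decorrelation idea concrete, the following is a minimal PyTorch sketch of a ZCA-style whitening layer that decorrelates channel features so each dimension can be inspected more independently. The class name `FeatureWhitening`, the 2-D input shape, and all implementation details are illustrative assumptions for exposition, not the paper's actual module.

```python
import torch
import torch.nn as nn


class FeatureWhitening(nn.Module):
    """Illustrative ZCA-style whitening layer (assumed design, not the paper's exact module).

    Decorrelates the channel dimensions of a (batch, channels) feature matrix,
    which is one common way to make internal representations easier to interpret.
    """

    def __init__(self, num_features: int, eps: float = 1e-5):
        super().__init__()
        self.num_features = num_features
        self.eps = eps  # numerical stabilizer added to the covariance diagonal

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels); flatten any spatial dimensions before calling this layer.
        mean = x.mean(dim=0, keepdim=True)
        xc = x - mean                                        # center the features
        cov = xc.t() @ xc / (x.size(0) - 1)                  # channel covariance matrix
        cov = cov + self.eps * torch.eye(self.num_features, device=x.device)
        eigvals, eigvecs = torch.linalg.eigh(cov)            # eigendecomposition of symmetric cov
        # ZCA whitening matrix: E diag(1/sqrt(lambda)) E^T
        whiten = eigvecs @ torch.diag(eigvals.rsqrt()) @ eigvecs.t()
        return xc @ whiten                                   # decorrelated (whitened) features


# Example usage on a batch of pooled backbone features (shapes are hypothetical):
features = torch.randn(32, 256)          # 32 samples, 256-dimensional features
whitened = FeatureWhitening(256)(features)
```

Under this sketch, the whitened output has (approximately) identity covariance over the batch, which is the sense in which decorrelation makes individual feature dimensions easier to attribute and inspect.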