Presentation attack detection (PAD) is a critical component of secure face authentication. We present a PAD algorithm that distinguishes face spoofs generated from a photograph of a subject from live images. Our method uses an image-decomposition network to extract albedo and surface-normal maps. The domain gap between real and spoof face images produces easily identifiable differences, especially in the recovered albedo maps. We widen this domain gap by retraining existing decomposition methods with a supervised contrastive loss. We present empirical and theoretical analyses demonstrating that contrast and lighting effects can play a significant role in PAD; these effects show up particularly in the recovered albedo. Finally, we demonstrate that combining all of these methods achieves state-of-the-art results in both the intra-dataset setting on the CelebA-Spoof, OULU, and CASIA-SURF datasets and the inter-dataset setting on the SiW, CASIA-MFSD, Replay-Attack, and MSU-MFSD datasets.
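The supervised contrastive objective mentioned above pulls embeddings of same-class samples (e.g. live vs. spoof) together while pushing different-class samples apart. A minimal NumPy sketch of the batch-wise supervised contrastive loss of Khosla et al. is shown below; the function name, temperature value, and two-class labeling are illustrative assumptions, not details from this paper.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Batch-wise supervised contrastive loss (illustrative sketch).

    features: (N, D) array of embeddings (L2-normalized internally)
    labels:   (N,) integer class labels, e.g. 0 = live, 1 = spoof
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    n = features.shape[0]

    # Cosine similarities scaled by temperature.
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-contrast

    # Row-wise log-softmax over all other samples in the batch.
    m = np.max(sim, axis=1, keepdims=True)
    lse = m + np.log(np.sum(np.exp(sim - m), axis=1, keepdims=True))
    log_prob = sim - lse

    # Positives: same label, excluding the anchor itself.
    mask = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    pos_counts = mask.sum(axis=1)

    # Mean negative log-probability over each anchor's positives.
    per_anchor = -np.where(mask, log_prob, 0.0).sum(axis=1) / np.maximum(pos_counts, 1)
    return per_anchor[pos_counts > 0].mean()
```

Intuitively, when embeddings of real and spoof faces are well separated the loss is near zero, and it grows as the classes mix, which is the "enhanced domain gap" effect the abstract describes.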