Face Anti-spoofing (FAS) is a challenging problem due to complex serving scenarios and diverse face presentation attack patterns. In particular, when captured images are low-resolution, blurry, and drawn from different domains, FAS performance degrades significantly. Existing multi-modal FAS datasets rarely address the cross-domain problems that arise in deployment scenarios, which hinders the study of model performance. To address these problems, we explore the fine-grained differences between multi-modal cameras and construct a cross-domain multi-modal FAS dataset under surveillance scenarios, called GREAT-FASD-S. In addition, we propose an Attention-based Face Anti-spoofing network with Feature Augment (AFA) to tackle FAS on low-quality face images. It consists of a depthwise separable attention module (DAM) and a multi-modal-based feature augment module (MFAM). Our model achieves state-of-the-art performance on the CASIA-SURF dataset and our proposed GREAT-FASD-S dataset.