Face anti-spoofing (FAS) approaches based on unsupervised domain adaptation (UDA) have drawn growing attention due to their promising performance in target scenarios. Most existing UDA FAS methods fit the trained models to the target domain by aligning the distributions of semantic high-level features. However, insufficient supervision of unlabeled target domains and neglect of low-level feature alignment degrade the performance of existing methods. To address these issues, we propose a novel perspective on UDA FAS that directly fits the target data to the models, i.e., stylizes the target data into the source-domain style via image translation, and then feeds the stylized data into the well-trained source model for classification. The proposed Generative Domain Adaptation (GDA) framework combines two carefully designed consistency constraints: 1) inter-domain neural statistic consistency guides the generator in narrowing the inter-domain gap; 2) dual-level semantic consistency ensures the semantic quality of the stylized images. In addition, we propose intra-domain spectrum mixup to further expand the target data distribution, ensuring generalization and reducing the intra-domain gap. Extensive experiments and visualizations demonstrate the effectiveness of our method against state-of-the-art methods.
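As a rough illustration of the intra-domain spectrum mixup idea, the sketch below (our own assumption, not the paper's implementation) mixes two target images in the Fourier amplitude domain while keeping the phase spectrum, which largely carries semantic content, from the first image:

```python
import numpy as np

def spectrum_mixup(img_a, img_b, lam):
    """Interpolate the amplitude spectra of two images while
    keeping img_a's phase (semantics) -- a hypothetical sketch
    of intra-domain spectrum mixup on a single-channel image."""
    fft_a = np.fft.fft2(img_a)
    fft_b = np.fft.fft2(img_b)
    amp_a, pha_a = np.abs(fft_a), np.angle(fft_a)
    amp_b = np.abs(fft_b)
    # Mix only the amplitude (style/low-level statistics).
    amp_mix = lam * amp_a + (1 - lam) * amp_b
    # Recombine with img_a's phase and invert the transform.
    mixed = np.fft.ifft2(amp_mix * np.exp(1j * pha_a))
    return np.real(mixed)
```

With `lam = 1` the input `img_a` is recovered; intermediate values of `lam` interpolate the low-level style between the two target samples without altering the content layout of `img_a`.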