Current domain adaptation methods for face anti-spoofing leverage labeled source domain data and unlabeled target domain data to learn a generalizable decision boundary. However, these methods usually struggle to achieve a perfect disentanglement of domain-invariant liveness features, and the final classification performance may degrade due to domain differences in illumination, face category, spoof type, etc. In this work, we tackle cross-scenario face anti-spoofing by proposing a novel domain adaptation method called the cyclically disentangled feature translation network (CDFTN). Specifically, CDFTN generates pseudo-labeled samples that possess 1) source domain-invariant liveness features and 2) target domain-specific content features, which are disentangled through domain adversarial training. A robust classifier is then trained on the synthetic pseudo-labeled images under the supervision of source domain labels. We further extend CDFTN to multi-target domain adaptation by leveraging data from additional unlabeled target domains. Extensive experiments on several public datasets demonstrate that our proposed approach significantly outperforms the state of the art.
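As a rough illustration of the feature-translation idea summarized above, the following PyTorch-style sketch shows how a pseudo-labeled sample can combine a source liveness code with a target content code and how a classifier can be supervised with the source label. All module names (Encoder, Generator, domain_disc), layer sizes, and the simplified adversarial term are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of cross-domain disentangled feature translation (assumed
# PyTorch setting; architectures and losses are placeholders, not CDFTN itself).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image to a feature code (used for liveness or content)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes a (liveness, content) code pair back into a 32x32 image."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.fc = nn.Linear(2 * code_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, liveness_code, content_code):
        h = self.fc(torch.cat([liveness_code, content_code], dim=1))
        return self.net(h.view(-1, 64, 8, 8))

# Liveness features should be domain-invariant; content features domain-specific.
liveness_enc, content_enc = Encoder(), Encoder()
generator = Generator()
classifier = nn.Linear(128, 2)      # live vs. spoof
domain_disc = nn.Linear(128, 2)     # adversary on liveness codes

x_src = torch.randn(4, 3, 32, 32)   # labeled source faces (dummy data)
y_src = torch.randint(0, 2, (4,))   # live/spoof labels
x_tgt = torch.randn(4, 3, 32, 32)   # unlabeled target faces (dummy data)

# Translate: source liveness code + target content code -> synthetic image
# that inherits the source label y_src but looks like the target domain.
pseudo = generator(liveness_enc(x_src), content_enc(x_tgt))

# The classifier is trained on the synthetic pseudo-labeled images under
# source-domain supervision.
cls_loss = nn.CrossEntropyLoss()(classifier(liveness_enc(pseudo)), y_src)

# Adversarial term (sketch): the discriminator tries to tell source vs. target
# liveness codes apart; the encoder side flips the sign (in practice this is
# done with alternating updates or a gradient reversal layer) so that liveness
# features are pushed toward domain invariance.
dom_logits = domain_disc(torch.cat([liveness_enc(x_src), liveness_enc(x_tgt)]))
dom_labels = torch.cat([torch.zeros(4), torch.ones(4)]).long()
adv_loss = -nn.CrossEntropyLoss()(dom_logits, dom_labels)
```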