Recapturing and rebroadcasting of images are common attack methods in insurance fraud and face identification spoofing, and an increasing number of detection techniques have been introduced to address this problem. However, most of them ignore the domain generalization scenario and scale variances, performing poorly under domain shift and being further degraded by intra-domain and inter-domain scale variances. In this paper, we propose a scale alignment domain generalization framework (SADG) to address these challenges. First, an adversarial domain discriminator is exploited to minimize the discrepancies of image representation distributions among different domains. Meanwhile, we employ triplet loss as a local constraint to achieve a clearer decision boundary. Moreover, a scale alignment loss is introduced as a global relationship regularization to force the image representations of the same class across different scales to be indistinguishable. Experimental results on four databases and comparisons with state-of-the-art approaches show that our framework achieves better performance.
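As a rough illustration of the three constraints named above, the following PyTorch-style sketch assembles a combined objective from an adversarial domain discriminator (via gradient reversal), a triplet loss, and a scale alignment term. It is not the authors' implementation: the function and module names (`sadg_losses`, `domain_disc`, `cls_head`), the loss weights, the triplet mining strategy, and the MSE formulation of the scale alignment term are all assumptions made for the sketch.

```python
# Minimal sketch, assuming a PyTorch setup; placeholder names and loss
# formulations are hypothetical, not the paper's released code.
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used for adversarial domain discrimination."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient pushes the backbone toward domain-invariant features.
        return -ctx.lambd * grad_output, None


def sadg_losses(feat_s1, feat_s2, labels, domains, domain_disc, cls_head,
                lambd=1.0, margin=0.3):
    """Combined objective sketched from the abstract.

    feat_s1, feat_s2: features of the same images at two scales, shape (N, D).
    labels: class labels (N,); domains: source-domain indices (N,).
    domain_disc, cls_head: nn.Module classifiers over the features.
    """
    # Classification loss on one scale (could also be averaged over both scales).
    loss_cls = F.cross_entropy(cls_head(feat_s1), labels)

    # Adversarial domain discriminator applied to gradient-reversed features.
    loss_dom = F.cross_entropy(domain_disc(GradReverse.apply(feat_s1, lambd)), domains)

    # Triplet loss as a local constraint for a clearer decision boundary.
    # Simplified mining: the same image at the other scale acts as the positive,
    # a shifted batch stands in for negatives; real mining should pick
    # different-class samples.
    loss_tri = F.triplet_margin_loss(feat_s1, feat_s2,
                                     feat_s2.roll(1, dims=0), margin=margin)

    # Scale alignment loss (assumed here as MSE between class-wise mean features
    # of the two scales) so that same-class representations across scales
    # stay indistinguishable.
    loss_scale = feat_s1.new_zeros(())
    for c in labels.unique():
        mask = labels == c
        loss_scale = loss_scale + F.mse_loss(feat_s1[mask].mean(0),
                                             feat_s2[mask].mean(0))

    return loss_cls + loss_dom + loss_tri + loss_scale
```

In this sketch the four terms are summed with unit weights; in practice the relative weighting of the adversarial, triplet, and scale alignment terms would be tuned on the source domains.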