Unsupervised domain adaptation has been widely adopted to generalize models to unlabeled data in a target domain, given labeled data in a source domain whose data distribution differs from that of the target domain. However, existing works are inapplicable to face recognition under privacy constraints because they require sharing sensitive face images between the two domains. To address this problem, we propose a novel unsupervised federated face recognition approach (FedFR). FedFR improves performance in the target domain by iteratively aggregating knowledge from the source domain through federated learning. It protects data privacy by transferring models instead of raw data between domains. In addition, we propose a new domain constraint loss (DCL) to regularize source domain training; DCL suppresses the data volume dominance of the source domain. We also enhance a hierarchical clustering algorithm to accurately predict pseudo labels for the unlabeled target domain. To this end, FedFR forms an end-to-end training pipeline: (1) pre-train in the source domain; (2) predict pseudo labels by clustering in the target domain; (3) conduct domain-constrained federated learning across the two domains. Extensive experiments and analysis on two newly constructed benchmarks demonstrate the effectiveness of FedFR. It outperforms the baseline and classic methods in the target domain by over 4% on the more realistic benchmark. We believe that FedFR will shed light on applying federated learning to more computer vision tasks under privacy constraints.
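To make the aggregation step concrete, the sketch below is a deliberately simplified, hypothetical illustration (not the authors' implementation): models are plain weight vectors, local "training" is a few SGD steps on a least-squares objective, and the server averages per-domain updates with equal weights rather than data-volume weights, mimicking how the domain constraint loss (DCL) is meant to suppress the source domain's data-volume dominance. Only models, never raw data, cross the domain boundary.

```python
def local_step(weights, data, lr=0.01):
    """One pass of per-sample SGD on a client's local least-squares data.
    Only the updated model leaves the client; raw data stays local."""
    new = list(weights)
    for x, y in data:
        err = sum(w * xi for w, xi in zip(new, x)) - y
        new = [w - lr * err * xi for w, xi in zip(new, x)]
    return new

def federated_round(global_w, domains):
    """Aggregate per-domain updates with EQUAL weights (not data volume),
    a toy stand-in for the domain-constrained aggregation described above."""
    updated = [local_step(global_w, data) for data in domains]
    return [sum(ws) / len(ws) for ws in zip(*updated)]

# Source domain holds 10x more samples than the target domain,
# but both are drawn from the same linear rule y = 2 + 3x.
source = [((1.0, i / 10), 2 + 3 * (i / 10)) for i in range(50)]
target = [((1.0, float(i)), 2 + 3 * i) for i in range(5)]

w = [0.0, 0.0]
for _ in range(200):
    w = federated_round(w, [source, target])
# w converges toward [2, 3] without the source domain dominating the average.
```

Weighting domains equally instead of by sample count is the simplest possible analogue of DCL's goal; the actual FedFR loss operates on the source-domain training objective itself.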