With growing concerns over privacy in face recognition, federated learning has emerged as one of the most prevalent approaches to studying the unconstrained face recognition problem with private decentralized data. However, conventional decentralized federated algorithms, which share the whole network parameters among clients, suffer from privacy leakage in the face recognition setting. In this work, we introduce a framework, FedGC, to tackle federated learning for face recognition and to guarantee stronger privacy. We explore a novel idea of correcting gradients from the perspective of backward propagation and propose a softmax-based regularizer that corrects the gradients of class embeddings by precisely injecting a cross-client gradient term. Theoretically, we show that FedGC constitutes a valid loss function similar to the standard softmax. Extensive experiments on several popular benchmark datasets validate the superiority of FedGC, which matches the performance of conventional centralized methods trained on the full training dataset.
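To make the core idea concrete, below is a minimal PyTorch sketch of a softmax-based regularizer over class embeddings. The abstract does not specify the exact form of the regularizer, so this is an assumption: we treat each class embedding as a "feature" labeled with its own class, so that the softmax cross-entropy produces, on every other embedding, exactly the kind of cross-client gradient term that a client's purely local softmax loss cannot generate. The function name `gradient_correction_reg` and the `scale` parameter are hypothetical.

```python
import torch
import torch.nn.functional as F

def gradient_correction_reg(W: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Hypothetical softmax-based regularizer over class embeddings.

    W: [C, d] matrix of all clients' class embeddings gathered on the server.
    Treating each embedding w_j as a feature with label j, the cross-entropy
    below yields gradients on the other embeddings w_k (k != j) proportional
    to p_k * w_j -- a cross-client term injected into the class embeddings'
    gradients during backward propagation.
    """
    logits = scale * (W @ W.t())                        # [C, C] pairwise similarities
    targets = torch.arange(W.size(0), device=W.device)  # each embedding labeled as itself
    return F.cross_entropy(logits, targets)

# Toy usage: 8 classes spread across clients, 16-dim embeddings.
W = torch.randn(8, 16, requires_grad=True)
loss = gradient_correction_reg(F.normalize(W, dim=1))
loss.backward()
print(W.grad.shape)  # torch.Size([8, 16])
```

In a federated round, such a term would be added on the server after aggregating the clients' class embeddings, leaving the locally shared quantities untouched; this matches the abstract's claim that the correction acts on gradients rather than on exchanged parameters.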