Federated Learning has emerged to cope with rising concerns about privacy breaches in the use of Machine Learning and Deep Learning models. This new paradigm enables deep learning models to be trained in a distributed manner, enhancing privacy preservation. However, because the server cannot inspect local datasets, it is vulnerable to model poisoning attacks and data heterogeneity, both of which degrade global model performance. Numerous works have proposed robust aggregation algorithms and defensive mechanisms, but these approaches each address only individual attacks or issues. FedCC, the proposed method, provides robust aggregation by comparing the Centered Kernel Alignment (CKA) of penultimate layer representations across clients. Experimental results demonstrate that FedCC mitigates both untargeted model poisoning attacks and targeted backdoor attacks, while also remaining effective in non-Independently and Identically Distributed (non-IID) data environments. Against untargeted attacks, FedCC recovers global model accuracy the most among the compared methods; against targeted backdoor attacks, it nullifies attack confidence while preserving test accuracy. In most experiments, FedCC outperforms the baseline methods.
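For context on the core primitive named above, the following is a minimal sketch of the linear-kernel form of Centered Kernel Alignment (Kornblith et al., 2019), which measures similarity between two representation matrices. The function name `linear_cka` and the example inputs are illustrative assumptions, not necessarily FedCC's exact server-side procedure.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_examples, n_features). Returns a similarity
    score in [0, 1]; 1 means identical representations up to rotation
    and isotropic scaling."""
    # Center each feature dimension across examples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-based formulation specialized to the linear kernel.
    hsic_xy = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    hsic_xx = np.linalg.norm(X.T @ X, ord="fro")
    hsic_yy = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic_xy / (hsic_xx * hsic_yy)

# Illustrative usage: comparing two hypothetical clients' penultimate
# layer outputs on the same probe inputs. Scores near 1 indicate
# similar representations; a poisoned client's representations would
# be expected to score lower against the benign majority.
rng = np.random.default_rng(0)
a = rng.normal(size=(64, 128))
print(linear_cka(a, a))                              # ~1.0
print(linear_cka(a, rng.normal(size=(64, 128))))     # near 0
```

Under this reading, the server can score each client's update by its pairwise CKA agreement with the others and down-weight outliers during aggregation.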