This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks. Building on this analysis, we propose HNTROJ, a novel model transferring attack and the first of its kind, which transfers a locally trained backdoor-infected model to all legitimate personalized local models generated by the HyperNetFL model, using consistent and effective malicious local gradients computed across all compromised clients throughout the training process. As a result, HNTROJ reduces the number of compromised clients needed to launch the attack successfully, and it causes no observable sudden shifts or degradation in model utility on legitimate data samples, making the attack stealthy. To defend against HNTROJ, we adapt several backdoor-resistant FL training algorithms to HyperNetFL. Extensive experiments on several benchmark datasets show that HNTROJ significantly outperforms data poisoning and model replacement attacks and bypasses robust training algorithms.
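To make the attack mechanism concrete, the following is a minimal PyTorch sketch of the core idea sketched above: a compromised client, instead of returning the gradient of its task loss, returns a malicious gradient that pulls its personalized model (produced by the server-side hypernetwork) toward a locally trained backdoor-infected target. All names here (`HyperNet`, `theta_star`, `client_embedding`) are hypothetical illustrations under assumed dimensions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Server-side hypernetwork: maps a client embedding to the
    flattened weights of that client's personalized local model."""
    def __init__(self, embed_dim: int, target_numel: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, target_numel),
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.body(embedding)

embed_dim, target_numel = 16, 1000  # assumed toy sizes
hnet = HyperNet(embed_dim, target_numel)
server_opt = torch.optim.SGD(hnet.parameters(), lr=0.01)

# theta_star: flattened weights of a backdoor-infected model trained
# locally by the attacker (random placeholder in this sketch).
theta_star = torch.randn(target_numel)

# One round for a single compromised client.
client_embedding = torch.randn(embed_dim)
theta_i = hnet(client_embedding)  # personalized model weights

# Honest clients would return the task-loss gradient w.r.t. theta_i;
# the compromised client instead returns the gradient of
# 0.5 * ||theta_i - theta_star||^2, pulling theta_i toward the trojan.
malicious_grad = theta_i.detach() - theta_star

# The server applies the chain rule exactly as for honest gradients
# (a vector-Jacobian product), so the hypernetwork parameters drift
# toward generating trojaned personalized models for every client.
server_opt.zero_grad()
theta_i.backward(gradient=malicious_grad)
server_opt.step()
```

Because the malicious gradient has the same form as an honest local gradient, the server-side update rule needs no modification for the attack to take effect, which is consistent with the stealthiness claim above.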