For model privacy, local model parameters in federated learning must be obfuscated before being sent to the remote aggregator. This technique is referred to as \emph{secure aggregation}. However, secure aggregation makes model poisoning attacks such as backdooring easier to mount, since existing anomaly detection methods mostly require access to plaintext local models. This paper proposes SAFELearning, which supports backdoor detection for secure aggregation. We achieve this through two new primitives: \emph{oblivious random grouping (ORG)} and \emph{partial parameter disclosure (PPD)}. ORG partitions participants into one-time random subgroups with group configurations oblivious to participants; PPD allows secure partial disclosure of aggregated subgroup models for anomaly detection without leaking individual model privacy. SAFELearning can significantly reduce backdoor model accuracy without jeopardizing main task accuracy under common backdoor strategies. Extensive experiments show that SAFELearning is robust against malicious and faulty participants, while being more efficient than the state-of-the-art secure aggregation protocol in terms of both communication and computation costs.
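To make the ORG/PPD idea concrete, the following is a minimal, hypothetical Python sketch of the grouping-and-inspection logic described above. It deliberately omits the cryptographic masking of secure aggregation; the function names, the group size, and the norm-based anomaly test are illustrative assumptions, not the paper's actual protocol or API.

\begin{verbatim}
# Simplified sketch: one-time random subgroups + anomaly checks on
# aggregated subgroup models only (never on individual updates).
import numpy as np

rng = np.random.default_rng()

def random_groups(num_participants, group_size):
    """Partition participant indices into one-time random subgroups."""
    order = rng.permutation(num_participants)
    return [order[i:i + group_size]
            for i in range(0, num_participants, group_size)]

def flag_anomalous_groups(local_updates, groups, norm_threshold=5.0):
    """Flag subgroups whose *aggregated* update looks anomalous.

    Only per-group sums (the partially disclosed values) are inspected,
    mirroring the privacy goal of PPD; individual updates stay hidden.
    """
    flagged = []
    for gid, members in enumerate(groups):
        group_sum = np.sum([local_updates[i] for i in members], axis=0)
        if np.linalg.norm(group_sum) > norm_threshold:  # toy anomaly test
            flagged.append(gid)
    return flagged

if __name__ == "__main__":
    # 12 participants with 4-dimensional updates; one is "poisoned".
    updates = [rng.normal(0, 0.1, size=4) for _ in range(12)]
    updates[3] += 10.0  # exaggerated backdoor-style deviation
    groups = random_groups(len(updates), group_size=4)
    print("suspicious subgroups:", flag_anomalous_groups(updates, groups))
\end{verbatim}

In this toy setting, the poisoned participant inflates the aggregate of whichever random subgroup it lands in, so that subgroup is flagged while honest participants' individual updates are never examined.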