For model privacy, local model parameters in federated learning should be obfuscated before being sent to the remote aggregator, a technique referred to as \emph{secure aggregation}. However, secure aggregation makes model poisoning attacks, e.g., backdoor insertion, easier to mount because existing anomaly detection methods mostly require access to plaintext local models. This paper proposes SAFELearning, which supports backdoor detection under secure aggregation. We achieve this through two new primitives: \emph{oblivious random grouping (ORG)} and \emph{partial parameter disclosure (PPD)}. ORG partitions participants into one-time random subgroups whose configurations remain oblivious to the participants; PPD allows secure partial disclosure of aggregated subgroup models for anomaly detection without leaking individual model privacy. SAFELearning significantly reduces backdoor model accuracy without jeopardizing main task accuracy under common backdoor strategies. Extensive experiments show that SAFELearning reduces backdoor accuracy from $100\%$ to $8.2\%$ for ResNet-18 on CIFAR-10 when $10\%$ of participants are malicious.
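To make the two primitives concrete, the sketch below illustrates the logical flow only: participants are shuffled into one-time random subgroups, each subgroup's models are averaged, and only a disclosed subset of parameters of the subgroup aggregates is inspected for anomalies. All function names (\texttt{oblivious\_random\_grouping}, \texttt{partial\_parameter\_disclosure}) and the median-deviation check are illustrative assumptions, not the paper's protocol; in SAFELearning the aggregation itself runs under secure aggregation (masked updates), which this plain-Python sketch omits.

\begin{verbatim}
import random
from typing import Dict, List

# Toy sketch: a local model is a dict of named float parameters.
# The cryptographic masking of secure aggregation is omitted here;
# only the grouping / partial-disclosure logic is mirrored.

def oblivious_random_grouping(participant_ids: List[int],
                              group_size: int,
                              seed: int) -> List[List[int]]:
    """Partition participants into one-time random subgroups.

    The seed is assumed to be chosen by the protocol so that
    participants cannot predict or influence their subgroup.
    """
    ids = participant_ids[:]
    random.Random(seed).shuffle(ids)
    return [ids[i:i + group_size] for i in range(0, len(ids), group_size)]

def aggregate(models: List[Dict[str, float]]) -> Dict[str, float]:
    """Average models parameter-wise (FedAvg-style subgroup aggregate)."""
    keys = models[0].keys()
    return {k: sum(m[k] for m in models) / len(models) for k in keys}

def partial_parameter_disclosure(group_aggregates: List[Dict[str, float]],
                                 disclosed_keys: List[str],
                                 threshold: float) -> List[int]:
    """Inspect only a disclosed subset of each subgroup aggregate and flag
    subgroups whose disclosed parameters deviate from the median by more
    than `threshold`. Individual models are never revealed, only
    subgroup-level aggregates.
    """
    suspicious: List[int] = []
    for k in disclosed_keys:
        values = sorted(g[k] for g in group_aggregates)
        median = values[len(values) // 2]
        for idx, g in enumerate(group_aggregates):
            if abs(g[k] - median) > threshold and idx not in suspicious:
                suspicious.append(idx)
    return suspicious
\end{verbatim}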