Federated Learning (FL), a distributed machine learning paradigm, has been adopted to mitigate privacy concerns for customers. Despite its appeal, the shared plaintext model updates embed traces of customers' private information, which various inference attacks can exploit, leading to serious privacy concerns. To alleviate this privacy issue, cryptographic techniques such as Secure Multi-Party Computation and Homomorphic Encryption have been used for privacy-preserving FL. However, security issues in such privacy-preserving FL remain poorly elucidated and underexplored. This work is the first attempt to show how trivially model corruption attacks can be mounted against privacy-preserving FL based on lightweight secret sharing. We consider the scenario in which model updates are quantized to reduce communication overhead, in which case an adversary can simply submit local parameters outside the legal range to corrupt the global model. We then propose the MUD-PQFed protocol, which can precisely detect the malicious clients performing the attack and enforce fair penalties. By removing the contributions of the detected malicious clients, the global model's utility remains comparable to that of the baseline global model trained without the attack. Extensive experiments validate the effectiveness of MUD-PQFed in maintaining the baseline accuracy and in detecting malicious clients in a fine-grained manner.
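The attack surface described above can be made concrete with a minimal sketch: under b-bit quantization every honest update lies in a known legal range, but additive secret sharing hides each client's plaintext value, so an out-of-range submission passes unnoticed and dominates the aggregate. The bit-width, ring modulus, client count, and helper names below are illustrative assumptions for exposition, not values or APIs from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration (not taken from the paper):
BITS = 8                       # quantization bit-width; legal updates lie in [0, 2**BITS - 1]
LEGAL_MAX = 2 ** BITS - 1
RING = 2 ** 32                 # modulus of the additive secret-sharing ring
NUM_CLIENTS = 5

def additive_shares(value: int, n: int, q: int) -> np.ndarray:
    """Split an integer into n additive shares modulo q (lightweight secret sharing)."""
    head = rng.integers(0, q, size=n - 1)
    tail = (value - int(head.sum())) % q
    return np.append(head, tail)

# Honest clients submit quantized updates inside the legal range.
honest = rng.integers(0, LEGAL_MAX + 1, size=NUM_CLIENTS - 1)
# The adversary submits a value far outside the legal range.
malicious = 10 ** 6
updates = np.append(honest, malicious)

# Every client secret-shares its (possibly illegal) update; the aggregators only
# see uniformly random shares, so the out-of-range value is invisible to them.
shares = np.array([additive_shares(int(u), 2, RING) for u in updates])

# Aggregation reconstructs the sum of all plaintext updates.
aggregate = int(shares.sum()) % RING
print("aggregate with attacker:", aggregate)          # dominated by the illegal value
print("sum of honest updates  :", int(honest.sum()))  # what the aggregate should resemble
```

Running the sketch shows the single out-of-range contribution swamping the de-quantized aggregate even though every individual share looked legitimate, which is the model corruption behavior the paper sets out to detect.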