We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be performed successfully even when the attacker has only limited knowledge of the data distribution. We propose a simple additive noise method to defend against model inversion, finding that it can significantly reduce attack efficacy at an acceptable cost in model accuracy on MNIST. Furthermore, we show that NoPeekNN, an existing defensive method, protects different information from exposure, suggesting that a combined defence is necessary to fully protect private user data.
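The additive noise defence mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `noisy_activations`, the choice of Laplacian noise, and the `scale` parameter are assumptions for demonstration. The idea is that the data owner perturbs the intermediate activations of its model segment before sending them to the computational server, so that a model inversion attack reconstructs a noisier version of the input.

```python
import numpy as np

def noisy_activations(activations, scale=0.1, rng=None):
    """Add zero-mean Laplacian noise to intermediate activations before
    they leave the data owner's model segment (illustrative sketch only;
    the noise distribution and scale are assumptions, not the paper's
    exact method)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(loc=0.0, scale=scale, size=activations.shape)
    return activations + noise
```

A larger `scale` degrades the attacker's reconstruction more strongly but also costs more task accuracy, which is the trade-off evaluated on MNIST above.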