Federated Learning (FL) is emerging as a promising paradigm for privacy-preserving machine learning, in which a model is trained across multiple clients without exchanging their data samples. Recent works have highlighted several privacy and robustness weaknesses in FL and have addressed these concerns separately, using local differential privacy (LDP) for the former and well-studied methods from conventional ML for the latter. However, it remains unclear how LDP affects adversarial robustness in FL. To fill this gap, this work develops a comprehensive understanding of the effects of LDP on adversarial robustness in FL. Clarifying this interplay is significant, as it is the first step towards a principled design of private and robust FL systems. Through theoretical analysis and empirical verification, we show that local differential privacy has both positive and negative effects on adversarial robustness.
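To make the setting concrete, below is a minimal, illustrative sketch of how LDP is commonly applied on the client side in FL: each client clips its model update and adds Gaussian noise locally before sending it to the server, so the server only ever observes perturbed updates. The function and parameter names (ldp_perturb_update, clip_norm, epsilon, delta) are hypothetical and not taken from this work; the noise scale follows the standard Gaussian mechanism under the stated clipping bound.

import numpy as np

def ldp_perturb_update(update, clip_norm=1.0, epsilon=2.0, delta=1e-5):
    """Clip a client's model update and add Gaussian noise locally
    (an LDP-style perturbation applied before communication)."""
    # Clip the update to bound its L2 sensitivity by clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Gaussian-mechanism noise scale for an (epsilon, delta)-DP release.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=clipped.shape)

# Example: each client perturbs its update locally; the server only aggregates
# the noisy updates, never the raw ones.
client_updates = [np.random.randn(10) for _ in range(5)]
private_updates = [ldp_perturb_update(u) for u in client_updates]
server_aggregate = np.mean(private_updates, axis=0)

The added noise is exactly what interacts with adversarial robustness: it can act like randomized smoothing (a positive effect) while also degrading the clean decision boundary (a negative effect), which is the trade-off this work analyzes.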