Federated learning (FL) was originally regarded as a framework for privacy-preserving collaborative learning among clients through a coordinating server. In this paper, we propose a new active membership inference (AMI) attack carried out by a dishonest server in FL. In AMI attacks, the server crafts and embeds malicious parameters into global models to effectively infer whether a target data sample is included in a client's private training data. By exploiting the correlation among data features through a non-linear decision boundary, AMI attacks with a certified guarantee of success can achieve alarmingly high success rates even under rigorous local differential privacy (LDP) protection, thereby exposing clients' training data to severe privacy risk. Theoretical and experimental results on several benchmark datasets show that adding enough privacy-preserving noise to prevent our attack would significantly damage FL's model utility.
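To make the threat model concrete, the following is a minimal sketch of the general idea behind an active attack via server-planted parameters, not the paper's actual construction: a dishonest server plants a "detector" neuron whose weights encode the target sample, so the local update the client sends back leaks whether the target was in its training batch. All names, dimensions, and thresholds are illustrative, and simple Gaussian noise on the update stands in for LDP randomization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a client's private data as unit-normalized feature vectors.
d = 32                                        # feature dimension (illustrative)
client_data = rng.normal(size=(100, d))
client_data /= np.linalg.norm(client_data, axis=1, keepdims=True)
target = client_data[0]                       # sample whose membership the server tests

# The dishonest server plants a "detector" neuron in the global model's first
# layer: weight = target features, bias chosen so the ReLU fires only for
# inputs nearly collinear with the target (cosine similarity above tau).
tau = 0.95
w_detector, b_detector = target.copy(), -tau  # pre-activation = cos(x, target) - tau

def detector_update(batch, w, b, noise_std=0.0):
    """Gradient the detector neuron accumulates over one local pass,
    optionally randomized with Gaussian noise as a stand-in for LDP."""
    grad = np.zeros_like(w)
    for x in batch:
        if w @ x + b > 0:                     # ReLU gate: only near-target inputs pass
            grad += x                         # an active unit contributes x to d(loss)/dw
    return grad + rng.normal(scale=noise_std, size=grad.shape)

for noise_std in (0.0, 0.05, 0.5):
    u_in = detector_update(client_data[:50], w_detector, b_detector, noise_std)
    u_out = detector_update(client_data[50:], w_detector, b_detector, noise_std)
    print(f"noise_std={noise_std:4.2f}  "
          f"update norm with target: {np.linalg.norm(u_in):.2f}  "
          f"without: {np.linalg.norm(u_out):.2f}")
```

With little or no noise, the detector's update norm cleanly separates the member and non-member cases; only a large noise scale masks the gap, which echoes the abstract's claim that noise strong enough to block the attack would also degrade model utility.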