In the cross-device federated learning (FL) setting, clients such as mobile devices cooperate with a server to train a global machine learning model while keeping their data local. However, recent work shows that a client's private information can still be disclosed to an adversary who merely eavesdrops on the messages exchanged between the client and the server. For example, the adversary can infer whether the client owns a specific data instance, which is called a passive membership inference attack. In this paper, we propose a new passive inference attack that requires much less computational power and memory than existing methods. Our empirical results show that our attack achieves higher accuracy on the CIFAR100 dataset (by more than $4$ percentage points) with three orders of magnitude less memory and five orders of magnitude fewer computations.
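The paper's attack is not detailed in this abstract; as a minimal illustration of what a passive membership inference attack decides, the sketch below shows a classic loss-thresholding heuristic (an assumption for exposition, not the method proposed here): training members tend to incur lower loss under the trained model than non-members.

```python
# Illustrative sketch only (not the paper's attack): a simple passive
# membership inference heuristic that thresholds per-example loss.
import numpy as np

def cross_entropy(probs, label):
    """Per-example cross-entropy loss from predicted class probabilities."""
    return -np.log(probs[label] + 1e-12)

def infer_membership(probs, label, threshold=0.5):
    """Flag an example as a likely training-set member when its loss
    under the observed model is below the threshold."""
    return cross_entropy(probs, label) < threshold

# A confident (low-loss) prediction is flagged as a likely member;
# an uncertain (high-loss) one is not.
member_like = np.array([0.9, 0.05, 0.05])    # loss ~= 0.105
non_member_like = np.array([0.4, 0.3, 0.3])  # loss ~= 0.916
print(infer_membership(member_like, 0))      # True
print(infer_membership(non_member_like, 0))  # False
```

The attack is "passive" in that it only observes model outputs (e.g., reconstructed from eavesdropped client–server messages) and never interferes with the training protocol.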