Federated Learning (FL) is a setting for training machine learning models in distributed environments where the clients do not share their raw data but instead send model updates to a server. However, model updates can be subject to attacks and leak private information. Differential Privacy (DP) is a leading mitigation strategy which involves adding noise to clipped model updates, trading off performance for strong theoretical privacy guarantees. Previous work has shown that the threat model of DP is conservative and that the obtained guarantees may be vacuous or may not directly translate to information leakage in practice. In this paper, we aim to achieve a tighter measurement of model exposure by considering a realistic threat model. We propose a novel method, CANIFE, that uses canaries, samples carefully crafted by a strong adversary, to evaluate the empirical privacy of a training round. We apply this attack to vision models trained on CIFAR-10 and CelebA and to language models trained on Sent140 and Shakespeare. In particular, in realistic FL scenarios, we demonstrate that the empirical epsilon obtained with CANIFE is 2-7x lower than the theoretical bound.
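To make the DP mechanism referenced above concrete, the following is a minimal illustrative sketch of clipping a client's model update to a fixed L2 norm and adding Gaussian noise. It is not the paper's method or code: the function name `privatize_update` and its parameters are hypothetical, and in practice the noise is typically added server-side to the aggregate of many clipped updates rather than to each one individually.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a model update to L2 norm `clip_norm` and add Gaussian noise.

    Hypothetical sketch of the clip-and-noise step described in the abstract;
    names and defaults are illustrative, not taken from CANIFE.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    # Scale the update down so its L2 norm is at most clip_norm.
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Gaussian noise calibrated to the clipping threshold (sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Example: privatize a toy 5-dimensional model update.
noisy = privatize_update(np.array([0.5, -1.2, 0.3, 2.0, -0.7]))
```

The clipping bound controls the sensitivity of each contribution, and the noise multiplier sets the scale of the Gaussian noise relative to that bound, which is what the theoretical (epsilon, delta) accounting is based on.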