Privacy auditing techniques for differentially private (DP) algorithms are useful for estimating the privacy loss to compare against analytical bounds, or for empirically measuring privacy in settings where the known analytical bounds on the DP loss are not tight. However, existing privacy auditing techniques usually make strong assumptions about the adversary (e.g., knowledge of intermediate model iterates or of the training data distribution), are tailored to specific tasks and model architectures, and require retraining the model many times (typically on the order of thousands). These shortcomings make such techniques difficult to deploy at scale in practice, especially in federated settings where model training can take days or weeks. In this work, we present a novel "one-shot" approach that systematically addresses these challenges, allowing efficient auditing or estimation of the privacy loss of a model during the same, single training run used to fit the model parameters. Our privacy auditing method for federated learning requires no a priori knowledge of the model architecture or task. We show that our method provides provably correct estimates of the privacy loss under the Gaussian mechanism, and we demonstrate its performance on a well-established FL benchmark dataset under several adversarial models.
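To make the one-shot idea concrete, the following is a minimal sketch, not the paper's exact algorithm, of how one might empirically estimate the privacy loss of the Gaussian mechanism from a single release: random canary vectors are added alongside real updates, and the separation between the projection statistics of inserted versus held-out canary directions is converted into an empirical epsilon. All parameter values (`d`, `k`, `n_real`, `sigma`, `target_delta`) are illustrative assumptions, not settings from the paper.

```python
# Minimal sketch of one-shot privacy estimation for a single Gaussian-mechanism
# release, under assumed (illustrative) problem sizes.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)
d, k, n_real, sigma = 10_000, 500, 100, 1.0  # assumed dimension, canaries, clients, noise

# Random unit-norm canary "client updates"; in high dimension they are nearly orthogonal.
canaries = rng.standard_normal((k, d))
canaries /= np.linalg.norm(canaries, axis=1, keepdims=True)

# One training-like release: sum of unit-norm real updates + canaries + Gaussian noise.
real_updates = rng.standard_normal((n_real, d))
real_updates /= np.linalg.norm(real_updates, axis=1, keepdims=True)
released = real_updates.sum(0) + canaries.sum(0) + sigma * rng.standard_normal(d)

# Test statistic: projection of the release onto each canary direction.
obs = canaries @ released                        # canaries that were included
held_out = rng.standard_normal((k, d))
held_out /= np.linalg.norm(held_out, axis=1, keepdims=True)
unobs = held_out @ released                      # fresh canaries never included

# Both sets of projections are approximately Gaussian with similar spread;
# inclusion shifts the mean. Estimate the effective noise multiplier z = tau / mu.
mu = obs.mean() - unobs.mean()
tau = np.sqrt(0.5 * (obs.var(ddof=1) + unobs.var(ddof=1)))
z = tau / mu

def gaussian_delta(eps, z):
    # delta(eps) for a Gaussian mechanism with noise multiplier z
    # (analytic Gaussian mechanism, Balle & Wang 2018).
    return norm.cdf(0.5 / z - eps * z) - np.exp(eps) * norm.cdf(-0.5 / z - eps * z)

target_delta = 1e-5
eps_hat = brentq(lambda e: gaussian_delta(e, z) - target_delta, 1e-6, 100.0)
print(f"estimated noise multiplier z = {z:.3f}, empirical epsilon ~ {eps_hat:.2f}")
```

Because everything needed for the estimate is collected during the single release, no retraining is required; the printed epsilon is an empirical estimate under the Gaussian model of the test statistic, not a formal worst-case DP guarantee.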