Applying machine learning (ML) to sensitive domains requires protecting the privacy of the underlying training data through formal frameworks such as differential privacy (DP). Yet this protection usually comes at the cost of the resulting ML models' utility. One reason is that DP assigns a single uniform privacy budget ε to all training data points, which must align with the strictest privacy requirement among all data holders. In practice, different data holders have different privacy requirements, and data points of holders with lower requirements could contribute more information to the training of the ML models. To address this need, we propose two novel methods based on the Private Aggregation of Teacher Ensembles (PATE) framework that support training ML models with individualized privacy guarantees. We formally describe the methods, provide a theoretical analysis of their privacy bounds, and experimentally evaluate their effect on the final models' utility using the MNIST, SVHN, and Adult income datasets. Our empirical results show that the individualized privacy methods yield ML models of higher accuracy than the non-individualized baseline. We thereby improve the privacy-utility trade-off in scenarios where different data holders consent to contribute their sensitive data at different individual privacy levels.
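For readers unfamiliar with PATE, the following is a minimal sketch of the standard, non-individualized noisy-argmax vote aggregation that the framework builds on: each teacher model, trained on a disjoint partition of the sensitive data, votes for a label, and Laplace noise calibrated to a per-query budget ε is added to the vote histogram before taking the argmax. This is illustrative only and does not implement the two individualized methods proposed in the paper; the function name `pate_noisy_argmax` and all parameters are hypothetical.

```python
import numpy as np

def pate_noisy_argmax(votes, num_classes, epsilon, rng=None):
    """Standard PATE label aggregation (LNMax-style sketch).

    votes: 1-D integer array with one class vote per teacher.
    epsilon: per-query privacy budget; Laplace noise of scale
             2/epsilon is added to the vote counts.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Histogram of teacher votes over the label space.
    hist = np.bincount(votes, minlength=num_classes).astype(float)
    # Noisy argmax: perturb each count, then release only the winner.
    hist += rng.laplace(loc=0.0, scale=2.0 / epsilon, size=num_classes)
    return int(np.argmax(hist))

# Hypothetical usage: 250 teachers voting over 10 classes, eps = 0.1 per query.
votes = np.random.default_rng(0).integers(0, 10, size=250)
label = pate_noisy_argmax(votes, num_classes=10, epsilon=0.1)
```

In this baseline, every data point implicitly receives the same protection; the individualized methods proposed here instead adjust how data points with different privacy requirements contribute to the teacher ensemble, so that points with weaker requirements can supply more information to training.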