Differentially private models protect the privacy of the data they are trained on, making differential privacy an important component of model security and privacy. At the same time, data scientists and machine learning engineers use uncertainty quantification methods to make models as useful and actionable as possible. We explore the tension between uncertainty quantification via dropout and privacy by conducting membership inference attacks against models trained with and without differential privacy. We find that large dropout rates slightly increase a model's vulnerability to membership inference attacks in all cases, including in differentially private models.
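As a concrete illustration of the two ingredients this study combines, the sketch below shows Monte Carlo dropout for uncertainty quantification alongside a simple loss-threshold membership inference attack. This is a minimal sketch, not the paper's experimental code: the PyTorch framework, architecture, dropout rate, and the threshold-based attack formulation (in the style of Yeom et al.) are all assumptions made for illustration.

```python
# Minimal sketch (assumptions throughout): MC dropout uncertainty estimation
# and a loss-threshold membership inference attack against a trained model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutMLP(nn.Module):
    """Hypothetical classifier; a large dropout rate p is the regime studied."""
    def __init__(self, in_dim=784, hidden=256, classes=10, p=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.drop = nn.Dropout(p)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

def mc_dropout_predict(model, x, n_samples=30):
    """Average n stochastic forward passes with dropout left on;
    the spread across passes serves as the uncertainty estimate."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(0), probs.std(0)  # predictive mean and uncertainty

def loss_threshold_attack(model, x, y, threshold):
    """Guess 'member' when an example's loss falls below a threshold
    (e.g., the model's average training loss); returns boolean guesses."""
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    return losses < threshold
```

The differentially private variants in such an experiment would be trained with DP-SGD (e.g., via a library such as Opacus) before being subjected to the same attack; the attack itself is unchanged.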