Recent advances in differentially private deep learning have demonstrated that applying differential privacy, specifically the DP-SGD algorithm, has a disparate impact on different sub-groups in the population: it leads to a significantly larger drop in model utility for under-represented sub-populations (minorities) than for well-represented ones. In this work, we compare PATE, another mechanism for training deep learning models with differential privacy, against DP-SGD in terms of fairness. We show that PATE does have a disparate impact as well, but it is much less severe than that of DP-SGD. We draw insights from this observation on what might be promising directions for achieving better fairness-privacy trade-offs.
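For context, DP-SGD enforces privacy by clipping each example's gradient to a fixed norm and adding calibrated Gaussian noise to the summed update; the per-example clipping is the step most often blamed for the disparate impact. Below is a minimal PyTorch sketch of one such step, not the authors' implementation; `model`, `loss_fn`, and the hyperparameter values are illustrative assumptions.

```python
import torch

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: per-example gradient clipping + Gaussian noise.

    A minimal sketch for exposition; a naive per-example loop, not an
    efficient vectorized implementation.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):
        # Compute this example's gradient in isolation.
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip the full per-example gradient to norm <= clip_norm.
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale
    with torch.no_grad():
        for p, s in zip(params, summed):
            # Gaussian noise with std = noise_multiplier * clip_norm on the sum.
            noise = torch.randn_like(p) * noise_multiplier * clip_norm
            p -= lr * (s + noise) / len(xb)
```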
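PATE, by contrast, trains an ensemble of teachers on disjoint partitions of the private data and labels a student's public examples via a noisy vote among the teachers. A sketch of the Laplace noisy-max aggregation at the heart of PATE follows; the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def pate_noisy_argmax(teacher_votes, num_classes, gamma=0.1, seed=None):
    """Noisy-max aggregation: tally teacher votes per class, add Laplace
    noise with scale 1/gamma, and return the winning label.

    teacher_votes: 1-D array of class indices, one vote per teacher.
    Smaller gamma means more noise and stronger privacy.
    """
    rng = np.random.default_rng(seed)
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))
```

Because the student only ever sees these noisy majority labels, the privacy cost is paid per query rather than per gradient step, which is one plausible reason its utility loss could be distributed differently across sub-groups than DP-SGD's.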