In this work, we theoretically study the impact of differential privacy on fairness in binary classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on the probability that a decision function makes a negative prediction conditioned on an arbitrary event (such as membership in a sensitive group), which may be of independent interest. We use the aforementioned Lipschitz property to prove a high-probability bound showing that, given enough examples, the fairness level of private models is close to that of their non-private counterparts.
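The central Lipschitz claim can be sketched as follows; the notation (a fairness measure \(F\), a model \(h_\theta\) with parameters \(\theta\), and a local constant \(L_\theta\)) is assumed for illustration and is not fixed by the abstract:

```latex
% Sketch only; F, h_\theta, L_\theta are assumed notation, not taken from the paper.
% Pointwise Lipschitz continuity of a group fairness measure F in the parameters:
% for each \theta there exists a constant L_\theta such that
\[
  \bigl| F(h_\theta) - F(h_{\theta'}) \bigr|
  \;\le\; L_\theta \,\lVert \theta - \theta' \rVert
  \qquad \text{for all } \theta' \text{ in a neighborhood of } \theta .
\]
% If the private parameters \hat\theta_{\mathrm{priv}} concentrate around the
% non-private ones \hat\theta, this yields a high-probability fairness bound:
\[
  \bigl| F(h_{\hat\theta_{\mathrm{priv}}}) - F(h_{\hat\theta}) \bigr|
  \;\le\; L_{\hat\theta}\,\lVert \hat\theta_{\mathrm{priv}} - \hat\theta \rVert .
\]
```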