Fair representation learning provides an effective way of enforcing fairness constraints without compromising utility for downstream users. A desirable family of such fairness constraints, each requiring similar treatment for similar individuals, is known as individual fairness. In this work, we introduce the first method that enables data consumers to obtain certificates of individual fairness for existing and new data points. The key idea is to map similar individuals to close latent representations and leverage this latent proximity to certify individual fairness. That is, our method enables the data producer to learn and certify a representation in which, for each data point, all similar individuals lie within $\ell_\infty$-distance at most $\epsilon$, thus allowing data consumers to certify individual fairness by proving $\epsilon$-robustness of their classifier. Our experimental evaluation on five real-world datasets and several fairness constraints demonstrates the expressivity and scalability of our approach.
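To illustrate the certification step described above, the following is a minimal sketch (not the paper's implementation, and all names are illustrative) of how a data consumer could certify $\epsilon$-robustness for the simplest case of a linear classifier on latent representations: if every individual similar to $z$ is guaranteed to lie in the $\ell_\infty$-ball of radius $\epsilon$ around $z$, then the worst-case shift of the logit $w^\top z + b$ over that ball is $\epsilon \|w\|_1$, so a margin larger than $\epsilon \|w\|_1$ certifies that all similar individuals receive the same prediction.

```python
# Hypothetical sketch: certifying eps-robustness of a linear classifier
# on latent representations. Assumes the data producer guarantees that
# all individuals similar to z lie within the l_inf-ball of radius eps.
import numpy as np

def certify_linear(w, b, z, eps):
    """Return True if sign(w @ z + b) is constant on the l_inf ball of
    radius eps around z. For a linear classifier, the worst-case change
    of the logit over that ball is exactly eps * ||w||_1."""
    logit = float(w @ z + b)
    return abs(logit) > eps * float(np.abs(w).sum())

# Toy example with an illustrative latent point and classifier.
w = np.array([1.0, -2.0])
b = 0.5
z = np.array([0.3, -0.4])  # latent representation of one individual
print(certify_linear(w, b, z, eps=0.1))  # margin 1.6 > 0.3: certified
print(certify_linear(w, b, z, eps=1.0))  # margin 1.6 < 3.0: not certified
```

For nonlinear classifiers, the same check would instead be carried out with a neural-network verifier (e.g. interval bound propagation or a complete solver), but the contract between producer and consumer is unchanged: latent proximity of similar individuals plus classifier robustness yields the fairness certificate.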