Designing machine learning algorithms that are accurate yet fair, i.e., that do not discriminate based on any sensitive attribute, is paramount for society to accept AI in critical applications. In this article, we propose a novel fair representation learning method, the R\'enyi Fair Information Bottleneck (RFIB) method, which incorporates constraints for utility, fairness, and compactness of representation, and we apply it to image classification. A key attribute of our approach is that, in contrast to most prior work, we consider both demographic parity and equalized odds as fairness constraints, allowing for a more nuanced satisfaction of both criteria. Leveraging a variational approach, we show that our objectives yield a loss function involving classical Information Bottleneck (IB) measures, and we establish an upper bound, in terms of the R\'enyi divergence of order $\alpha$, on the mutual-information IB term that measures the compactness between the input and its encoded embedding. Experimenting on three image datasets (EyePACS, CelebA, and FairFace), we study the influence of the $\alpha$ parameter, as well as of two other tunable IB parameters, on achieving utility/fairness trade-off goals, and we show that $\alpha$ provides an additional degree of freedom for controlling the compactness of the representation. We evaluate our method using various utility, fairness, and compound utility/fairness metrics, showing that RFIB outperforms current state-of-the-art approaches.
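For reference, the R\'enyi divergence of order $\alpha$ mentioned above is the standard information-theoretic quantity; a brief reminder of its definition (for generic distributions $P$ and $Q$ with densities $p$ and $q$, not the paper's specific bound) is:

\[
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, \mathrm{d}x, \qquad \alpha > 0,\ \alpha \neq 1,
\]

which recovers the Kullback--Leibler divergence $D_{\mathrm{KL}}(P \,\|\, Q)$ in the limit $\alpha \to 1$. Varying $\alpha$ thus interpolates between divergences of different sensitivity, which is the degree of freedom the abstract refers to for controlling the compactness of the learned representation.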