Traditional approaches to ensuring group fairness in algorithmic decision making aim to equalize ``total'' error rates across subgroups of the population. In contrast, we argue that fairness approaches should instead focus only on equalizing errors that arise from model uncertainty (a.k.a. epistemic uncertainty), which is caused by a lack of knowledge about the best model or by a lack of data. In other words, our proposal calls for ignoring errors that occur due to the uncertainty inherent in the data, i.e., aleatoric uncertainty. We draw a connection between predictive multiplicity and model uncertainty, and argue that techniques from predictive multiplicity can be used to identify errors made due to model uncertainty. We propose scalable convex proxies for finding classifiers that exhibit predictive multiplicity, and empirically show that our methods are comparable in performance to, and up to four orders of magnitude faster than, the current state of the art. We further propose methods to achieve our goal of equalizing the group error rates that arise from model uncertainty in algorithmic decision making, and demonstrate the effectiveness of these methods on synthetic and real-world datasets.
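To make the core idea concrete, the following is a minimal sketch of one way to separate the two error types. It is not the paper's method: instead of the convex proxies described above, it approximates the set of near-optimal models by bootstrapping, flags points on which those models disagree as errors plausibly attributable to epistemic uncertainty, and computes group error rates over only those points. The function names (`ambiguous_mask`, `epistemic_error_rates`) and the choice of logistic regression are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

def ambiguous_mask(X, y, n_models=50, seed=0):
    """Flag points where a set of near-optimal models disagree.

    Bootstrapping is used here as a simple stand-in for the set of
    competing near-optimal models (the Rashomon set); disagreement on a
    point indicates that its prediction is driven by model (epistemic)
    uncertainty rather than by noise inherent in the data.
    """
    rng = np.random.RandomState(seed)
    preds = []
    for _ in range(n_models):
        Xb, yb = resample(X, y, random_state=rng)
        clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
        preds.append(clf.predict(X))
    preds = np.stack(preds)  # shape: (n_models, n_points)
    # A point is ambiguous if the models do not all predict the same label.
    return preds.min(axis=0) != preds.max(axis=0)

def epistemic_error_rates(y_true, y_pred, groups, mask):
    """Per-group error rates restricted to the ambiguous points.

    Equalizing these rates across groups (rather than the total error
    rates) is the fairness target argued for above.  Note: returns NaN
    for a group with no ambiguous points.
    """
    return {
        g: np.mean(y_pred[(groups == g) & mask] != y_true[(groups == g) & mask])
        for g in np.unique(groups)
    }
```

Under these assumptions, a fairness intervention would then constrain the gap between the values returned by `epistemic_error_rates` across groups, while leaving errors on unambiguous (aleatoric) points unconstrained.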