Algorithmic fairness has received increased attention in socially sensitive domains. While a rich literature on mean fairness has been established, research on quantile fairness remains sparse yet vital. To address this need and underscore the significance of quantile fairness, we propose a novel framework for learning a real-valued quantile function under the fairness requirement of Demographic Parity with respect to sensitive attributes, such as race or gender, and thereby derive a reliable fair prediction interval. Using optimal transport and functional synchronization techniques, we establish theoretical guarantees of distribution-free coverage and exact fairness for the prediction interval induced by the fair quantiles. A hands-on pipeline is provided that combines flexible quantile regression with an efficient fairness-adjustment post-processing algorithm. We demonstrate the superior empirical performance of this approach on several benchmark datasets. Our results show the model's ability to uncover the mechanism underlying the fairness-accuracy trade-off in a wide range of societal and medical applications.
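The pipeline described above — fit flexible quantile regressions, then post-process the predicted quantiles toward Demographic Parity via optimal transport — can be sketched in miniature. In one dimension, the optimal-transport (Wasserstein) barycenter of the per-group prediction distributions is simply the average of their quantile functions, so the fairness adjustment maps each group's predictions through its own ranks onto that common quantile function. The data, the `fair_adjust` helper, and all parameter choices below are illustrative assumptions, not the paper's actual algorithm or code:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic data: a binary sensitive attribute A shifts the outcome,
# creating a group disparity in the raw quantile predictions.
n = 2000
A = rng.integers(0, 2, n)
X = rng.normal(size=(n, 3))
y = X[:, 0] + 2.0 * A + rng.normal(scale=0.5, size=n)
X_full = np.column_stack([X, A])

def fit_quantile(q):
    """One flexible quantile regressor per target level (pinball loss)."""
    m = GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=100)
    m.fit(X_full, y)
    return m

lo, hi = fit_quantile(0.05), fit_quantile(0.95)  # a 90% prediction interval

def fair_adjust(preds, groups):
    """Hypothetical post-processing step: push each group's predictions
    to the 1-D Wasserstein barycenter, i.e. the pointwise average of the
    per-group empirical quantile functions, so the adjusted predictions
    have (approximately) the same distribution in every group."""
    ps = np.linspace(0.0, 1.0, 101)
    qfuns = {g: np.quantile(preds[groups == g], ps) for g in np.unique(groups)}
    bary = np.mean(list(qfuns.values()), axis=0)
    out = np.empty_like(preds)
    for g in qfuns:
        mask = groups == g
        # Within-group ranks, then read off the barycenter quantile function.
        ranks = np.searchsorted(np.sort(preds[mask]), preds[mask],
                                side="right") / mask.sum()
        out[mask] = np.interp(ranks, ps, bary)
    return out

lo_fair = fair_adjust(lo.predict(X_full), A)
hi_fair = fair_adjust(hi.predict(X_full), A)
```

After the adjustment, both endpoints of the interval share one distribution across groups, which is the Demographic Parity requirement at the level of quantiles; the raw group gap of roughly 2 in the simulated outcome largely disappears.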