We investigate fairness in classification, where automated decisions are made for individuals from different protected groups. In high-consequence scenarios, decision errors can disproportionately affect certain protected groups, leading to unfair outcomes. To address this issue, we propose a fairness-adjusted selective inference (FASI) framework and develop data-driven algorithms that achieve statistical parity by controlling and equalizing the false selection rate (FSR) among protected groups. Our FASI algorithm operates by converting the outputs of black-box classifiers into R-values, which are both intuitive and computationally efficient. The selection rules based on R-values, which effectively mitigate disparate impacts on protected groups, are provably valid for FSR control in finite samples. We demonstrate the numerical performance of our approach on both simulated and real data.
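The abstract does not define the R-value construction, so the following is only an illustrative sketch of the general idea of group-wise selection with error-rate control, under assumptions: it uses conformal p-values from a held-out calibration set and a Benjamini-Hochberg step-up rule applied separately within each protected group. The function names (`conformal_pval`, `bh_select`) and the BH procedure are stand-ins for exposition, not FASI's actual R-value algorithm.

```python
import numpy as np

def conformal_pval(cal_scores, test_score):
    # Conformal p-value: how extreme a test score is relative to a
    # held-out calibration set (larger score = stronger evidence).
    cal = np.asarray(cal_scores, dtype=float)
    return (1 + np.sum(cal >= test_score)) / (len(cal) + 1)

def bh_select(pvals, alpha):
    # Benjamini-Hochberg step-up rule: select the largest prefix of
    # sorted p-values whose i-th value is <= alpha * i / m, which
    # controls the expected fraction of false selections at alpha.
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= alpha * np.arange(1, m + 1) / m
    sel = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])  # largest index meeting its threshold
        sel[order[:k + 1]] = True
    return sel

# Applying the selection rule separately within each protected group
# (rather than pooled) is one way an error rate can be controlled, and
# hence equalized, group by group.
group_pvals = {
    "A": [0.001, 0.01, 0.2, 0.8],
    "B": [0.002, 0.3, 0.6],
}
selections = {g: bh_select(p, alpha=0.05) for g, p in group_pvals.items()}
```

Here each group's selections satisfy the same nominal error-rate level, which is the spirit of equalizing FSR across groups; FASI's finite-sample guarantee rests on its specific R-value construction rather than on this classical BH sketch.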