We study fairness in classification, where one wishes to make automated decisions for people from different protected groups. When individuals are classified, the decision errors can be unfairly concentrated in certain protected groups. We develop a fairness-adjusted selective inference (FASI) framework and data-driven algorithms that achieve statistical parity in the sense that the false selection rate (FSR) is controlled and equalized among protected groups. The FASI algorithm operates by converting the outputs from black-box classifiers to R-values, which are intuitively appealing and easy to compute. Selection rules based on R-values are provably valid for FSR control, and avoid disparate impacts on protected groups. The effectiveness of FASI is demonstrated through both simulated and real data.
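To make the R-value idea concrete, the sketch below is a minimal illustration, not the paper's exact construction: it assumes a labeled, group-annotated calibration set, estimates for each test candidate the proportion of wrong selections (within the same protected group) that would result from selecting everything scoring at least as high, and then selects candidates whose R-value-style quantity falls below the target FSR level α. All function names are hypothetical.

```python
import numpy as np

def r_values(test_scores, test_groups, cal_scores, cal_labels, cal_groups, target=1):
    """Hypothetical sketch: convert black-box classifier scores to group-wise
    R-value-style quantities using a labeled calibration set.

    For each test point, estimate the fraction of calibration points from the
    same protected group with true label != target among all calibration points
    from that group scoring at least as high as the test point."""
    r = np.empty(len(test_scores))
    for g in np.unique(test_groups):
        cs = cal_scores[cal_groups == g]
        errs = (cal_labels[cal_groups == g] != target)
        for i in np.where(test_groups == g)[0]:
            at_least = cs >= test_scores[i]          # calibration points scoring at least as high
            n_sel = at_least.sum()
            # estimated proportion of erroneous selections at this threshold;
            # 0.0 if the test point outscores every calibration point in its group
            r[i] = errs[at_least].sum() / n_sel if n_sel > 0 else 0.0
    return r

def select(r, alpha=0.1):
    """Select every candidate whose R-value is at most the target FSR level alpha."""
    return np.where(r <= alpha)[0]
```

Because the error proportion is estimated separately within each protected group, the same nominal level α is applied to every group, which is the sense in which the selection rule avoids concentrating errors in any one group.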