We introduce a new family of techniques to post-process ("wrap") a black-box classifier in order to reduce its bias. Our technique builds on the recent analysis of improper loss functions, whose optimization can correct any twist in prediction, unfairness being treated as a twist. In the post-processing step, we learn a wrapper function, defined as an $\alpha$-tree, which modifies the prediction. We provide two generic boosting algorithms to learn $\alpha$-trees. We show that our modification has appealing properties in terms of the composition of $\alpha$-trees, generalization, interpretability, and the KL divergence between the modified and original predictions. We exemplify the use of our technique on three fairness notions: conditional value-at-risk, equality of opportunity, and statistical parity; and provide experiments on several readily available datasets.
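To fix ideas, the following is a minimal, hedged sketch of what such a post-processing wrapper can look like in Python. It is not the paper's $\alpha$-tree update or its boosting algorithms: the shallow partition tree, the per-leaf exponential tilt of the black-box posterior, the statistical-parity grid search, and all names (\texttt{tilt}, \texttt{wrapper\_tree}, the sensitive attribute \texttt{s}) are illustrative assumptions meant only to convey the shape of the approach.

\begin{verbatim}
# Illustrative sketch of "wrapping" a black-box classifier with a
# tree-structured correction. NOT the paper's alpha-tree algorithm:
# the leaf-wise exponential tilt and the statistical-parity criterion
# below are simplifications chosen for clarity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeRegressor

# Black-box classifier: we only access its posterior estimates.
X, y = make_classification(n_samples=4000, n_features=8, random_state=0)
s = (X[:, 0] > 0).astype(int)             # sensitive attribute (illustrative)
blackbox = LogisticRegression(max_iter=1000).fit(X, y)
eta = blackbox.predict_proba(X)[:, 1]     # original posterior eta(x)

def tilt(eta, alpha):
    """Exponentially tilt a posterior; alpha = 1 leaves it unchanged."""
    num = eta ** alpha
    return num / (num + (1.0 - eta) ** alpha)

# Grow a shallow tree to partition the inputs; each leaf carries one alpha.
# Fitting the tree to a crude disparity signal is an illustrative stand-in
# for the boosting criteria used in the paper.
wrapper_tree = DecisionTreeRegressor(max_depth=2, random_state=0)
wrapper_tree.fit(X, s - s.mean())
leaf_id = wrapper_tree.apply(X)

# Per-leaf grid search for the alpha that shrinks the positive-rate gap.
alphas = {}
for leaf in np.unique(leaf_id):
    idx = leaf_id == leaf
    g1, g0 = (s[idx] == 1), (s[idx] == 0)
    best_alpha, best_gap = 1.0, np.inf
    if g1.any() and g0.any():             # skip leaves with a single group
        for alpha in np.linspace(0.25, 4.0, 16):
            p = tilt(eta[idx], alpha)
            gap = abs(p[g1].mean() - p[g0].mean())
            if gap < best_gap:
                best_alpha, best_gap = alpha, gap
    alphas[leaf] = best_alpha

# Wrapped posterior: apply the leaf's alpha to the black-box prediction.
eta_wrapped = np.array([tilt(e, alphas[l]) for e, l in zip(eta, leaf_id)])
gap_before = abs(eta[s == 1].mean() - eta[s == 0].mean())
gap_after = abs(eta_wrapped[s == 1].mean() - eta_wrapped[s == 0].mean())
print(f"statistical-parity gap: {gap_before:.3f} -> {gap_after:.3f}")
\end{verbatim}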