We introduce a new family of techniques to post-process ("wrap") a black-box classifier in order to reduce its bias. Our technique builds on the recent analysis of improper loss functions whose optimisation can correct any twist in prediction, unfairness being treated as a twist. In the post-processing, we learn a wrapper function, which we define as an {\alpha}-tree, that modifies the prediction. We provide two generic boosting algorithms to learn {\alpha}-trees. We show that our modification has appealing properties in terms of composition of {\alpha}-trees, generalization, interpretability, and the KL divergence between modified and original predictions. We exemplify the use of our technique with three fairness notions: conditional value at risk, equality of opportunity, and statistical parity; and provide experiments on several readily available datasets.
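To make the wrapping idea concrete, here is a minimal illustrative sketch, not the paper's actual algorithm: a stand-in black-box scorer is post-processed by a tiny depth-1 "alpha-tree" that assigns a per-leaf correction strength and tempers the base probability. The functions, threshold values, and the power-style correction form are all assumptions for illustration only.

```python
# Hypothetical sketch of wrapping a black-box prediction with an alpha-tree.
# All names and values below are illustrative assumptions, not the paper's method.

def black_box(x):
    # Stand-in for any pretrained classifier's positive-class probability.
    return 0.8 if x["score"] > 0.5 else 0.3

def alpha_tree(x):
    # A depth-1 tree assigning a per-leaf correction strength alpha.
    # alpha = 1 leaves the prediction unchanged; alpha < 1 tempers it toward 0.5.
    return 0.5 if x["group"] == "a" else 1.0

def wrapped(x):
    # Post-process: apply the leaf's alpha to the black-box probability.
    p = black_box(x)
    a = alpha_tree(x)
    return p**a / (p**a + (1 - p)**a)
```

Because the wrapper only consumes the black-box output and a small tree of leaf-level corrections, it stays interpretable and never requires retraining the underlying classifier.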