Considerable research effort has been directed towards algorithmic fairness, but real-world adoption of bias-reduction techniques is still scarce. Existing methods are either metric- or model-specific, require access to sensitive attributes at inference time, or carry high development or deployment costs. This work explores the unfairness that emerges when optimizing ML models solely for predictive performance, and how to mitigate it with a simple and easily deployed intervention: fairness-aware hyperparameter optimization (HO). We propose and evaluate fairness-aware variants of three popular HO algorithms: Fair Random Search, Fair TPE, and Fairband. We validate our approach on a real-world bank account opening fraud case study, as well as on three datasets from the fairness literature. Results show that, without extra training cost, it is feasible to find models with a 111% mean increase in fairness and only a 6% decrease in predictive performance when compared with fairness-blind HO.
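A minimal sketch of what fairness-aware random search could look like, assuming the fairness-aware variants scalarize predictive performance and fairness into a single selection objective weighted by a trade-off parameter alpha. The dataset, model, metrics, search space, and alpha value below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: X features, y labels, g a binary sensitive attribute (illustrative).
n = 5000
g = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 10)) + g[:, None] * 0.3
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_val, y_tr, y_val, g_tr, g_val = train_test_split(
    X, y, g, test_size=0.3, random_state=0)


def fairness(y_true, y_pred, group):
    """Ratio of group-wise true positive rates (1.0 = perfectly equal)."""
    tprs = [recall_score(y_true[group == v], y_pred[group == v]) for v in (0, 1)]
    return min(tprs) / max(tprs) if max(tprs) > 0 else 0.0


def sample_config(rng):
    # Hypothetical hyperparameter search space.
    return {"n_estimators": int(rng.integers(50, 300)),
            "max_depth": int(rng.integers(2, 12)),
            "min_samples_leaf": int(rng.integers(1, 20))}


alpha = 0.5  # trade-off weight; alpha = 1.0 recovers fairness-blind random search
best, best_score = None, -np.inf
for _ in range(30):
    cfg = sample_config(rng)
    model = RandomForestClassifier(random_state=0, **cfg).fit(X_tr, y_tr)
    pred = model.predict(X_val)
    perf = recall_score(y_val, pred)          # stand-in for the predictive metric
    fair = fairness(y_val, pred, g_val)       # stand-in for the fairness metric
    score = alpha * perf + (1 - alpha) * fair # fairness-aware selection criterion
    if score > best_score:
        best, best_score = (cfg, perf, fair), score

print("best config:", best[0],
      "| performance:", round(best[1], 3),
      "| fairness:", round(best[2], 3))
```

Because only the model-selection criterion changes, each candidate configuration is trained exactly once, which is why the intervention adds no extra training cost over fairness-blind HO.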