We present a scalable post-processing algorithm for debiasing trained models, including deep neural networks (DNNs), which we prove to be near-optimal by bounding its excess Bayes risk. We empirically validate its advantages on standard benchmark datasets across both classical algorithms and modern DNN architectures, and demonstrate that it outperforms previous post-processing methods while performing on par with in-processing methods. In addition, we show that the proposed algorithm is particularly effective for models trained at scale, where post-processing is a natural and practical choice.
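The abstract does not spell out the algorithm itself. As a generic illustration of what post-processing debiasing of a trained model can look like (not the paper's method), the following minimal sketch fits group-specific decision thresholds on held-out scores so that positive-prediction rates are approximately equal across groups, a demographic-parity style adjustment; all function and variable names here are hypothetical.

```python
# Minimal sketch of a generic post-processing debiasing step (NOT the
# paper's algorithm, which this abstract does not specify): given scores
# from any pre-trained model, pick one decision threshold per group on
# held-out data so that positive-prediction rates roughly match a target.
import numpy as np

def fit_group_thresholds(scores, groups, target_rate=None):
    """Choose a threshold per group so each group's positive rate matches target_rate."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    if target_rate is None:
        # Default: match the overall positive rate at a 0.5 cutoff.
        target_rate = float(np.mean(scores >= 0.5))
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # Threshold at the (1 - target_rate) quantile of this group's scores,
        # so roughly a target_rate fraction of the group is predicted positive.
        thresholds[g] = float(np.quantile(s, 1.0 - target_rate))
    return thresholds

def predict_debiased(scores, groups, thresholds):
    """Apply the fitted group-specific thresholds to new scores."""
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, groups)])

# Usage: thresholds are fit once on held-out validation scores and then
# applied at inference time; the underlying model is never retrained,
# which is what makes post-processing attractive for models trained at scale.
val_scores = np.random.rand(1000)
val_groups = np.random.randint(0, 2, size=1000)
th = fit_group_thresholds(val_scores, val_groups)
preds = predict_debiased(val_scores, val_groups, th)
```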