Fairwashing refers to the risk that an unfair black-box model can be explained by a fairer model through post-hoc explanation manipulation. In this paper, we investigate the capability of fairwashing attacks by analyzing their fidelity-unfairness trade-offs. In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., the data points being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model. We also demonstrate that fairwashing attacks can transfer across black-box models, meaning that an explanation model built to rationalize one black-box model can also rationalize other black-box models without explicitly using their predictions. The generalization and transferability of fairwashing attacks imply that their detection will be difficult in practice. Finally, we propose an approach to quantify the risk of fairwashing, based on computing the range of unfairness spanned by high-fidelity explainers.
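To make the last point concrete, the sketch below illustrates one way such a range could be estimated: train a pool of interpretable surrogate explainers on the black-box predictions, keep only those whose fidelity exceeds a threshold, and report the minimum and maximum unfairness among them. This is a hedged, minimal illustration only; the decision-tree surrogates, the hyperparameter grid, the `min_fidelity` threshold, and the use of the demographic parity gap as the unfairness metric are assumptions for exposition, not the paper's actual procedure.

```python
import numpy as np
from itertools import product
from sklearn.tree import DecisionTreeClassifier


def demographic_parity_gap(y_pred, sensitive):
    # Unfairness as the demographic parity gap: |P(yhat=1 | s=1) - P(yhat=1 | s=0)|.
    return abs(y_pred[sensitive == 1].mean() - y_pred[sensitive == 0].mean())


def unfairness_range(X, bb_preds, sensitive, min_fidelity=0.9, seed=0):
    """Illustrative estimate of the unfairness range of high-fidelity explainers.

    Trains a pool of candidate decision-tree surrogates on the black-box
    predictions, keeps those whose fidelity (agreement with the black box)
    is at least `min_fidelity`, and returns the (min, max) unfairness among them.
    """
    rng = np.random.RandomState(seed)
    gaps = []
    # Hypothetical hyperparameter grid for the candidate explainers.
    for depth, leaf in product([3, 4, 5, 6], [1, 5, 10, 20]):
        explainer = DecisionTreeClassifier(
            max_depth=depth,
            min_samples_leaf=leaf,
            random_state=rng.randint(10**6),
        )
        explainer.fit(X, bb_preds)                  # mimic the black-box labels
        surrogate_preds = explainer.predict(X)
        fidelity = (surrogate_preds == bb_preds).mean()
        if fidelity >= min_fidelity:
            gaps.append(demographic_parity_gap(surrogate_preds, sensitive))
    return (min(gaps), max(gaps)) if gaps else None
```

A wide gap between the minimum and maximum unfairness over high-fidelity explainers would indicate that fairwashing is feasible, since an adversary can pick the explainer at the fair end of the range while remaining faithful to the black box.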