The use of counterfactuals in considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution in the use of counterfactuals when the facts under consideration are social categories such as race or gender. We review a broad body of work from philosophy and the social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach to fairness and social explanation in machine learning can require an incoherent theory of what social categories are. Our findings suggest that social categories most often may not admit counterfactual manipulation, and hence may not appropriately satisfy the demands for evaluating the truth or falsity of counterfactuals. This is important because the widespread use of counterfactuals in machine learning can lead to misleading results when applied in high-stakes domains. Accordingly, we argue that even though counterfactuals play an essential part in some causal inferences, their use for questions of algorithmic fairness and social explanation can create more problems than it resolves. Our positive result is a set of tenets about using counterfactuals for fairness and explanations in machine learning.