This article studies how to intervene against statistical discrimination when it is based on beliefs generated by machine learning rather than by humans. Unlike beliefs formed by a human mind, machine learning-generated beliefs are verifiable. This allows interventions to move beyond simple, belief-free designs like affirmative action to more sophisticated ones that constrain decision makers in ways that depend on what they are thinking. Such mind-reading interventions can perform well where affirmative action does not, even when the beliefs being conditioned on are possibly incorrect and biased.