What counts as legitimate AI ethics labor, and consequently, on what epistemic terms are AI ethics claims rendered legitimate? Based on 75 interviews with technologists including researchers, developers, open source contributors, and activists, this paper explores the various epistemic bases from which AI ethics is discussed and practiced. In the context of outside attacks on AI ethics as an impediment to ``progress,'' I show how some AI ethics practices have reached toward the authority of automation and quantification, and achieved some legitimacy as a result, while those grounded in richly embodied and situated lived experience have not. This paper draws together the work of feminist anthropology and Science and Technology Studies scholars Diana Forsythe and Lucy Suchman with the work of postcolonial feminist theorist Sara Ahmed and Black feminist theorist Kristie Dotson to examine the implications of dominant AI ethics practices. By entrenching the epistemic power of quantification, dominant AI ethics practices -- Model Cards and similar interventions -- risk legitimizing AI ethics as a project in exact proportion to the degree that they delegitimize and marginalize embodied and lived experience as a legitimate part of that same project. In response, I propose \textit{humble technical practices}: quantified or technical practices which specifically seek to make their epistemic limits clear in order to flatten hierarchies of epistemic power.