What counts as legitimate AI ethics labor, and consequently, on what epistemic terms are AI ethics claims rendered legitimate? Based on 75 interviews with technologists, including researchers, developers, open source contributors, artists, and activists, this paper explores the various epistemic bases from which AI ethics is practiced. In the context of outside attacks on AI ethics as an impediment to "progress," I show how some AI ethics practices have reached toward scholarly authority, automation, and quantification, and have achieved some legitimacy, while those grounded in richly embodied and situated lived experience have not. This paper draws the work of feminist anthropology and Science and Technology Studies (STS) scholars Diana Forsythe and Lucy Suchman together with the work of postcolonial feminist theorist Sara Ahmed and Black feminist theorist Kristie Dotson to examine the implications of dominant AI ethics practices. I argue that by entrenching the epistemic power of quantification, dominant AI ethics practices legitimize AI ethics as a project only to the extent that they delegitimize and marginalize embodied and lived experience as an equally valid part of that same project. In response, I propose and sketch the idea of humble technical practices: quantified or technical practices that deliberately make their epistemic limits clear, with a view to flattening hierarchies of epistemic power.