Conventional algorithmic fairness is West-centric, as seen in its subgroups, values, and methods. In this paper, we de-center algorithmic fairness and analyse AI power in India. Based on 36 qualitative interviews and a discourse analysis of algorithmic deployments in India, we find that several assumptions of algorithmic fairness are challenged: in India, data is not always reliable due to socio-economic factors, ML makers appear to follow double standards, and AI evokes unquestioning aspiration. We contend that localising model fairness alone can be window dressing in India, where the distance between models and oppressed communities is large. Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.