The increasing deployment of artificial intelligence (AI) tools to inform decision making across diverse areas, including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms by which bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this article, we illustrate how bias in AI-assisted decision making can arise from a range of specific design decisions, each of which may seem self-contained and non-biasing when considered separately. These design decisions include basic problem formulation, the data chosen for analysis, the purposes to which the AI technology is applied, and operational design elements beyond the core algorithm. We draw on three historical models of disability common to different decision-making settings to demonstrate how differences in the definition of disability can lead to highly distinct decisions on each of these aspects of design, which in turn yield AI technologies with a variety of biases and downstream effects. We further show that the potential harms arising from inappropriate definitions of disability at fundamental design stages are amplified by a lack of transparency and of disabled participation throughout the AI design process. Our analysis provides a framework for critically examining AI technologies in decision-making contexts and for guiding the development of a design praxis for disability-related AI analytics. We conclude with key questions intended to facilitate disability-led design and participatory development, with the aim of producing fairer and more equitable AI technologies in disability-related contexts.