Responsible AI guidelines often ask engineers to consider how their systems might cause harm. However, contemporary AI systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible AI practice? In interviews with 27 AI engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible AI guidelines as within their agency, capability, or responsibility to address. We use Lucy Suchman's notion of located accountability to show how responsible AI labor is currently organized and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible AI actions do take place and which are relegated to low-status staff or believed to be the work of the next or previous person in the chain. We argue that current responsible AI interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could improve by taking a located accountability approach, in which relations and obligations intertwine and incrementally add value in the process. This would constitute a shift from "supply chain" thinking to "value chain" thinking.