Brought into public discourse through the investigative work of journalists and scholars, awareness of algorithmic harms is at an all-time high. A growing body of research has been conducted under the banner of responsible artificial intelligence (AI), with the goal of addressing, alleviating, and eventually mitigating the harms brought on by the rollout of algorithmic systems. Nonetheless, implementation of such tools remains low. Given this gap, this paper offers a modest proposal: that the field, particularly researchers concerned with responsible research and innovation, stands to gain from supporting and prioritising more ethnographic work. Such embedded work can surface implementation frictions and reveal organisational and institutional norms that existing research on responsible AI has not yet been able to capture. In turn, this can yield new insights into the anticipation of risks and the mitigation of harms. This paper reviews comparable empirical work found elsewhere, commonly in science and technology studies and safety science, and lays out the challenges of this form of inquiry.