NeurIPS 2020 requested that research paper submissions include impact statements on 'potential nefarious uses and the consequences of failure.' When researching, designing, and implementing systems, however, a key challenge to anticipating risks is overcoming what Clarke (1962) called 'failures of imagination.' The growing body of research on bias, fairness, and transparency in computational systems aims to illuminate and mitigate harms, and could thus help inform reflections on the possible negative impacts of particular pieces of technical work. The prevalent notion of computational harms -- narrowly construed as either allocational or representational harms -- does not fully capture the context-dependent and unobservable nature of harms across the wide range of AI-infused systems. The current literature primarily addresses a small set of examples of harms to motivate algorithmic fixes, overlooking the wider scope of potential harms and the ways these harms may affect different stakeholders. System affordances and possible usage scenarios may also exacerbate harms in unpredictable ways, as they determine the degree of control stakeholders (including non-users) have over how they interact with a system's outputs. To effectively assist in anticipating and identifying harmful uses, we argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, uses, and outputs, as well as viable proxies for assessing harms in the widest sense.