Motivated by mitigating the potentially harmful impacts of technology, the AI community has formulated and accepted mathematical definitions for certain pillars of accountability: e.g., privacy, fairness, and model transparency. Yet we argue this approach is fundamentally misguided: these definitions are imperfect, siloed constructions of the human values they hope to proxy, and they give the guise that those values are sufficiently embedded in our technologies. Under popularized methods, tensions arise when practitioners attempt to achieve the pillars of fairness, privacy, and transparency in isolation or simultaneously. In this position paper, we push for redirection. We argue that the AI community needs to consider all the consequences of choosing particular formulations of these pillars -- not just the technical incompatibilities, but also the effects within the context of deployment. We point towards sociotechnical research for frameworks that address the latter, but push for broader efforts to implement these frameworks in practice.