Research on fairness, accountability, transparency, and ethics of AI-based interventions in society has gained much-needed momentum in recent years. However, it lacks an explicit alignment with a set of normative values and principles that guide this research and these interventions. Rather, an implicit consensus is often assumed to hold for the values we impart into our models, something that is at odds with the pluralistic world we live in. In this paper, we put forth the doctrine of universal human rights as a globally salient and cross-culturally recognized set of values that can serve as a grounding framework for explicit value alignment in responsible AI, and discuss its efficacy as a framework for civil society partnership and participation. We argue that a human rights framework orients the research in this space away from the machines and the risks of their biases, and towards humans and the risks to their rights, essentially helping to center the conversation around who is harmed, what harms they face, and how those harms may be mitigated.