AI ethics is an emerging field with multiple, competing narratives about how best to build human values into machines. Two major approaches focus on bias and on compliance, respectively. But neither fully encompasses ethics: using moral principles to decide how to act in a particular situation. We posit that the way data is labeled plays an essential role in how AI behaves, and therefore in the ethics of machines themselves. The argument combines a fundamental insight from ethics (namely, that ethics is about values) with our practical experience building and scaling machine learning systems. We want to build AI that is actually ethical by first addressing foundational questions: how to build good systems, how to define what is good in relation to system architecture, and who should provide that definition. Building ethical AI creates a foundation of trust between a company and the users of its platform. But this trust is unjustified unless users experience the direct value of ethical AI; until users have real control over how algorithms behave, something is missing in current AI solutions. This gap breeds widespread distrust of AI and apathy toward AI ethics efforts. The scope of this paper is to propose an alternative path that allows for a plurality of values and the freedom of individual expression, both of which are essential for realizing true moral character.