Policy and guideline proposals for ethical artificial-intelligence research have proliferated in recent years. These proposals are intended to guide the socially responsible development of AI for the common good. However, there typically exist incentives for non-cooperation (i.e., non-adherence to such policies and guidelines), and these proposals often lack effective mechanisms to enforce their own normative claims. This situation constitutes a social dilemma: one in which no individual has an incentive to cooperate, even though mutual cooperation would lead to the best outcome for all involved. In this paper, we use stochastic evolutionary game dynamics to model this social dilemma in the context of the ethical development of artificial intelligence. This formalism allows us to isolate variables that may be intervened upon, thus providing actionable suggestions for increased cooperation amongst numerous stakeholders in AI. Our results show how stochastic effects can help make cooperation viable in such a scenario. They suggest that coordination for a common good should be attempted in smaller groups in which the cost of cooperation is low and the perceived risk of failure is high. This provides insight into the conditions under which we should expect such ethics proposals to be successful with regard to their scope, scale, and content.
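To make the methodological setting concrete, the following is a minimal, illustrative sketch of the kind of stochastic evolutionary dynamics the abstract refers to: a finite population whose members imitate each other via the pairwise-comparison (Fermi) rule while playing a risk-based N-player dilemma. The payoff structure (a threshold public-goods game with risk of collective failure) and every parameter value below are assumptions chosen for illustration, not the paper's actual model; the names Z, N, M, b, c, r, beta, and mu are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (hypothetical values, not taken from the paper).
Z = 100      # population size
N = 6        # group size sampled for each game
M = 3        # minimum number of cooperators needed to avoid collective failure
b = 1.0      # benefit each player keeps if the group succeeds
c = 0.1      # cost of cooperating
r = 0.9      # perceived risk: probability that everyone loses b if the group fails
beta = 5.0   # selection strength in the pairwise-comparison (Fermi) rule
mu = 0.01    # mutation (exploration) probability
steps = 50_000

def group_payoff(is_coop, k_others):
    """Payoff of a focal player given k_others cooperators among the other
    N-1 group members, in a threshold game with risk of collective failure."""
    k = k_others + (1 if is_coop else 0)
    success = 1.0 if k >= M else 1.0 - r          # group keeps b with this probability
    return b * success - (c if is_coop else 0.0)

def average_payoff(strategy, n_coop):
    """Estimate the payoff of a player with the given strategy (1 = cooperate)
    when the population holds n_coop cooperators, by sampling co-players."""
    pool = np.array([1] * (n_coop - strategy) + [0] * (Z - n_coop - (1 - strategy)))
    draws = [rng.choice(pool, size=N - 1, replace=False).sum() for _ in range(25)]
    return np.mean([group_payoff(bool(strategy), k) for k in draws])

# Start from an all-defector population and track the number of cooperators.
n_coop = 0
trajectory = []
for _ in range(steps):
    focal, model = rng.integers(Z, size=2)
    s_focal = 1 if focal < n_coop else 0          # players 0..n_coop-1 are cooperators
    s_model = 1 if model < n_coop else 0
    if rng.random() < mu:                          # random exploration of strategies
        s_new = 1 - s_focal
    else:                                          # imitate with Fermi probability
        pf = average_payoff(s_focal, n_coop)
        pm = average_payoff(s_model, n_coop)
        p_imitate = 1.0 / (1.0 + np.exp(-beta * (pm - pf)))
        s_new = s_model if rng.random() < p_imitate else s_focal
    n_coop += s_new - s_focal
    trajectory.append(n_coop)

print("time-averaged fraction of cooperators:", np.mean(trajectory) / Z)
```

Under this toy setup, increasing the risk parameter r or decreasing the cost c tends to raise the time-averaged level of cooperation, which is the qualitative pattern the abstract summarizes; the actual analysis in the paper should be consulted for the model it really uses.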