With the introduction of Artificial Intelligence (AI) and related technologies into our daily lives, fear and anxiety about their misuse, as well as the hidden biases in their creation, have led to demands for regulation to address such issues. Yet blindly regulating an innovation process that is not well understood may stifle that process and reduce the benefits society could gain from the resulting technology, even under the best of intentions. In this paper, starting from a baseline model that captures the fundamental dynamics of a race for domain supremacy using AI technology, we demonstrate how socially undesirable outcomes may be produced when sanctioning is applied unconditionally to risk-taking, i.e. potentially unsafe, behaviours. As an alternative that resolves the detrimental effect of over-regulation, we propose a voluntary commitment approach in which technologists are free to choose between independently pursuing their own course of action and establishing binding agreements to act safely, with sanctioning of those who do not abide by what they pledged. Overall, this work reveals for the first time how voluntary commitments, with sanctions imposed either by peers or by an institution, lead to socially beneficial outcomes in all scenarios envisageable in a short-term race towards domain supremacy through AI technology. These results are directly relevant to the design of governance and regulatory policies that aim to ensure an ethical and responsible AI technology development process.