We consider two fundamental and related issues currently faced by Artificial Intelligence (AI) development: the lack of ethics in AI decisions and the lack of interpretability of those decisions. Can interpretable AI decisions help address ethics in AI? Using a randomized study, we show experimentally that the empirical and liberal turn in the production of explanations tends to select AI explanations with low denunciatory power. Under certain conditions, interpretability tools are therefore not means to, but paradoxically obstacles to, the production of ethical AI, since they can give the illusion of being sensitive to ethical incidents. We also show that the denunciatory power of AI explanations is highly dependent on the context in which the explanation takes place, such as the gender or education level of the person for whom the explanation is intended. AI ethics tools are therefore sometimes too flexible, and self-regulation through the liberal production of explanations does not seem sufficient to address ethical issues. We then propose two scenarios for the future development of ethical AI: more external regulation or further liberalization of AI explanations. These two opposing paths will play a major role in the future development of ethical AI.