While artificial intelligence (AI) has great potential to transform many industries, there are concerns about its ability to make decisions responsibly. Many AI ethics guidelines and principles have recently been proposed by governments and various organisations, covering areas such as privacy, accountability, safety, reliability, transparency, explainability, contestability, and fairness. However, such principles are typically high-level and do not provide tangible guidance on how to design and develop responsible AI systems. To address this shortcoming, we present an empirical study involving interviews with 21 scientists and engineers, designed to gain insight into practitioners' perceptions of AI ethics principles, their possible implementation, and the trade-offs between the principles. The salient findings cover four aspects of AI system development: (i) overall development process, (ii) requirements engineering, (iii) design and implementation, and (iv) deployment and operation.