Although consensus is emerging across the various published sets of AI ethics principles, a gap remains between these high-level principles and practical techniques that can be readily adopted to design and develop responsible AI systems. We examine the practices and experiences of researchers and engineers from Australia's national scientific research agency (CSIRO) who are involved in designing and developing AI systems for many application areas. Semi-structured interviews were used to examine how the participants' practices relate to and align with a set of high-level AI ethics principles proposed by the Australian Government. The principles comprise: (1) privacy protection and security, (2) reliability and safety, (3) transparency and explainability, (4) fairness, (5) contestability, (6) accountability, (7) human-centred values, and (8) human, social and environmental wellbeing. We discuss the insights gained from the interviews, including various tensions and trade-offs between the principles, and provide suggestions for implementing each high-level principle. We also present suggestions aimed at enhancing associated support mechanisms.