As consensus across the various published AI ethics principles is approached, a gap remains between those high-level principles and practical techniques that can be readily adopted to design and develop responsible AI systems. We examine the practices and experiences of researchers and engineers from Australia's national scientific research agency (CSIRO), who are involved in designing and developing AI systems for many application areas. Semi-structured interviews were used to examine how the participants' practices relate to and align with a set of high-level AI ethics principles proposed by the Australian Government. The principles comprise: (1) privacy protection and security, (2) reliability and safety, (3) transparency and explainability, (4) fairness, (5) contestability, (6) accountability, (7) human-centred values, and (8) human, social and environmental wellbeing. Discussion of the insights gained from the interviews covers various tensions and trade-offs between the principles, and provides suggestions for implementing each high-level principle. We also present suggestions aimed at enhancing associated support mechanisms.