Artificial intelligence (AI) ethics struggles to gain ground with actionable methods and models that practitioners can use when developing and implementing ethically sound AI systems. AI ethics remains a vague concept, lacking a consensus definition or theoretical grounding and bearing little connection to practice. Practice that consists primarily of technical tasks, such as software development, is not well equipped to process and decide upon ethical considerations. Efforts to create tools and guidelines for people working in AI development have concentrated almost solely on the technical aspects of AI. A few exceptions exist, such as the ECCOLA method for creating ethically aligned AI systems. ECCOLA has demonstrated results in increasing ethical consideration in AI systems development. Yet it is a novel innovation, and room for development remains. This study extends ECCOLA with a deployment model intended to drive its adoption, since any method, no matter how good, is of no value without adoption and use. The model includes simple metrics that facilitate the communication of ethical gaps and of the outcomes of ethical AI development. It makes it possible to assess any AI system at any given lifecycle phase, opening possibilities such as analyzing the ethicality of an AI system under acquisition.
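The abstract does not specify what the "simple metrics" look like. As a rough illustration only, the sketch below assumes one plausible form: ECCOLA is card-based, so a coverage ratio of addressed cards per theme could express ethical gaps at a given lifecycle phase. The card inventory and the names ECCOLA_CARDS, theme_coverage, and ethical_gaps are hypothetical, not the metrics defined by the study.

```python
# Hypothetical sketch, not the deployment model's actual metrics:
# a per-theme coverage ratio over an assumed ECCOLA card inventory.
from typing import Dict, Set

# Assumed inventory: theme -> identifiers of the cards belonging to it.
ECCOLA_CARDS: Dict[str, Set[str]] = {
    "Transparency": {"T1", "T2", "T3"},
    "Data": {"D1", "D2"},
    "Fairness": {"F1", "F2"},
    "Accountability": {"A1", "A2"},
}

def theme_coverage(addressed: Set[str]) -> Dict[str, float]:
    """Share of each theme's cards addressed in the current lifecycle phase."""
    return {
        theme: len(cards & addressed) / len(cards)
        for theme, cards in ECCOLA_CARDS.items()
    }

def ethical_gaps(addressed: Set[str], threshold: float = 1.0) -> Dict[str, float]:
    """Themes whose coverage falls below the threshold, i.e., the gaps to report."""
    return {
        theme: cov
        for theme, cov in theme_coverage(addressed).items()
        if cov < threshold
    }

if __name__ == "__main__":
    # E.g., a phase in which all transparency cards but only one data card were handled.
    done = {"T1", "T2", "T3", "D1"}
    print(theme_coverage(done))  # {'Transparency': 1.0, 'Data': 0.5, ...}
    print(ethical_gaps(done))    # themes still needing attention
```

A ratio like this could be computed at any lifecycle phase, including for a system under acquisition, which matches the assessment opportunity the abstract describes.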