Advances in machine learning (ML) technologies have greatly improved Artificial Intelligence (AI) systems. As a result, AI systems have become ubiquitous, and their application is prevalent in virtually all sectors. However, AI systems have also prompted ethical concerns, especially as their use extends into sensitive areas such as healthcare, transportation, and security. Users are therefore calling for better AI governance practices in ethical AI systems, and AI development methods are encouraged to foster these practices. This research analyzes the ECCOLA method for developing ethical and trustworthy AI systems to determine whether it enables AI governance in development processes through ethical practices. The results demonstrate that while ECCOLA fully facilitates AI governance with respect to corporate governance practices across all of its processes, some of its practices do not fully foster data governance and information governance practices. This indicates that the method can be further improved.