Artificial Intelligence (AI) solutions and technologies are being increasingly adopted in smart systems; however, such technologies continue to raise ethical concerns and uncertainties. Various guidelines, principles, and regulatory frameworks have been designed to ensure that AI technologies serve ethical well-being. However, the implications of AI ethics principles and guidelines are still being debated. To further explore the significance of AI ethics principles and the relevant challenges, we conducted a survey of 99 representative AI practitioners and lawmakers (e.g., AI engineers, lawyers) from twenty countries across five continents. To the best of our knowledge, this is the first empirical study that captures the perceptions of two distinct populations (AI practitioners and lawmakers). The study findings confirm that transparency, accountability, and privacy are the most critical AI ethics principles. On the other hand, lack of ethical knowledge, lack of legal frameworks, and lack of monitoring bodies are found to be the most common AI ethics challenges. The impact analysis of the challenges across the AI ethics principles reveals that conflict in practice is a highly severe challenge. Moreover, the perceptions of practitioners and lawmakers are statistically correlated, with significant differences for particular principles (e.g., fairness, freedom) and challenges (e.g., lack of monitoring bodies, machine distortion). Our findings motivate further research, especially toward extending existing capability maturity models to support the development and quality assessment of ethics-aware AI systems.
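The abstract does not specify which statistical tests underlie the reported group comparison; the sketch below is only an illustrative assumption of how such an analysis is commonly done, using hypothetical Likert-style ratings and toy per-principle rankings rather than the study's data. It pairs a Mann-Whitney U test (group differences on a single principle) with a Spearman rank correlation (overall agreement between the two populations' principle rankings).

```python
# Illustrative sketch only: assumes hypothetical 5-point ratings from two groups
# (practitioners, lawmakers); this is NOT the authors' reported procedure or data.
import numpy as np
from scipy import stats

# Hypothetical ratings of one principle (e.g., "fairness") by each group.
practitioners = np.array([5, 4, 4, 3, 5, 4, 2, 5, 4, 3])
lawmakers     = np.array([3, 3, 4, 2, 3, 4, 3, 2, 3, 4])

# Mann-Whitney U test: are the two groups' rating distributions significantly different?
u_stat, p_diff = stats.mannwhitneyu(practitioners, lawmakers, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_diff:.3f}")

# Spearman correlation over per-principle ranks: do the groups broadly agree on
# which principles matter most? (Toy ranking over five principles.)
practitioner_ranks = [1, 2, 3, 4, 5]   # e.g., transparency ranked 1st, ...
lawmaker_ranks     = [2, 1, 3, 5, 4]
rho, p_corr = stats.spearmanr(practitioner_ranks, lawmaker_ranks)
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.3f}")
```

A significant p-value from the first test would indicate a group difference on that particular principle, while a high, significant rho from the second would indicate that the two populations' overall prioritizations are correlated, which is the kind of pattern the abstract describes.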