The rapid development of artificial intelligence (AI) has led to increasing concerns about the capability of AI systems to make decisions and behave responsibly. Responsible AI (RAI) refers to the development and use of AI systems that benefit humans, society, and the environment while minimising the risk of negative consequences. To ensure responsible AI, the risks associated with the development and use of AI systems must be identified, assessed, and mitigated. Various AI risk assessment frameworks have recently been released by governments, organisations, and companies. However, it can be challenging for AI stakeholders to gain a clear picture of the available frameworks and to determine the most suitable ones for a specific context. There is also a need to identify areas that require further research or the development of new frameworks, as well as to update and maintain existing ones. To fill this gap, we present a mapping study of 16 existing AI risk assessment frameworks from industry, governments, and non-government organisations (NGOs). We identify the key characteristics of each framework and analyse them in terms of RAI principles, stakeholders, system lifecycle stages, geographical locations, targeted domains, and assessment methods. Our study provides a comprehensive analysis of the current state of these frameworks and highlights areas of convergence and divergence among them. We also identify deficiencies in the existing frameworks and outline the essential characteristics of a concrete and connected AI risk assessment (C$^2$AIRA) framework. Our findings and insights can help relevant stakeholders choose suitable AI risk assessment frameworks and guide the design of future frameworks towards concreteness and connectedness.