This paper proposes a comprehensive analysis of existing concepts from disciplines addressing the notion of intelligence, namely psychology and engineering, and from disciplines aiming to regulate AI innovations, namely AI ethics and law. The aim is to identify shared notions or discrepancies to consider when qualifying AI systems. Relevant concepts are integrated into a matrix intended to help define more precisely when and how computing tools (programs or devices) may be qualified as AI, while highlighting critical features that serve a specific technical, ethical and legal assessment of the challenges in AI development. Some adaptations of existing notions of AI characteristics are proposed. The matrix is a risk-based conceptual model designed to allow an empirical, flexible and scalable qualification of AI technologies from the perspective of benefit-risk assessment practices, technological monitoring and regulatory compliance: it offers a structured reflection tool for stakeholders in AI development who are engaged in responsible research and innovation.

Pre-print version (archived in May 2020)