This report, prepared by the Montreal AI Ethics Institute, provides recommendations in response to the National Security Commission on Artificial Intelligence (NSCAI) Key Considerations for Responsible Development and Fielding of Artificial Intelligence document. The report centres on the idea that Responsible AI should be made the norm rather than the exception. It does so through three guiding principles: (1) alleviating friction in existing workflows, (2) empowering stakeholders to gain buy-in, and (3) effectively translating abstract standards into actionable engineering practices. After offering some overarching comments on the NSCAI document, the report presents its primary contribution: an actionable framework to help operationalize the ideas presented in that document. The framework consists of: (1) a learning, knowledge, and information exchange (LKIE), (2) the Three Ways of Responsible AI, (3) an empirically driven risk-prioritization matrix, and (4) achieving the right level of complexity. All components reinforce one another to move from principles to practice in service of making Responsible AI the norm rather than the exception.