Organizations that develop and deploy artificial intelligence (AI) systems need to manage the associated risks for economic, legal, and ethical reasons. However, it is not always clear who within an organization is responsible for AI risk management. The Three Lines of Defense (3LoD) model, considered best practice in many industries, may offer a solution: it is a risk management framework that helps organizations assign and coordinate risk management roles and responsibilities. In this article, I suggest ways in which AI companies could implement the model. I also discuss how the model could help reduce risks from AI: it could identify and close gaps in risk coverage, increase the effectiveness of risk management practices, and enable the board of directors to oversee management more effectively. The article is intended to inform decision-makers at leading AI companies, regulators, and standard-setting bodies.