This article argues that frontier artificial intelligence (AI) developers need an internal audit function. First, it describes the role of internal audit in corporate governance: internal audit evaluates the adequacy and effectiveness of a company's risk management, control, and governance processes. It is organizationally independent from senior management and reports directly to the board of directors, typically its audit committee. In the Institute of Internal Auditors' (IIA) Three Lines Model, internal audit serves as the third line and is responsible for providing assurance to the board, while the Combined Assurance Framework highlights the need to coordinate the activities of internal and external assurance providers. Next, the article provides an overview of key governance challenges in frontier AI development: dangerous capabilities can emerge unpredictably and remain undetected; it is difficult to prevent a deployed model from causing harm; frontier models can proliferate rapidly; frontier AI risks are inherently difficult to assess; and frontier AI developers do not seem to follow best practices in risk governance. Finally, the article discusses how an internal audit function could address some of these challenges: internal audit could identify ineffective risk management practices; it could give the board of directors a more accurate understanding of the current level of risk and the adequacy of the developer's risk management practices; and it could serve as a contact point for whistleblowers. In light of rapid progress in AI research and development, frontier AI developers need to strengthen their risk governance. Instead of reinventing the wheel, they should follow existing best practices. While doing so might not be sufficient, they should not skip this obvious first step.