The paper argues that organizations with the stated goal of building artificial general intelligence (AGI) need an internal audit function. First, it explains what internal audit is: a dedicated team that continuously assesses an organization's risk management practices and reports directly to the board of directors, while remaining organizationally independent from senior management. Next, the paper discusses the main benefits of internal audit for AGI labs: it can make their risk management practices more effective; ensure that the board of directors has a more accurate picture of the current level of risk and of the effectiveness of the lab's risk management practices; signal that the lab follows best practices in corporate governance; and serve as a contact point for whistleblowers. However, AGI labs should be aware of a number of limitations: internal audit adds friction; there is little empirical evidence for the above-mentioned benefits; the benefits depend on the people involved and their ability and willingness to identify ineffective risk management practices; setting up and maintaining an internal audit team is costly; and internal audit should only be seen as an additional "layer of defense", not a silver bullet against emerging risks from AI. Finally, the paper provides a blueprint for how AGI labs could set up an internal audit team and suggests concrete activities the team could perform on a day-to-day basis. These suggestions are based on the International Standards for the Professional Practice of Internal Auditing. In light of rapid progress in AI research and development, AGI labs need to professionalize their risk management practices. Instead of "reinventing the wheel", they should follow existing best practices in corporate governance. This will not be sufficient as labs approach AGI, but they should not skip this obvious first step.