This paper examines the state of affairs of Frontier Safety Policies (FSPs) in light of capability progress and the growing expectations that government actors and AI safety researchers place on these policies. It subsequently argues that FSPs should evolve into a more granular version, which this paper calls FSPs Plus. Compared with the first wave of FSPs led by a subset of frontier AI companies, FSPs Plus should be built around two main pillars. First, FSPs Plus should adopt precursory capabilities as a new, clearer, and more comprehensive set of metrics. To this end, this paper recommends that international or domestic standardization bodies develop a standardized taxonomy of precursory components of high-impact capabilities, which FSPs Plus could then adopt by reference. The Frontier Model Forum could lead the way by establishing preliminary consensus among frontier AI developers on this topic. Second, FSPs Plus should expressly incorporate AI safety cases and establish a mutual feedback mechanism between FSPs Plus and those safety cases. To establish such a mechanism, FSPs Plus could be updated to include a clear commitment to make AI safety cases at different milestones during development and deployment, to build and adopt safety measures based on the content and confidence of those safety cases, and, on the same basis, to keep updating and adjusting FSPs Plus themselves.