In November 2023, the UK and US announced the creation of their AI Safety Institutes (AISIs). Five other jurisdictions have since established AISIs or similar institutions, and more are likely to follow. While these institutions vary considerably, they also share key similarities worth identifying. This primer describes one cluster of similar AISIs, the "first wave," consisting of the Japan, UK, and US AISIs. First-wave AISIs share several fundamental characteristics: they are technical government institutions, they have a clear mandate related to the safety of advanced AI systems, and they lack regulatory powers. Safety evaluations are at the center of first-wave AISIs' work. These evaluations test AI systems across a range of tasks to understand their behavior and capabilities relevant to risks such as cyber, chemical, and biological misuse. First-wave AISIs also share three core functions: research, standards, and cooperation. These functions are critical to their work on safety evaluations and also support other activities, such as scientific consensus-building and foundational AI safety research. Despite its growing popularity, the AISI model is not free from challenges and limitations. Some analysts have criticized the first wave of AISIs, for example, for specializing too narrowly in a sub-area and for being potentially redundant with existing institutions. Future developments may rapidly change this landscape, and our broad-strokes description may not capture the particularities of individual AISIs. This policy brief aims to outline the core elements of first-wave AISIs to encourage and improve conversations about this novel institutional model, acknowledging that it offers a simplified snapshot rather than a timeless prescription.