Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper situates these new generative models in the current debate on trustworthy AI regulation and asks how the law can be tailored to their capabilities. The paper proceeds in three steps, covering (1) direct regulation, (2) content moderation, and (3) policy proposals. It concludes with two distinct policy proposals to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. First, rules in the AI Act and other direct regulation must match the specificities of pre-trained models. In particular, concrete high-risk applications, and not the pre-trained model itself, should be the object of high-risk obligations. Moreover, detailed transparency obligations are warranted. Non-discrimination provisions may, however, apply to LGAIM developers. Second, the core of the DSA content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep pace with the dynamics of ChatGPT et al.