Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper situates these new generative models in the current debate on trustworthy AI regulation and asks how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, as well as recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest four strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. In particular, regulation should focus on concrete high-risk applications, rather than the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep pace with the dynamics of ChatGPT et al.