Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper situates these new generative models in the current debate on trustworthy AI regulation and asks how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, and recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest four strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. In particular, regulation should focus on concrete high-risk applications, rather than the pre-trained model itself, and should include obligations regarding (i) transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep pace with the dynamics of ChatGPT et al.