AI-generated images have become so realistic in recent years that individuals often can no longer distinguish them from "real" images. This development, combined with the rapid spread of AI-generated content online, creates a series of societal risks. Watermarking, a technique that embeds information in images and other content to indicate their AI-generated nature, has emerged as a primary mechanism to address these risks. Indeed, watermarking and AI labelling measures are now becoming a legal requirement in many jurisdictions, including under the 2024 European Union AI Act. Despite the widespread use of AI image generation systems, the practical implications and the current state of implementation of these measures remain largely unexamined. The present paper therefore provides both an empirical and a legal analysis of these measures. In our legal analysis, we identify four categories of generative AI deployment scenarios and outline how the legal obligations could apply in each category. In our empirical analysis, we find that only a minority of AI image generators currently implement adequate watermarking (38%) and deepfake labelling (18%) practices. In response, we suggest a range of avenues for improving the implementation of these legally mandated techniques, and we publicly share our tooling for the detection of watermarks in images.