Generative AI risks such as bias and lack of representation impact people who do not interact directly with GAI systems, but whose content does: indirect users. Several approaches to mitigating harms to indirect users have been described, but most require top down or external intervention. An emerging strategy, prompt injections, provides an empowering alternative: indirect users can mitigate harm against them, from within their own content. Our approach proposes prompt injections not as a malicious attack vector, but as a tool for content/image owner resistance. In this poster, we demonstrate one case study of prompt injections for empowering an indirect user, by retaining an image owner's gender and disabled identity when an image is described by GAI.
翻译:暂无翻译