Generative AI for image creation has emerged as a staple in the toolkit of digital artists, visual designers, and the general public. Social media users have many tools to shape their visual representation: image editing tools, filters, face masks, face swaps, avatars, and AI-generated images. The importance of the right profile image cannot be overstated: it is crucial for creating the right first impression, sustaining trust, and enabling communication. Accurate and respectful representation of individuals, groups, and collectives may help foster inclusivity, understanding, and respect in society, ensuring that diverse perspectives are acknowledged and valued. While previous research has revealed biases in large image datasets such as ImageNet and the biases inherited by AI systems trained on them, in this work we examine the prejudices and stereotypes that emerge from the textual prompts used to generate images on Discord with the Stable Diffusion model. We analyze over 1.8 million prompts depicting men and women and use statistical methods to uncover how prompts describing men and women are constructed and which words constitute the portrayals of the respective genders. We show that the median description length for men is systematically shorter than that for women, while the overall word-length distributions suggest a shared prompting practice across both groups. Topic analysis points to classic stereotypes: men are described with dominance-related qualities such as "strong" and "rugged", whereas women are represented with concepts related to the body and submission, such as "beautiful" and "pretty". These results highlight the importance of the original intent behind prompting and suggest that cultural practices on platforms such as Discord should be considered when designing interfaces that promote exploration and fair representation.
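The length comparison described above can be sketched in a few lines. The snippet below is illustrative only: the prompt samples are invented stand-ins for the Discord corpus, and the paper's exact preprocessing and statistical tests are not specified here; it simply shows the kind of per-group median word-count comparison the abstract refers to.

```python
from statistics import median

# Toy prompt samples standing in for the 1.8M Discord prompts
# (hypothetical examples, not drawn from the actual dataset).
male_prompts = [
    "portrait of a strong rugged man",
    "a man in armor",
    "handsome man, cinematic lighting",
]
female_prompts = [
    "portrait of a beautiful young woman, detailed face, soft light",
    "a pretty woman in an elegant dress, studio photo",
    "beautiful woman, long hair, highly detailed, 8k",
]

def word_lengths(prompts):
    """Return the word count of each prompt."""
    return [len(p.split()) for p in prompts]

male_median = median(word_lengths(male_prompts))
female_median = median(word_lengths(female_prompts))

# In this toy sample the male median is shorter, mirroring the
# direction of the effect reported in the abstract.
print(male_median, female_median)
```

In a real analysis one would additionally test whether the two length distributions differ significantly, for example with a rank-based test such as Mann-Whitney U, since word counts are unlikely to be normally distributed.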