Many online hate groups exist to disparage others based on race, gender identity, sex, or other characteristics. The accessibility of these communities allows users to join multiple types of hate groups (e.g., a racist community and a misogynistic community), raising the question of whether these peripatetic users become further radicalized compared to users who stay in one type of hate group. However, little is known about the dynamics of joining multiple types of hate groups, or about the effect of these groups on peripatetic users. In this paper, we develop a new method to classify hate subreddits and the identities they disparage, which we use to better understand how users become peripatetic (i.e., join different types of hate subreddits). The classification technique uses human-validated LLMs to extract the protected identities attacked, if any, across 168 subreddits. We then cluster identity-attacking subreddits to discover three broad categories of hate: racist, anti-LGBTQ, and misogynistic. We show that becoming active in a user's first hate subreddit can cause them to become active in additional hate subreddits of a different category. We also find that users who join additional hate subreddits, especially those of a different category, become more active in hate subreddits as a whole and develop a wider hate-group lexicon. Motivated by these findings, we train an AI model that usefully predicts the hate categories users will become active in, based on the text of posts they read and write. The accuracy of this model may be partly driven by peripatetic users often adopting the language of hate subreddits they eventually join. Overall, these results highlight the unique risks posed by hate communities on a social media platform, as discussion of alternative targets of hate may lead users to attack a broader set of protected identities.
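As a rough illustration of the kind of LLM-based labeling step described above, the sketch below prompts a model for the protected identities a subreddit attacks. The prompt wording, identity list, and `call_llm` stub are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch of the LLM-based identity-extraction step; the prompt
# wording, identity list, and call_llm stub are illustrative assumptions,
# not the authors' implementation.

PROTECTED_IDENTITIES = [
    "race/ethnicity", "gender identity", "sex", "sexual orientation",
    "religion", "nationality", "disability",
]

PROMPT = (
    "You are labeling a Reddit community. Given sample posts from one "
    "subreddit, list which of these protected identities, if any, the "
    "community attacks: {identities}.\n\nPosts:\n{posts}\n\n"
    "Answer with a comma-separated subset of the list above, or 'none'."
)


def call_llm(prompt: str) -> str:
    """Stand-in for a human-validated LLM call (e.g., a chat-completion API)."""
    raise NotImplementedError("plug in an LLM client here")


def label_subreddit(posts: list[str]) -> set[str]:
    """Return the set of protected identities a subreddit's posts attack."""
    prompt = PROMPT.format(
        identities=", ".join(PROTECTED_IDENTITIES),
        posts="\n".join(posts[:50]),  # truncate so the prompt fits a context window
    )
    answer = call_llm(prompt).strip().lower()
    if answer == "none":
        return set()
    return {label.strip() for label in answer.split(",")
            if label.strip() in PROTECTED_IDENTITIES}
```

Binarizing these per-subreddit label sets and clustering the resulting vectors (e.g., with k-means) is one plausible way the broad racist, anti-LGBTQ, and misogynistic categories reported above could be recovered.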