In the summer of 2021, users on the livestreaming platform Twitch were targeted by a wave of "hate raids," a form of attack that overwhelms a streamer's chatroom with hateful messages, often through the use of bots and automation. Using a mixed-methods approach, we combine a quantitative measurement of attacks across the platform with interviews of streamers and third-party bot developers. We present evidence confirming that some hate raids were highly targeted, hate-driven attacks, but we also observe another mode of hate raid resembling networked harassment and specific forms of subcultural trolling. We show that streamers who self-identify as LGBTQ+ and/or Black were disproportionately targeted and that hate raid messages were most commonly rooted in anti-Black racism and antisemitism. We also document how these attacks elicited rapid community responses, both bolstering reactive moderation and developing proactive mitigations against future attacks. We conclude by discussing how platforms can better prepare for attacks and protect at-risk communities while considering the division of labor among community moderators, tool-builders, and platforms.