Toxic comments are the top form of hate and harassment experienced online. While many studies have investigated the types of toxic comments posted online, the effects that such content has on people, and the impact of potential defenses, no study has captured the long-term behaviors of the accounts that post toxic comments or how toxic comments are operationalized. In this paper, we present a longitudinal measurement study of 929K accounts that post toxic comments on Reddit over an 18-month period. Combined, these accounts posted over 14 million toxic comments that encompass insults, identity attacks, threats of violence, and sexual harassment. We explore the impact that these accounts have on Reddit, the targeting strategies that abusive accounts adopt, and the distinct patterns that distinguish classes of abusive accounts. Our analysis forms the foundation for new time-based and graph-based features that can improve automated detection of toxic behavior online and informs the nuanced interventions needed to address each class of abusive account.