In this work, we demonstrate how existing classifiers for identifying toxic comments online fail to generalize to the diverse concerns of Internet users. We survey 17,280 participants to understand how user expectations for what constitutes toxic content differ across demographics, beliefs, and personal experiences. We find that groups historically at risk of harassment, such as people who identify as LGBTQ+ or young adults, are more likely to flag a random comment drawn from Reddit, Twitter, or 4chan as toxic, as are people who have personally experienced harassment in the past. Based on our findings, we show how current one-size-fits-all toxicity classification algorithms, like the Perspective API from Jigsaw, can improve in accuracy by 86% on average through personalized model tuning. Ultimately, we highlight current pitfalls and new design directions that can improve the equity and efficacy of toxic content classifiers for all users.