To curb the problem of false information, social media platforms like Twitter started adding warning labels to content discussing debunked narratives, with the goal of providing more context to their audiences. Unfortunately, these labels are not applied uniformly and leave large amounts of false content unmoderated. This paper presents LAMBRETTA, a system that automatically identifies tweets that are candidates for soft moderation using Learning To Rank (LTR). We run LAMBRETTA on Twitter data to moderate false claims related to the 2020 US Election and find that it flags over 20 times more tweets than Twitter, with only 3.93% false positives and 18.81% false negatives, outperforming alternative state-of-the-art methods based on keyword extraction and semantic search. Overall, LAMBRETTA assists human moderators in identifying and flagging false information on social media.