How can citizens moderate hate, toxicity, and extremism in online discourse? We analyze a large corpus of more than 130,000 discussions on German Twitter over a turbulent four-year period marked by the migrant crisis and political upheavals. With the help of human annotators, language models, machine learning classifiers, and longitudinal statistical analyses, we discern the dynamics of different dimensions of discourse. We find that expressing simple opinions, not necessarily supported by facts but also free of insults, relates to the least hate, toxicity, and extremity of speech and speakers in subsequent discussions. Sarcasm also helps in achieving those outcomes, in particular in the presence of organized extreme groups. More constructive comments, such as providing facts or exposing contradictions, can backfire and attract more extremity. Mentioning either outgroups or ingroups is typically related to a deterioration of discourse in the long run. A pronounced emotional tone, whether negative such as anger or fear, or positive such as enthusiasm and pride, also leads to worse outcomes. Going beyond one-shot analyses on smaller samples of discourse, our findings have implications for the successful management of online commons through collective civic moderation.