A rapidly increasing amount of human conversation occurs online. But divisiveness and conflict can fester in text-based interactions on social media platforms, in messaging apps, and on other digital forums. Such toxicity increases polarization and, importantly, corrodes the capacity of diverse societies to develop efficient solutions to complex social problems that impact everyone. Scholars and civil society groups promote interventions that can make interpersonal conversations less divisive or more productive in offline settings, but scaling these efforts to the amount of discourse that occurs online is extremely challenging. We present results of a large-scale experiment that demonstrates how online conversations about divisive topics can be improved with artificial intelligence tools. Specifically, we employ a large language model to make real-time, evidence-based recommendations intended to improve participants' perception of feeling understood in conversations. We find that these interventions improve the reported quality of the conversation, reduce political divisiveness, and improve the tone, without systematically changing the content of the conversation or moving people's policy attitudes. These findings have important implications for future research on social media, political deliberation, and the growing community of scholars interested in the place of artificial intelligence within computational social science.