Online community moderators are on the front lines of combating problems like hate speech and harassment, but new modes of interaction can introduce unexpected challenges. In this paper, we consider moderation practices and challenges in the context of real-time, voice-based communication through 25 in-depth interviews with moderators on Discord. Our findings suggest that the affordances of voice-based online communities change what it means to moderate content and interactions. Not only are there new ways to break rules that moderators of text-based communities find unfamiliar, such as disruptive noise and voice raiding, but acquiring evidence of rule-breaking behaviors is also more difficult due to the ephemerality of real-time voice. While moderators have developed new moderation strategies, these strategies are limited and often based on hearsay and first impressions, resulting in problems ranging from unsuccessful moderation to false accusations. Based on these findings, we discuss how voice communication complicates current understandings and assumptions about moderation, and outline ways that platform designers and administrators can design technology to facilitate moderation.