Video-sharing platforms (VSPs) have increasingly embraced social features such as likes, comments, and Danmaku to boost user engagement. However, viewers may post inappropriate content through video commentary to gain attention or to express themselves anonymously and even toxically. For example, on VSPs that support Danmaku, users may intentionally create a "flood" of Danmaku with inappropriate content overlaid on videos, disrupting the overall viewing experience. Despite the prevalence of inappropriate Danmaku on these VSPs, little is known about the challenges and limitations of Danmaku content moderation on video-sharing platforms. To explore how users perceive the challenges and limitations of current Danmaku moderation methods on VSPs, we conducted probe-based interviews and co-design activities with 21 active end-users. Our findings reveal that neither one-size-fits-all rules set by users nor customizable moderation settings can accurately match the continuous stream of Danmaku. Additionally, moderation requirements and the definition of offensive content must adjust dynamically to the video content, and non-intrusive methods should be used to preserve the coherence of the video browsing experience. Our findings inform the design of future Danmaku moderation tools on video-sharing platforms.