Marking biased texts is a practical approach to increasing media bias awareness among news consumers. However, little is known about whether such awareness generalizes to new topics or unmarked news articles, and the role of machine-generated bias labels in enhancing awareness remains unclear. This study tests how news consumers can be trained and prebunked to detect media bias using bias labels obtained from different sources (Human or AI) and presented in different forms. We conducted two experiments with 470 and 846 participants, exposing them to various bias-labeling conditions, and subsequently tested how much bias they could identify in unlabeled news materials on new topics. The results show that both Human labels (t(467) = 4.55, p < .001, d = 0.42) and AI labels (t(467) = 2.49, p = .039, d = 0.23) increased correct detection compared to the control group, with Human labels yielding the larger effect size and stronger statistical significance. The control group also improved its performance (t(467) = 4.51, p < .001, d = 0.21) through mere exposure to the study materials. We further found that participants trained with marked biased phrases detected bias most reliably (F(1, 834) = 44.00, p < .001, partial η² = 0.048). Our experimental framework provides theoretical implications for systematically assessing the generalizability of learning effects in identifying media bias. The findings also provide practical implications for developing news-reading platforms that offer bias indicators and for designing media literacy curricula to enhance media bias awareness.
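For readers interpreting the reported statistics, the effect sizes above are assumed to follow the textbook definitions of Cohen's d (for the pairwise t-tests) and partial eta squared (for the ANOVA); the formulas below are these standard definitions, not equations taken from the study itself:

$$
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
\eta^2_{\text{part}} = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
$$

By Cohen's conventional benchmarks (d: 0.2 small, 0.5 medium; η²: 0.01 small, 0.06 medium), the reported d = 0.42 and partial η² = 0.048 both fall in the small-to-medium range.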