Health misinformation on social media devastates physical and mental health, invalidates health gains, and potentially costs lives. Understanding how health misinformation is transmitted is an urgent goal for researchers, social media platforms, health sectors, and policymakers seeking to mitigate those ramifications. Deep learning methods have been deployed to predict the spread of misinformation. While achieving state-of-the-art predictive performance, deep learning methods lack interpretability due to their black-box nature. To remedy this gap, this study proposes a novel interpretable deep learning approach, Generative Adversarial Network-based Piecewise Wide and Attention Deep Learning (GAN-PiWAD), to predict health misinformation transmission on social media. Improving upon state-of-the-art interpretable methods, GAN-PiWAD captures the interactions among multi-modal data, offers unbiased estimation of the total effect of each feature, and models the dynamic total effect of each feature as its value varies. We select features according to social exchange theory and evaluate GAN-PiWAD on 4,445 misinformation videos. The proposed approach outperformed strong benchmarks. Interpretation of GAN-PiWAD indicates that video description, negative video content, and channel credibility are key features driving the viral transmission of misinformation. This study contributes to IS with a novel interpretable deep learning method that is generalizable to understanding other human decision factors. Our findings provide direct implications for social media platforms and policymakers in designing proactive interventions to identify misinformation, control transmission, and manage infodemics.