The spread of disinformation on social media platforms such as Facebook is harmful to society. This harm may manifest as a gradual degradation of public discourse; but it can also take the form of sudden, dramatic events such as the recent insurrection on Capitol Hill. The platforms themselves are in the best position to curb the spread of disinformation, as they have the best access to relevant data and the expertise to use it. However, mitigating disinformation is costly, not only because of the expense of implementing classification algorithms or employing manual detection, but also because limiting such highly viral content impacts user growth and thus potential advertising revenue. Since the costs of harmful content are borne by other entities, the platform has no incentive to exercise the socially optimal level of effort. This problem resembles that of environmental regulation, in which the costs of adverse events are not directly borne by a firm, the mitigation effort of a firm is not observable, and the causal link between a harmful consequence and a specific failure is difficult to prove. In the environmental-regulation domain, one solution is to perform costly monitoring to ensure that the firm takes adequate precautions according to a specified rule. However, classifying disinformation is performative, and thus a fixed rule becomes less effective over time. Encoding our domain as a Markov decision process, we demonstrate that no penalty based on a static rule, no matter how large, can incentivize adequate effort by the platform. Penalties based on an adaptive rule can incentivize optimal effort, but counterintuitively, only if the regulator sufficiently overreacts to harmful events by requiring a greater-than-optimal level of effort. We therefore advocate mechanisms that elicit platforms' costs of precautionary effort in order to bypass such an overreaction.
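The intuition behind the static-rule result can be illustrated with a stylized sketch (the parameters, decay model, and function names below are hypothetical illustrations, not the paper's actual formulation). Because classification is performative, a static rule's ability to detect violations decays over time, so the platform's expected penalty from shirking shrinks toward zero; for any finite penalty there is eventually a point at which exerting effort is no longer worthwhile:

```python
# Stylized illustration (hypothetical parameters, NOT the paper's model):
# under a static rule, detection probability decays as disinformation
# adapts, so the expected penalty for shirking vanishes over time.

def expected_penalty(penalty: float, detect0: float, decay: float, t: int) -> float:
    """Expected penalty at time t if the platform exerts no effort.

    detect0: initial probability the static rule catches a violation.
    decay:   per-period multiplicative erosion of detection (performativity).
    """
    return penalty * detect0 * (decay ** t)

def platform_exerts_effort(effort_cost: float, penalty: float,
                           detect0: float, decay: float, t: int) -> bool:
    """The platform exerts effort only while shirking is more expensive."""
    return expected_penalty(penalty, detect0, decay, t) > effort_cost

# Even with an enormous penalty, the static rule stops working eventually:
PENALTY = 1e9
breakdown_time = next(
    t for t in range(100_000)
    if not platform_exerts_effort(1.0, PENALTY, detect0=0.9, decay=0.95, t=t)
)
print(f"static rule stops incentivizing effort at t = {breakdown_time}")
```

Under an adaptive rule, by contrast, the regulator re-fits the detection standard after harmful events, which resets the decay; the paper's result is that this only yields optimal effort if the post-event standard overshoots the optimum.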