People form judgments and make decisions based on the information that they observe. A growing portion of that information is not only provided, but carefully curated, by social media platforms. Although lawmakers largely agree that platforms should not operate without any oversight, there is little consensus on how to regulate social media. There is consensus, however, that creating a strict, global standard of "acceptable" content is untenable (e.g., in the US, it is incompatible with Section 230 of the Communications Decency Act and the First Amendment). In this work, we propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline. We provide a concrete framework for regulating and auditing a social media platform according to such a baseline. In particular, we introduce the notion of a baseline feed: the content that a user would see without filtering (e.g., on Twitter, this could be the chronological timeline). We require that the feeds a platform filters contain informational content "similar" to that of their respective baseline feeds, and we design a principled way to measure similarity. This approach is motivated by related suggestions that regulations should increase user agency. We present an auditing procedure that checks whether a platform honors this requirement. Notably, the audit needs only black-box access to a platform's filtering algorithm, and it does not access or infer private user information. We provide theoretical guarantees on the strength of the audit. We further show that requiring closeness between filtered and baseline feeds does not impose a large performance cost, nor does it create echo chambers.
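To make the proposed requirement concrete, the following is a minimal sketch of a black-box audit in the spirit of the abstract. It is an illustration, not the paper's actual similarity measure: here "informational content" is stood in for by the empirical distribution of topic labels in a feed, "similarity" by total variation distance, and the tolerance `epsilon` is a hypothetical regulatory parameter. Note that the audit only needs the two feeds as inputs, never the internals of the filtering algorithm or any private user data.

```python
from collections import Counter

def topic_distribution(feed):
    """Empirical distribution over topic labels in a feed.

    `feed` is a list of topic labels, one per item shown to the user.
    (Using topic labels as a proxy for informational content is an
    assumption of this sketch, not the paper's construction.)
    """
    counts = Counter(feed)
    n = len(feed)
    return {topic: c / n for topic, c in counts.items()}

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    topics = set(p) | set(q)
    return 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in topics)

def audit(baseline_feed, filtered_feed, epsilon):
    """Black-box check: the filtered feed passes the audit if its topic
    distribution stays within `epsilon` of the baseline feed's.

    Only the observable feeds are inspected; the filtering algorithm
    itself is never opened up.
    """
    d = total_variation(topic_distribution(baseline_feed),
                        topic_distribution(filtered_feed))
    return d <= epsilon
```

For example, a filtered feed that merely reorders the baseline's items passes for any `epsilon >= 0`, while a filtered feed that replaces every topic with a different one fails unless `epsilon >= 1`. The real framework replaces both the content representation and the distance with principled choices and supplies statistical guarantees on the audit's strength.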