Social media platforms (SMPs) leverage algorithmic filtering (AF) to select the content that constitutes a user's feed, with the aim of maximizing platform rewards. Selectively choosing the content shown on a user's feed may exert some degree of influence, minor or major, on the user's decision-making, compared to what it would have been under a natural or fair content selection. As we have witnessed over the past decade, AF can cause detrimental side effects, ranging from biasing individual decisions to shaping those of society as a whole, for example, swaying users' decisions on whether to get the COVID-19 vaccine or inducing the public to choose a particular presidential candidate. Governments' attempts to regulate the adverse effects of AF are often complicated by bureaucracy, legal affairs, and financial considerations. SMPs, for their part, seek to monitor their own algorithmic activities to avoid being fined for exceeding allowable thresholds. In this paper, we mathematically formalize this framework and use it to construct a data-driven statistical algorithm that prevents AF from deflecting users' beliefs over time, along with sample and complexity guarantees. We show that our algorithm is robust against potential adversarial users. This state-of-the-art algorithm can be used either by authorities acting as external regulators or by SMPs for self-regulation.