A common assumption in the social learning literature is that agents exchange information in an unselfish manner. In this work, we consider the scenario where a subset of agents aims to deceive the network, meaning they seek to drive the network beliefs toward the wrong hypothesis. The adversaries are unaware of the true hypothesis. However, they "blend in" by behaving similarly to the other agents, while manipulating the likelihood functions used in the belief update process to launch inferential attacks. We characterize the conditions under which the network is misled. We then show that such attacks can succeed by exhibiting strategies that the malicious agents can adopt for this purpose. We examine both the case in which the malicious agents have access to information about the network model and the case in which they do not. For the first case, we show that there always exists a way to construct fake likelihood functions such that the network is deceived regardless of the true hypothesis. For the latter case, we formulate an optimization problem and investigate the performance of the derived attack strategy by establishing conditions under which the network is deceived. We illustrate the learning performance of the network in the aforementioned adversarial setting via simulations. In a nutshell, we clarify when and how a network is deceived in the context of non-Bayesian social learning.
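To make the setting concrete, the following is a minimal simulation sketch of non-Bayesian social learning under a likelihood-manipulation attack. It assumes the standard log-linear (geometric-averaging) belief update over a small three-agent network with Gaussian likelihoods; the combination weights, the Gaussian observation model, and the naive "swap the likelihoods" attack are all illustrative assumptions, not the attack strategy derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed combination matrix: column k holds the weights agent k assigns
# to its neighbors' intermediate beliefs (doubly stochastic here).
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

# Two hypotheses: observations ~ N(means[theta], 1); theta = 0 is true.
means = np.array([0.0, 1.0])
TRUE = 0

def gaussian_lik(x, mean):
    """Likelihood of observations x under a unit-variance Gaussian."""
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2.0 * np.pi)

def run(T=200, malicious=None):
    """Log-linear social learning; `malicious` (if set) swaps its likelihoods."""
    n, H = A.shape[0], len(means)
    mu = np.full((n, H), 1.0 / H)  # uniform initial beliefs
    for _ in range(T):
        x = rng.normal(means[TRUE], 1.0, size=n)  # private observations
        L = np.stack([gaussian_lik(x, m) for m in means], axis=1)  # (n, H)
        if malicious is not None:
            # Illustrative attack: report likelihoods with hypotheses swapped.
            L[malicious] = L[malicious, ::-1]
        psi = mu * L                                 # local Bayesian update
        psi /= psi.sum(axis=1, keepdims=True)
        # Geometric averaging of neighbors' intermediate beliefs.
        log_mu = A.T @ np.log(psi)
        mu = np.exp(log_mu - log_mu.max(axis=1, keepdims=True))
        mu /= mu.sum(axis=1, keepdims=True)
    return mu

honest = run()            # all agents truthful
attacked = run(malicious=0)  # agent 0 swaps its likelihoods
```

In this symmetric example a single naive likelihood-swapper is outvoted: the average log-likelihood ratio across agents still favors the true hypothesis, so the honest agents keep learning correctly. This is consistent with the abstract's point that deception requires carefully constructed fake likelihoods (or knowledge of the network model), not just locally flipped ones.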