In federated learning (FL), benign participants collaboratively optimize a global model. However, the risk of \textit{privacy leakage} cannot be ignored in the presence of \textit{semi-honest} adversaries. Existing research has focused either on designing protection mechanisms or on inventing attack mechanisms. While the battle between defenders and attackers seems never-ending, we are concerned with one critical question: is it possible to prevent potential attacks in advance? To address this, we propose the first game-theoretic framework that considers both FL defenders and attackers in terms of their respective payoffs, which include computational costs, FL model utilities, and privacy leakage risks. We name this game the Federated Learning Security Game (FLSG), in which neither defenders nor attackers are aware of all participants' payoffs. To handle the \textit{incomplete information} inherent in this situation, we propose associating the FLSG with an \textit{oracle} that has two primary responsibilities. First, the oracle provides lower and upper bounds on the players' payoffs. Second, the oracle acts as a correlation device, privately suggesting an action to each player. With this novel framework, we analyze the optimal strategies of defenders and attackers. Furthermore, we derive and demonstrate conditions under which the attacker, as a rational decision-maker, should always follow the oracle's suggestion \textit{not to attack}.
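To make the final claim concrete, the sketch below states the standard correlated-equilibrium obedience constraint specialized to the attacker's binary action set; the notation ($u_a$, $s_{-a}$, $\sigma$, and the bounds $\underline{u}_a$, $\overline{u}_a$) is hypothetical and not fixed by the abstract itself.

% Minimal sketch under assumed notation (not the paper's formalization;
% requires amsmath/amssymb). Here u_a is the attacker's payoff, s_{-a} the
% remaining players' actions, and \sigma the oracle's (correlation
% device's) distribution over action profiles. Upon the private suggestion
% ``not attack'', a rational attacker complies iff
\begin{equation*}
\mathbb{E}_{s_{-a} \sim \sigma(\cdot \mid \text{not attack})}
  \bigl[\, u_a(\text{not attack}, s_{-a}) \,\bigr]
\;\ge\;
\mathbb{E}_{s_{-a} \sim \sigma(\cdot \mid \text{not attack})}
  \bigl[\, u_a(\text{attack}, s_{-a}) \,\bigr].
\end{equation*}
% Because players observe only the oracle's lower/upper payoff bounds, a
% sufficient robust version of this condition is
% $\underline{u}_a(\text{not attack}) \ge \overline{u}_a(\text{attack})$:
% the worst-case payoff from complying dominates the best-case payoff
% from deviating.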