Gossip Learning (GL) is a decentralized learning paradigm in which users iteratively exchange and aggregate models with a small set of neighboring peers. Recent GL approaches rely on dynamic communication graphs built and maintained using Random Peer Sampling (RPS) protocols. Thanks to graph dynamics, GL can achieve fast convergence even over extremely sparse topologies. However, the robustness of GL over dynamic graphs to Byzantine (model poisoning) attacks remains unaddressed, especially when Byzantine nodes attack the RPS protocol to scale up model poisoning. We address this issue by introducing GRANITE, a framework for robust learning over sparse, dynamic graphs in the presence of a fraction of Byzantine nodes. GRANITE relies on two key components: (i) a History-aware Byzantine-resilient Peer Sampling protocol (HaPS), which tracks previously encountered identifiers to reduce adversarial influence over time, and (ii) an Adaptive Probabilistic Threshold (APT), which leverages an estimate of Byzantine presence to set aggregation thresholds with formal guarantees. Empirical results confirm that GRANITE maintains convergence with up to 30% Byzantine nodes, improves learning speed through adaptive filtering of poisoned models, and achieves these results on graphs up to 9 times sparser than what current theory prescribes.
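To make the APT idea concrete, below is a minimal sketch of what threshold-based aggregation driven by an estimated Byzantine fraction could look like. The function name `apt_aggregate`, the argument `byz_estimate`, and the distance-based filtering rule are all illustrative assumptions; the actual APT criterion and its formal guarantees are defined in the paper body, not here.

```python
import numpy as np

def apt_aggregate(local_model, neighbor_models, byz_estimate):
    """Hypothetical sketch of an APT-style aggregation step.

    Keeps only the neighbor models closest (in L2 distance) to the
    local model, discarding a fraction derived from the estimated
    Byzantine presence, then averages the survivors with the local model.
    """
    # Number of neighbor models assumed honest under the estimate.
    keep = max(1, int(np.ceil(len(neighbor_models) * (1.0 - byz_estimate))))
    # Rank neighbors by distance to the local model (a common proxy
    # for detecting poisoned models; GRANITE's actual rule may differ).
    dists = [np.linalg.norm(m - local_model) for m in neighbor_models]
    kept = [neighbor_models[i] for i in np.argsort(dists)[:keep]]
    # Average the local model with the retained neighbor models.
    return np.mean([local_model] + kept, axis=0)

# Example: 3 honest neighbors near the local model, 1 poisoned outlier.
rng = np.random.default_rng(0)
local = np.zeros(10)
neighbors = [rng.normal(0, 0.1, 10) for _ in range(3)] + [np.full(10, 50.0)]
aggregated = apt_aggregate(local, neighbors, byz_estimate=0.3)
print(aggregated)  # stays close to zero: the outlier is filtered out
```

With `byz_estimate=0.3` and four incoming models, the sketch keeps the three closest ones, so the single poisoned outlier is discarded before averaging; a higher estimate would tighten the threshold and filter more aggressively.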