In this paper, we address two practical challenges of distributed learning in multi-agent network systems, namely personalization and resilience. Personalization is the need for heterogeneous agents to learn local models tailored to their own data and tasks while still generalizing well; at the same time, the learning process must be resilient to cyberattacks and anomalous training data to avoid disruption. Motivated by a conceptual affinity between these two requirements, we devise a distributed learning algorithm that combines distributed gradient descent with the Friedkin-Johnsen model of opinion dynamics to fulfill both. We quantify its convergence speed and the neighborhood containing the final learned models, which can be controlled by tuning the algorithm parameters to enforce more personalized or more resilient behavior. We numerically showcase the effectiveness of our algorithm on synthetic and real-world distributed learning tasks, where it achieves high global accuracy both with personalized models and in the presence of malicious agents, compared to standard strategies.
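To illustrate the idea of combining distributed gradient descent with the Friedkin-Johnsen model, the sketch below shows one plausible form of such an update on scalar models with quadratic local losses. The specific update rule, the ring topology, the losses, and the parameter values (`lam`, `eta`) are assumptions for illustration, not the paper's actual algorithm: each agent mixes its neighbors' gradient steps (distributed gradient descent) while staying anchored to its initial model (the Friedkin-Johnsen "stubbornness" term).

```python
import numpy as np

# Hypothetical sketch: FJ-style distributed gradient descent on a ring of agents.
# Each agent i holds a local quadratic loss f_i(x) = 0.5 * (x - t_i)^2 with its
# own target t_i, standing in for heterogeneous local data.

rng = np.random.default_rng(0)
n = 5                                  # number of agents
targets = rng.normal(size=n)           # minimizers of the local losses
W = np.zeros((n, n))                   # doubly stochastic ring consensus weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

lam = 0.7                              # FJ susceptibility: 1 recovers pure consensus GD
eta = 0.1                              # gradient step size
x0 = targets.copy()                    # FJ anchors: each agent's initial local model
x = x0.copy()
for _ in range(200):
    grad = x - targets                 # gradients of the local losses f_i
    # Mix neighbors' gradient steps, but stay anchored to the initial model:
    x = lam * (W @ (x - eta * grad)) + (1 - lam) * x0
```

With `lam` close to 1 the agents behave like standard distributed gradient descent and drift toward consensus; smaller `lam` keeps each model near its own anchor, which both personalizes the learned models and bounds how far a malicious neighbor can drag them, consistent with the tunable personalization/resilience trade-off described in the abstract.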