There has been a surge of interest in continual learning and federated learning, both of which are important for deploying deep neural networks in real-world scenarios. Yet little research has been done regarding the scenario where each client learns on a sequence of tasks from a private local data stream. This problem of federated continual learning poses new challenges to continual learning, such as utilizing knowledge from other clients while preventing interference from irrelevant knowledge. To resolve these issues, we propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT), which decomposes the network weights into global federated parameters and sparse task-specific parameters; each client then receives selective knowledge from other clients by taking a weighted combination of their task-specific parameters. FedWeIT minimizes interference between incompatible tasks and allows positive knowledge transfer across clients during learning. We validate \emph{FedWeIT} against existing federated learning and continual learning methods under varying degrees of task similarity across clients, and our model significantly outperforms them while also greatly reducing the communication cost.
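To make the decomposition concrete, the following is a minimal, illustrative sketch (not the authors' code) of how a client's per-task weights could be composed from a shared base parameter, a task-specific mask, its own sparse task-adaptive parameter, and an attention-weighted combination of other clients' task-adaptive parameters; all names (B_c, m_t, A_local_t, alpha_t, A_others) are assumptions made here for illustration.

```python
# Illustrative sketch of a FedWeIT-style weight composition for one layer.
# All symbol names are assumptions, not the paper's exact notation.
import numpy as np

def compose_task_weights(B_c, m_t, A_local_t, alpha_t, A_others):
    """Compose client c's weights for task t.

    B_c        : dense base parameter shared with the server (global federated part)
    m_t        : task-specific mask selecting the relevant portion of B_c
    A_local_t  : this client's sparse task-specific (task-adaptive) parameter
    alpha_t    : attention weights over other clients' task-adaptive parameters
    A_others   : list of other clients' task-adaptive parameters (same shape as B_c)
    """
    theta = B_c * m_t + A_local_t
    for a, A_j in zip(alpha_t, A_others):
        theta += a * A_j  # selective, weighted transfer from other clients
    return theta

# Toy usage with random tensors standing in for learned parameters.
rng = np.random.default_rng(0)
B_c = rng.normal(size=(4, 4))
m_t = rng.random((4, 4))
A_local_t = rng.normal(size=(4, 4)) * (rng.random((4, 4)) > 0.8)  # sparse
A_others = [rng.normal(size=(4, 4)) * (rng.random((4, 4)) > 0.8) for _ in range(2)]
alpha_t = [0.3, 0.1]
theta_t = compose_task_weights(B_c, m_t, A_local_t, alpha_t, A_others)
```

In this sketch, only the base parameter would be communicated densely, while the task-adaptive parameters are sparse, which is consistent with the reduced communication cost claimed above.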