There has been a surge of interest in continual learning and federated learning, both of which are important for deploying deep neural networks in real-world scenarios. Yet little research has been done regarding the scenario where each client learns on a sequence of tasks from a private local data stream. This problem of federated continual learning poses new challenges to continual learning, such as utilizing knowledge from other clients while preventing interference from irrelevant knowledge. To resolve these issues, we propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT), which decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients by taking a weighted combination of their task-specific parameters. FedWeIT minimizes interference between incompatible tasks and also allows positive knowledge transfer across clients during learning. We validate our FedWeIT against existing federated learning and continual learning methods under varying degrees of task similarity across clients, and our model significantly outperforms them with a large reduction in communication cost. Code is available at https://github.com/wyjeong/FedWeIT.
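The decomposition described above can be illustrated with a minimal sketch. The NumPy snippet below is only an assumption-laden illustration of the idea, not the authors' implementation: the names `compose_client_weights`, `global_base`, `sparse_mask`, `task_adaptive`, `foreign_adaptives`, and `alphas` are hypothetical, and the layer shapes and sparsity levels are arbitrary.

```python
# Minimal sketch (assumed, not the paper's code) of composing a client's
# effective weights from a shared global base, its own sparse task-adaptive
# parameters, and a weighted combination of other clients' task-adaptive
# parameters.
import numpy as np

def compose_client_weights(global_base, sparse_mask, task_adaptive,
                           foreign_adaptives, alphas):
    """Compose one layer's effective weights for the current task.

    global_base       : dense parameters aggregated and shared via the server
    sparse_mask       : this client's sparse mask applied to the base
    task_adaptive     : this client's sparse task-specific parameters
    foreign_adaptives : list of task-specific parameters from other clients
    alphas            : learned attention weights, one per foreign parameter
    """
    theta = global_base * sparse_mask + task_adaptive
    # Selective inter-client transfer: add a weighted combination of the
    # other clients' task-specific parameters.
    for alpha, adaptive in zip(alphas, foreign_adaptives):
        theta = theta + alpha * adaptive
    return theta

# Toy usage: one layer, two other clients' task-adaptive parameters.
rng = np.random.default_rng(0)
shape = (16, 8)
base = rng.normal(size=shape)
mask = (rng.random(shape) < 0.3).astype(float)        # sparse mask
own_adaptive = rng.normal(size=shape) * mask           # kept sparse
others = [rng.normal(size=shape) * (rng.random(shape) < 0.1) for _ in range(2)]
alphas = np.array([0.6, 0.1])                          # learned per-source weights

weights = compose_client_weights(base, mask, own_adaptive, others, alphas)
```

In this reading, the sparse task-specific parameters and the learned per-source weights are what keep irrelevant knowledge from interfering while still allowing useful transfer, since a near-zero weight effectively drops an incompatible client's contribution.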