This paper focuses on an under-explored yet important problem: Federated Class-Continual Learning (FCCL), where new classes are dynamically added in federated learning. Existing FCCL works suffer from various limitations, such as requiring additional datasets or storing private data from previous tasks. In response, we first demonstrate that non-IID data exacerbates the catastrophic forgetting issue in FL. We then propose a novel method called TARGET (federa\textbf{T}ed cl\textbf{A}ss-continual lea\textbf{R}nin\textbf{G} via \textbf{E}xemplar-free dis\textbf{T}illation), which alleviates catastrophic forgetting in FCCL while preserving client data privacy. Our proposed method leverages the previously trained global model to transfer knowledge of old tasks to the current task at the model level. Moreover, a generator is trained to produce synthetic data that simulates the global data distribution on each client at the data level. Compared to previous FCCL methods, TARGET requires neither additional datasets nor the storage of real data from previous tasks, which makes it ideal for data-sensitive scenarios.
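For intuition, the model-level knowledge transfer described above can be read as an exemplar-free distillation objective. The following is only an illustrative sketch under that reading; the symbols $\mathcal{L}_{\mathrm{CE}}$, $\mathcal{L}_{\mathrm{KD}}$, $\lambda$, and $\tilde{x}$ are our own notation and are not specified in the abstract:
\[
\mathcal{L}_{\mathrm{client}} \;=\; \mathcal{L}_{\mathrm{CE}}\big(f_{\theta}(x),\, y\big) \;+\; \lambda\, \mathcal{L}_{\mathrm{KD}}\big(f_{\theta}(\tilde{x}),\, f_{\theta_{\mathrm{old}}}(\tilde{x})\big),
\]
where $(x, y)$ denotes local data of the current task, $\tilde{x}$ denotes synthetic samples produced by the trained generator, and $f_{\theta_{\mathrm{old}}}$ is the frozen global model from the previous task whose predictions on $\tilde{x}$ are distilled into the current model $f_{\theta}$.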