The goal of continual learning (CL) is to retain knowledge across tasks, but this conflicts with the strict privacy requirements of sensitive training data, which forbid storing or memorising individual samples. This work explores the intersection of CL and differential privacy (DP). We advance the theoretical understanding of, and introduce methods for, combining CL and DP. We formulate and clarify the theory of DP CL, focusing on how privacy loss composes over tasks. We introduce several variants for choosing a classifier's output label space, show that deriving the label space directly from the task data is not DP, and offer a DP alternative. To address the trade-offs between privacy and utility in a CL setting, we propose a method that combines pre-trained models with DP prototype classifiers and parameter-efficient adapters trained under DP. We also demonstrate the effectiveness of our methods for varying degrees of domain shift, for blurry tasks, and with different output label settings.
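As background for the composition view (a standard DP fact, not a result specific to this paper): if task $t$ is learned with an $(\varepsilon_t, \delta_t)$-DP mechanism $\mathcal{M}_t$, basic sequential composition bounds the privacy loss of the whole task sequence,

$$(\mathcal{M}_1, \dots, \mathcal{M}_T) \text{ is } \Big(\textstyle\sum_{t=1}^{T} \varepsilon_t,\; \sum_{t=1}^{T} \delta_t\Big)\text{-DP},$$

with tighter bounds available via advanced or Rényi-DP composition. The label-space claim also admits a standard intuition (the paper's own argument may differ): if the output label set is taken to be exactly the labels present in the task data, two neighbouring datasets differing in one sample with an otherwise-unseen label produce different label sets with probability one, a distinguishing event that no finite $\varepsilon$ can bound.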
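To make the prototype component concrete, below is a minimal NumPy sketch of one standard way to build a DP prototype classifier: clip each embedding's L2 norm to bound per-sample sensitivity, release noisy per-class sums and counts via the Gaussian mechanism, and classify by nearest prototype. The function names, the clipping scheme, and the noise calibration here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def dp_class_prototypes(embeddings, labels, num_classes,
                        clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Per-class mean embeddings released via the Gaussian mechanism.

    Clipping each embedding's L2 norm to `clip_norm` bounds how much a
    single sample can change any class sum; Gaussian noise with scale
    `noise_multiplier * clip_norm` then privatises the sums, and a noisy
    count (sensitivity 1) privatises the normalisation.
    """
    rng = np.random.default_rng() if rng is None else rng
    dim = embeddings.shape[1]
    protos = np.zeros((num_classes, dim))
    for c in range(num_classes):
        x = embeddings[labels == c]
        # Scale down any embedding whose norm exceeds clip_norm.
        scale = np.maximum(
            np.linalg.norm(x, axis=1, keepdims=True) / clip_norm, 1.0)
        x = x / scale
        noisy_sum = x.sum(axis=0) + rng.normal(
            scale=noise_multiplier * clip_norm, size=dim)
        noisy_count = max(len(x) + rng.normal(scale=noise_multiplier), 1.0)
        protos[c] = noisy_sum / noisy_count
    return protos

def predict(protos, query):
    """Nearest-prototype labels for a batch of query embeddings."""
    dists = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```

In a CL setting, one such release per task would consume part of the per-task budget, with the total privacy loss accounted for across tasks via a composition bound such as the one above.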