Continual learning under adversarial conditions remains an open problem, as existing methods often sacrifice robustness, scalability, or both. We propose a novel framework that integrates Interval Bound Propagation (IBP) with a hypernetwork-based architecture to enable certifiably robust continual learning across sequential tasks. Our method, SHIELD, generates task-specific model parameters via a shared hypernetwork conditioned solely on compact task embeddings, eliminating the need for replay buffers or full model copies and enabling efficient learning over time. To further enhance robustness, we introduce Interval MixUp, a novel training strategy that blends virtual examples represented as $\ell_{\infty}$ balls centered at MixUp points. By leveraging interval arithmetic, this technique guarantees certified robustness while mitigating the wrapping effect, yielding smoother decision boundaries. We evaluate SHIELD under strong white-box adversarial attacks, including PGD and AutoAttack, across multiple benchmarks. It consistently outperforms existing robust continual learning methods, achieving state-of-the-art average accuracy while remaining scalable and certifiable. These results mark a significant step toward practical, theoretically grounded continual learning in adversarial settings.
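To make the hypernetwork idea concrete, the sketch below shows a shared generator that emits the weights of a single target layer from a learnable per-task embedding, so that adding a task costs only one compact embedding rather than a model copy. All names and sizes (`ToyHypernetwork`, `emb_dim`, `add_task`) are illustrative assumptions, not the SHIELD architecture.

```python
# Minimal hypernetwork sketch: a shared generator maps a compact, learnable
# task embedding to the weights of one target linear layer. Only the small
# embeddings grow as tasks are added; the generator itself is shared.
# Names and sizes are illustrative, not the SHIELD implementation.
import torch
import torch.nn as nn

class ToyHypernetwork(nn.Module):
    def __init__(self, emb_dim: int, in_dim: int, out_dim: int):
        super().__init__()
        self.emb_dim, self.in_dim, self.out_dim = emb_dim, in_dim, out_dim
        self.task_embeddings = nn.ParameterDict()  # one compact embedding per task
        self.generator = nn.Linear(emb_dim, out_dim * in_dim + out_dim)

    def add_task(self, task_id: str) -> None:
        # Registering a new task costs only emb_dim extra parameters.
        self.task_embeddings[task_id] = nn.Parameter(0.01 * torch.randn(self.emb_dim))

    def forward(self, task_id: str, x: torch.Tensor) -> torch.Tensor:
        flat = self.generator(self.task_embeddings[task_id])
        w = flat[: self.out_dim * self.in_dim].view(self.out_dim, self.in_dim)
        b = flat[self.out_dim * self.in_dim :]
        return x @ w.t() + b  # task-specific layer, generated on the fly

# Usage: three tasks share one generator; no replay buffer, no model copies.
hyper = ToyHypernetwork(emb_dim=8, in_dim=32, out_dim=10)
for t in ["task0", "task1", "task2"]:
    hyper.add_task(t)
logits = hyper("task1", torch.randn(4, 32))
```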
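For intuition on the Interval MixUp construction, the following is a worked form under the standard IBP affine recurrence; the notation ($\tilde{x}$, $\epsilon$, $\lambda$, $\mu$, $r$) is introduced here for illustration and need not match the paper's. Given inputs $x_i, x_j$ and a MixUp coefficient $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$, the virtual example is the $\ell_{\infty}$ ball of radius $\epsilon$ around the mixed point:
$$
\tilde{x} = \lambda x_i + (1-\lambda)\,x_j, \qquad \big[\underline{z}^{(0)},\, \overline{z}^{(0)}\big] = \big[\tilde{x} - \epsilon\mathbf{1},\; \tilde{x} + \epsilon\mathbf{1}\big].
$$
Writing each box by its center $\mu = (\overline{z} + \underline{z})/2$ and radius $r = (\overline{z} - \underline{z})/2$, an affine layer $z \mapsto Wz + b$ propagates it as
$$
\mu' = W\mu + b, \qquad r' = |W|\,r, \qquad \big[\underline{z}',\, \overline{z}'\big] = \big[\mu' - r',\; \mu' + r'\big],
$$
with monotone activations such as ReLU applied elementwise to both endpoints. Training on the worst-case logits of the final box then certifies every point in the ball, which is what lets the blended virtual examples carry a robustness guarantee rather than only a smoothing effect.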