Recent rehearsal-free continual learning (CL) methods guided by prompts achieve strong performance on vision tasks with non-stationary data but remain resource-intensive, hindering real-world edge deployment. We introduce resource-efficient prompting (REP), which improves the computational and memory efficiency of prompt-based rehearsal-free CL methods while minimizing accuracy trade-offs. Our approach employs swift prompt selection to refine input data with a carefully provisioned model, and introduces adaptive token merging (AToM) and adaptive layer dropping (ALD) for efficient prompt updates. AToM and ALD selectively skip input tokens and model layers while preserving task-specific features when learning new tasks. Extensive experiments on multiple image classification datasets demonstrate REP's superior resource efficiency over state-of-the-art rehearsal-free CL methods.
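To make the two efficiency mechanisms concrete, the sketch below illustrates the general idea of per-layer token merging and layer skipping in a ViT-style backbone. It is a minimal sketch, not the paper's method: REP's AToM and ALD are adaptive, whereas this example uses a fixed keep ratio and a simple stochastic layer skip as stand-ins. All names here (`EfficientBackbone`, `merge_tokens`, `ToyEncoderLayer`, `keep_ratio`, `layer_keep_prob`) are hypothetical and introduced only for illustration.

```python
import torch
import torch.nn as nn

class ToyEncoderLayer(nn.Module):
    """A minimal transformer encoder layer (stand-in for a ViT block)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x

def merge_tokens(x: torch.Tensor, keep_ratio: float = 0.75) -> torch.Tensor:
    """Token-merging sketch: average each dropped token into its most
    similar kept token, shrinking the sequence while retaining content.
    (A crude positional split; real merging schemes match tokens adaptively.)"""
    b, n, d = x.shape
    k = max(1, int(n * keep_ratio))
    if k >= n:
        return x
    kept, dropped = x[:, :k], x[:, k:]
    # Cosine similarity between each dropped token and every kept token.
    sim = nn.functional.normalize(dropped, dim=-1) @ \
          nn.functional.normalize(kept, dim=-1).transpose(1, 2)
    idx = sim.argmax(dim=-1)  # (b, n-k): best kept partner per dropped token
    merged = kept.clone()
    counts = torch.ones(b, k, 1, device=x.device)
    merged.scatter_add_(1, idx.unsqueeze(-1).expand(-1, -1, d), dropped)
    counts.scatter_add_(1, idx.unsqueeze(-1),
                        torch.ones_like(idx, dtype=x.dtype).unsqueeze(-1))
    return merged / counts  # mean of each kept token and its merged partners

class EfficientBackbone(nn.Module):
    """Backbone that merges tokens and skips layers during prompt updates."""
    def __init__(self, dim=64, depth=6, keep_ratio=0.75, layer_keep_prob=0.7):
        super().__init__()
        self.layers = nn.ModuleList(ToyEncoderLayer(dim) for _ in range(depth))
        self.keep_ratio = keep_ratio
        self.layer_keep_prob = layer_keep_prob

    def forward(self, x):
        for layer in self.layers:
            # Layer dropping: randomly skip blocks while training on a new
            # task (REP's ALD decides this adaptively rather than at random).
            if self.training and torch.rand(()) > self.layer_keep_prob:
                continue
            x = layer(x)
            # Token merging: shrink the sequence after each surviving layer.
            x = merge_tokens(x, self.keep_ratio)
        return x

model = EfficientBackbone()
tokens = torch.randn(2, 197, 64)  # batch of ViT-style token sequences
out = model(tokens)
print(out.shape)  # sequence length reduced by per-layer merging
```

Both mechanisms cut compute on the same forward pass that updates the prompts: merging reduces the quadratic attention cost by shortening the token sequence, while layer skipping removes entire blocks from the update path, which is why the two compose naturally in this sketch.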