Recent studies have demonstrated that incorporating trainable prompts into pretrained models enables effective incremental learning. However, the application of prompts in incremental object detection (IOD) remains underexplored. Existing prompt-pool-based approaches assume disjoint class sets across incremental tasks, which makes them unsuitable for IOD because they overlook the co-occurrence phenomenon inherent in detection images. In co-occurring scenarios, unlabeled objects from previous tasks may appear in current-task images, causing confusion in the prompt pool. In this paper, we argue that prompt structures should exhibit adaptive consolidation properties across tasks, with constrained updates to prevent catastrophic forgetting. Motivated by this, we introduce Parameterized Prompts for Incremental Object Detection (P$^2$IOD). Leveraging the global evolution properties of neural networks, P$^2$IOD employs networks as parameterized prompts to adaptively consolidate knowledge across tasks. To constrain prompt structure updates, P$^2$IOD further employs a parameterized prompt fusion strategy. Extensive experiments on the PASCAL VOC2007 and MS COCO datasets demonstrate P$^2$IOD's effectiveness in IOD: it achieves state-of-the-art performance among existing baselines.