The objective of this work is to explore how to effectively and efficiently adapt pre-trained visual foundation models to downstream tasks, e.g., image semantic segmentation. Conventional methods typically fine-tune the entire network for each specific dataset, which is burdensome because a full copy of the massive network parameters must be stored per task. Several recent works instead insert a few extra trainable parameters into the frozen network to learn visual prompts for parameter-efficient tuning. However, these methods show poor generality, as they are designed specifically for Transformers, and, relying on limited information, they exhibit a poor capacity to learn effective prompts. To alleviate these issues, we propose a novel Inter-Stage Prompt-Matched Framework for generic and effective visual prompt tuning. Specifically, to ensure generality, we divide the pre-trained backbone with frozen parameters into multiple stages and perform prompt learning between the stages, which makes the proposed scheme applicable to various CNN and Transformer architectures. For effective tuning, a lightweight Semantic-aware Prompt Matcher (SPM) is designed to progressively learn reasonable prompts with a recurrent mechanism, guided by the rich information of interim semantic maps. Working as a deep matched filter for representation learning, the SPM transforms the output of the previous stage into a desirable input for the next stage, thereby better matching and stimulating the pre-trained knowledge. Finally, we apply the proposed method to various semantic segmentation tasks. Extensive experiments on five benchmarks show that the proposed scheme achieves a promising trade-off between parameter efficiency and performance.
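To make the inter-stage prompt-matching idea concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the class names (SemanticAwarePromptMatcher, PromptMatchedSegmenter) and layer choices are hypothetical, and the SPM's recurrent refinement is omitted for brevity. It only illustrates how a frozen backbone can be split into stages, with a small trainable module between stages that uses an interim semantic prediction to reshape each stage's output before it enters the next stage.

```python
# Hypothetical sketch of inter-stage prompt matching with a frozen backbone.
# Names and layer configurations are illustrative, not from the paper.
import torch
import torch.nn as nn


class SemanticAwarePromptMatcher(nn.Module):
    """Lightweight trainable module inserted between frozen backbone stages."""

    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        # Predicts an interim semantic map that guides prompt generation.
        self.aux_head = nn.Conv2d(channels, num_classes, kernel_size=1)
        # Fuses the stage feature with its semantic map into a prompt.
        self.matcher = nn.Conv2d(channels + num_classes, channels,
                                 kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        sem = self.aux_head(feat)                          # interim semantic map
        prompt = self.matcher(torch.cat([feat, sem], dim=1))
        return feat + prompt, sem                          # prompted feature for the next stage


class PromptMatchedSegmenter(nn.Module):
    """Frozen stage-wise backbone with trainable SPMs between stages."""

    def __init__(self, stages: nn.ModuleList, channels: list[int], num_classes: int):
        super().__init__()
        self.stages = stages
        for p in self.stages.parameters():
            p.requires_grad = False                        # backbone stays frozen
        self.spms = nn.ModuleList(
            [SemanticAwarePromptMatcher(c, num_classes) for c in channels])
        self.head = nn.Conv2d(channels[-1], num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for stage, spm in zip(self.stages, self.spms):
            x = stage(x)                                   # frozen, pre-trained stage
            x, _ = spm(x)                                  # match output to the next stage's input
        return self.head(x)                                # final segmentation logits
```

Only the SPMs and the segmentation head are updated during tuning, so the per-task storage cost is limited to these lightweight modules rather than a full copy of the backbone.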