Recent advances in large-scale vision and language models have led to significant progress in zero-shot learning tasks. Methods such as CoOp and CoCoOp have shown that replacing handcrafted prompts with learnable vectors, an approach known as prompt learning, can improve performance. However, these models often struggle to generalize to entirely unseen categories. While traditional zero-shot learning techniques benefit from various data augmentation strategies, prompt learning has focused primarily on text-based modifications, leaving the potential of image-based augmentation largely unexplored. In this work, we investigate how image-level augmentations, particularly those that introduce attribute-specific variations, can support and enhance prompt learning. Our analysis examines the interaction between these augmentations and soft-prompt frameworks, revealing their potential to improve generalization. We also identify a limitation of existing methods such as CoCoOp: they provide no explicit guidance for learning prompts that focus on semantically meaningful visual features. To address this, we propose AAPL (Adding Attributes to Prompt Learning), a novel method that introduces adversarial token embeddings to decouple superficial visual variations introduced by augmentation from class-relevant semantic representations. This decoupling enables the learned prompts to concentrate on visually discriminative features that align with the target categories. We conduct comprehensive experiments on eleven benchmark datasets, and AAPL consistently outperforms existing methods in few-shot, zero-shot, cross-dataset, and domain generalization settings. Our source code is publicly available at https://github.com/Gahyeonkim09/AAPL