The domain shift between the source and target domains is the main challenge in Cross-Domain Few-Shot Learning (CD-FSL). However, the target domain is entirely unknown during training on the source domain, which leaves target tasks without direct guidance. We observe that, since there are similar backgrounds in target domains, self-labeled samples can be applied as prior tasks to transfer knowledge onto target tasks. To this end, we propose a task-expansion-decomposition framework for CD-FSL, called the Self-Taught (ST) approach, which alleviates the problem of missing target guidance by constructing task-oriented metric spaces. Specifically, Weakly Supervised Object Localization (WSOL) and self-supervised techniques are employed to enrich task-oriented samples by exchanging and rotating discriminative regions, which generates a more abundant task set. These expanded tasks are then decomposed into several tasks that perform few-shot recognition and rotation classification, which helps transfer source knowledge onto target tasks and focuses the model on discriminative regions. We conduct extensive experiments under the cross-domain setting on 8 target domains: CUB, Cars, Places, Plantae, CropDiseases, EuroSAT, ISIC, and ChestX. Experimental results demonstrate that the proposed ST approach is applicable to various metric-based models and provides promising improvements in CD-FSL.
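To make the rotation-based task expansion concrete, the following is a minimal illustrative sketch (not the authors' code) of how a batch of task samples could be augmented with quarter-turn rotations and paired with rotation labels for the auxiliary rotation-classification objective. The function name and tensor shapes are assumptions for illustration only; the WSOL-based region exchange is omitted.

```python
# Minimal sketch, assuming PyTorch tensors of shape (N, C, H, W).
# Not the authors' implementation; names are hypothetical.
import torch


def expand_task_with_rotations(images: torch.Tensor):
    """Rotate each sample by 0/90/180/270 degrees and attach rotation labels.

    images: (N, C, H, W) task samples (e.g. support or query set).
    Returns expanded images of shape (4N, C, H, W) and rotation labels (4N,).
    """
    rotated, rot_labels = [], []
    for k in range(4):  # k quarter-turns
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        rot_labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated, dim=0), torch.cat(rot_labels, dim=0)


# Usage idea: the expanded set is then decomposed into two objectives,
# few-shot classification on the original class labels and rotation
# classification on rot_labels, optimized jointly with a metric-based model.
```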