A recent study finds that existing few-shot learning methods, trained on the source domain, fail to generalize to a novel target domain when a domain gap is present. This motivates the task of Cross-Domain Few-Shot Learning (CD-FSL). In this paper, we observe that the labeled target data in CD-FSL has not been leveraged in any way to help the learning process. Thus, we advocate utilizing a few labeled target examples to guide model learning. Technically, we propose a novel meta-FDMixup network and tackle this problem from two aspects. Firstly, to utilize the source data and the newly introduced target data, which belong to two different class sets, a mixup module is re-proposed and integrated into the meta-learning mechanism. Secondly, a novel disentangle module, together with a domain classifier, is proposed to extract disentangled domain-irrelevant and domain-specific features. These two modules together enable our model to narrow the domain gap and thus generalize well to the target datasets. Additionally, a detailed feasibility and pilot study is conducted to build an intuitive understanding of CD-FSL under our new setting. Experimental results show the effectiveness of our new setting and the proposed method. Code and models are available at https://github.com/lovelyqian/Meta-FDMixup.
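To make the mixup step concrete, here is a minimal sketch of how source and auxiliary target images could be blended with a Beta-sampled mixing ratio, as in standard mixup. This is an illustrative assumption, not the paper's exact implementation; the function name `fd_mixup` and the `alpha` parameter are hypothetical.

```python
import numpy as np

def fd_mixup(x_source, x_target, alpha=1.0, rng=None):
    """Sketch of a mixup step blending a source batch with a target batch.

    Hypothetical helper (not the paper's API): samples a mixing ratio
    lam ~ Beta(alpha, alpha) and returns the convex combination of the
    two inputs along with lam, which the loss could reuse to weight labels.
    """
    rng = rng or np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))          # mixing ratio in [0, 1]
    x_mixed = lam * x_source + (1.0 - lam) * x_target
    return x_mixed, lam
```

In a meta-learning episode, such a mixed batch would be fed to the feature extractor so that the disentangle module can separate domain-irrelevant from domain-specific features.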