Given sufficient training data on the source domain, cross-domain few-shot learning (CD-FSL) aims at recognizing new classes with a small number of labeled examples on the target domain. The key to addressing CD-FSL is to narrow the domain gap and transfer the knowledge of a network trained on the source domain to the target domain. To facilitate this knowledge transfer, this paper introduces an intermediate domain generated by mixing images from the source and target domains. Specifically, to generate the optimal intermediate domain for different target data, we propose a novel target guided dynamic mixup (TGDM) framework that leverages the target data to guide the generation of mixed images via dynamic mixup. The proposed TGDM framework contains a Mixup-3T network for learning classifiers and a dynamic ratio generation network (DRGN) for learning the optimal mix ratio. To better transfer the knowledge, the Mixup-3T network contains three branches with shared parameters that perform classification in the source, intermediate, and target domains. To generate the optimal intermediate domain, the DRGN learns to produce an optimal mix ratio according to the performance on auxiliary target data. The whole TGDM framework is then trained via bi-level meta-learning so that TGDM can rectify itself to achieve optimal performance on the target data. Extensive experimental results on several benchmark datasets verify the effectiveness of our method.
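To make the mixing mechanism concrete, below is a minimal sketch of the dynamic mixup step described above, assuming a PyTorch-style implementation. The class and function names (DynamicRatioGenerator, mix_images), the network architecture, and the feature dimension are illustrative assumptions for exposition, not the authors' released code; the bi-level meta-learning loop that updates the ratio generator from auxiliary target data is omitted.

```python
import torch
import torch.nn as nn

class DynamicRatioGenerator(nn.Module):
    """Hypothetical stand-in for the DRGN: predicts a mix ratio in (0, 1)
    from pooled target-domain features."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 1),
            nn.Sigmoid(),  # mix ratio lambda in (0, 1)
        )

    def forward(self, target_feats: torch.Tensor) -> torch.Tensor:
        # Pool over the batch so a single ratio is produced per episode.
        return self.head(target_feats.mean(dim=0, keepdim=True))


def mix_images(src: torch.Tensor, tgt: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
    """Build intermediate-domain images as a convex combination of
    source and target images (input-level mixup)."""
    return lam * src + (1.0 - lam) * tgt


# Usage sketch: generate intermediate-domain images for one episode.
drgn = DynamicRatioGenerator(feat_dim=512)
src_imgs = torch.randn(8, 3, 84, 84)   # source-domain batch
tgt_imgs = torch.randn(8, 3, 84, 84)   # target-domain batch
tgt_feats = torch.randn(8, 512)        # target features from a backbone
lam = drgn(tgt_feats)                  # dynamic mix ratio guided by target data
mixed = mix_images(src_imgs, tgt_imgs, lam.view(1, 1, 1, 1))
```

In the full framework, the mixed batch would feed the intermediate-domain branch of the Mixup-3T network, and the ratio generator would be updated via the outer loop of the bi-level optimization according to performance on auxiliary target data.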