In cross-domain few-shot learning, the core issue is that a model trained on source tasks from source domains cannot generalize well to target tasks from the target domain, especially when the domain shift is large. Motivated by the observation that the domain shift between training tasks and target tasks is usually reflected in their style variation, we propose Task Augmented Meta-Learning (TAML), which performs style transfer-based task augmentation to improve domain generalization ability. First, Multi-Task Interpolation (MTI) is introduced to fuse features from different tasks with different styles, which makes more diverse styles available. Furthermore, a novel task augmentation strategy called Multi-Task Style Transfer (MTST) is proposed to perform style transfer on existing tasks, so that the model learns discriminative, style-independent features. Finally, we introduce a Feature Modulation (FM) module that injects random styles to further improve the generalization of our model. The proposed TAML increases the style diversity of training tasks and helps train a model with better domain generalization ability. Its effectiveness is demonstrated via theoretical analysis and thorough experiments on two popular cross-domain few-shot learning benchmarks.
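The abstract does not specify how the style operations are implemented. Below is a minimal sketch of the general technique the three modules build on, assuming the common convention (as in AdaIN) that a task's "style" is captured by the per-channel mean and standard deviation of its feature maps; all function names, tensor shapes, and the random-style range are illustrative assumptions, not the authors' implementation.

```python
import torch

def channel_stats(feat: torch.Tensor, eps: float = 1e-5):
    """Per-channel mean/std of a (N, C, H, W) feature map, used as its 'style'."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = (feat.var(dim=(2, 3), keepdim=True) + eps).sqrt()
    return mu, sigma

def style_transfer(content_feat: torch.Tensor, style_feat: torch.Tensor):
    """MTST-like restyling: re-normalize one task's features with
    another task's channel statistics, keeping content but swapping style."""
    mu_c, sigma_c = channel_stats(content_feat)
    mu_s, sigma_s = channel_stats(style_feat)
    return sigma_s * (content_feat - mu_c) / sigma_c + mu_s

def interpolate_styles(f_a: torch.Tensor, f_b: torch.Tensor, lam: float):
    """MTI-like fusion: mix the channel statistics of two tasks to
    synthesize an intermediate style not seen in either task."""
    mu_a, sigma_a = channel_stats(f_a)
    mu_b, sigma_b = channel_stats(f_b)
    mu = lam * mu_a + (1 - lam) * mu_b
    sigma = lam * sigma_a + (1 - lam) * sigma_b
    return sigma * (f_a - mu_a) / sigma_a + mu

def random_modulation(feat: torch.Tensor, scale: float = 0.1):
    """FM-like perturbation: apply randomly sampled channel-wise
    scale/shift to inject styles absent from the training tasks."""
    n, c = feat.shape[:2]
    gamma = 1 + scale * torch.randn(n, c, 1, 1)
    beta = scale * torch.randn(n, c, 1, 1)
    mu, sigma = channel_stats(feat)
    return gamma * sigma * (feat - mu) / sigma + mu + beta

# Usage: augment a training task's features with another task's style.
f_task_a = torch.randn(8, 64, 10, 10)  # features of task A's images
f_task_b = torch.randn(8, 64, 10, 10)  # features of task B's images
restyled = style_transfer(f_task_a, f_task_b)
mixed = interpolate_styles(f_task_a, f_task_b, lam=0.6)
perturbed = random_modulation(f_task_a)
```

Under these assumptions, all three operations act only on feature statistics, so the augmented tasks keep their labels while exposing the meta-learner to a wider range of styles.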