Few-shot classification aims to recognize unseen classes with only a few labeled samples from each class. Many meta-learning models for few-shot classification elaborately design various task-shared inductive biases (meta-knowledge) to solve such tasks, and achieve impressive performance. However, when there is a domain shift between the training tasks and the test tasks, the obtained inductive bias fails to generalize across domains, which degrades the performance of the meta-learning models. In this work, we aim to improve the robustness of the inductive bias through task augmentation. Concretely, we consider the worst-case problem around the source task distribution, and propose an adversarial task augmentation method that can generate inductive bias-adaptive 'challenging' tasks. Our method can be used as a simple plug-and-play module for various meta-learning models, and improves their cross-domain generalization capability. We conduct extensive experiments under the cross-domain setting, using nine few-shot classification datasets: mini-ImageNet, CUB, Cars, Places, Plantae, CropDiseases, EuroSAT, ISIC and ChestX. Experimental results show that our method can effectively improve the few-shot classification performance of meta-learning models under domain shift, and outperforms existing works.
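The core idea of the worst-case formulation is to perturb task data in the direction that maximizes the meta-learner's loss, so the model is trained on 'challenging' tasks near the source distribution. The following is a minimal sketch of this idea using a single FGSM-style gradient-ascent step on task inputs against a simple linear classifier; it is an illustration of the principle, not the paper's implementation (function names such as `adversarial_task` and all hyperparameters are hypothetical).

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over class logits
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(x, y, W):
    # mean cross-entropy loss of a linear classifier with weights W
    p = softmax(x @ W)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def adversarial_task(x, y, W, eps=0.1):
    """Generate a 'challenging' version of a task by one gradient-ascent
    step on the inputs (illustrative; the actual method may use iterative
    updates in feature space)."""
    p = softmax(x @ W)
    onehot = np.eye(W.shape[1])[y]
    # analytic gradient of mean cross-entropy w.r.t. the inputs
    grad_x = (p - onehot) @ W.T / len(y)
    # FGSM-style sign step: move inputs toward higher loss
    return x + eps * np.sign(grad_x)

# toy 2-class task: two samples, identity weight matrix
W = np.eye(2)
x = np.array([[1.0, 0.5], [0.2, 1.0]])
y = np.array([0, 1])

x_adv = adversarial_task(x, y, W)
# the perturbed task is harder: its loss exceeds the original task's loss
```

In the full method, the meta-learner would then be trained on such perturbed tasks, alternating loss maximization over tasks with loss minimization over model parameters.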