State-of-the-art (SOTA) few-shot learning (FSL) methods suffer a significant performance drop in the presence of domain differences between source and target datasets. Strong discrimination ability on the source dataset does not necessarily translate to high classification accuracy on the target dataset. In this work, we address this cross-domain few-shot learning (CDFSL) problem by boosting the generalization capability of the model. Specifically, we teach the model to capture broader variations of the feature distributions with a novel noise-enhanced supervised autoencoder (NSAE). NSAE trains the model by jointly reconstructing inputs and predicting the labels of both the inputs and their reconstructed pairs. Theoretical analysis based on intra-class correlation (ICC) shows that the feature embeddings learned by NSAE have stronger discrimination and generalization abilities in the target domain. We further exploit the NSAE structure to propose a two-step fine-tuning procedure that achieves better adaptation and improves classification performance in the target domain. Extensive experiments and ablation studies demonstrate the effectiveness of the proposed method. Experimental results show that our proposed method consistently outperforms SOTA methods under various conditions.
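The joint objective described above (reconstruct inputs, then classify both the inputs and their reconstructions) can be sketched numerically. This is a minimal illustration only, assuming a linear encoder/decoder/classifier, additive Gaussian input noise, and an equal-weight combination of the loss terms; all weight names, shapes, and hyperparameters here are hypothetical and are not the paper's actual architecture or loss weighting.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    # Mean negative log-likelihood of the true labels.
    return float(-np.log(p[np.arange(len(y)), y] + 1e-12).mean())

# Toy linear encoder, decoder, and classifier (illustrative shapes).
d, h, c, n = 8, 4, 3, 16          # input dim, latent dim, classes, batch size
W_enc = rng.normal(size=(d, h)) * 0.1
W_dec = rng.normal(size=(h, d)) * 0.1
W_cls = rng.normal(size=(h, c)) * 0.1

def nsae_loss(x, y, noise_std=0.1, lam=1.0):
    # "Noise-enhanced": encode a perturbed copy of the input.
    x_noisy = x + rng.normal(scale=noise_std, size=x.shape)
    z = x_noisy @ W_enc            # latent features of the input
    x_rec = z @ W_dec              # reconstruction of the input
    z_rec = x_rec @ W_enc          # re-encode the reconstructed pair

    rec_loss = float(((x_rec - x) ** 2).mean())            # reconstruction term
    cls_in  = cross_entropy(softmax(z @ W_cls), y)         # label loss on inputs
    cls_rec = cross_entropy(softmax(z_rec @ W_cls), y)     # label loss on reconstructions
    return rec_loss + lam * (cls_in + cls_rec)

x = rng.normal(size=(n, d))
y = rng.integers(0, c, size=n)
loss = nsae_loss(x, y)
```

In practice the three terms would be minimized jointly by gradient descent over the encoder, decoder, and classifier weights; the sketch only shows how a single joint loss value is assembled.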