Few-shot learning (FSL) methods typically assume clean support sets with accurately labeled samples when training on novel classes. This assumption is often unrealistic: support sets, no matter how small, can still include mislabeled samples. Robustness to label noise is therefore essential for FSL methods to be practical, yet this problem surprisingly remains largely unexplored. To address mislabeled samples in FSL settings, we make several technical contributions. (1) We offer simple, yet effective, feature aggregation methods, improving the prototypes used by ProtoNet, a popular FSL technique. (2) We describe a novel Transformer model for Noisy Few-Shot Learning (TraNFS). TraNFS leverages a transformer's attention mechanism to weigh mislabeled versus correctly labeled samples. (3) Finally, we evaluate these methods extensively on noisy versions of MiniImageNet and TieredImageNet. Our results show that TraNFS is on par with leading FSL methods on clean support sets, yet outperforms them, by far, in the presence of label noise.
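To illustrate the kind of aggregation contribution (1) refers to, the sketch below contrasts the standard ProtoNet prototype (the mean of per-class support embeddings) with a median aggregate, one simple alternative that is less sensitive to a mislabeled support sample. This is a minimal, hypothetical example for intuition only; the `prototype` function and the synthetic embeddings are our own illustration, not the paper's actual method.

```python
import numpy as np

def prototype(support_features: np.ndarray, method: str = "mean") -> np.ndarray:
    """Aggregate per-class support embeddings (n_shots, dim) into a prototype.

    "mean" is the standard ProtoNet aggregation; "median" is one simple
    robust alternative that an outlier (e.g. mislabeled) sample shifts less.
    """
    if method == "mean":
        return support_features.mean(axis=0)
    if method == "median":
        return np.median(support_features, axis=0)
    raise ValueError(f"unknown aggregation method: {method}")

# Synthetic 5-shot support set: 4 clean embeddings near the origin
# plus 1 "mislabeled" outlier embedding far from the class centre.
rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=0.1, size=(4, 8))
outlier = rng.normal(loc=5.0, scale=0.1, size=(1, 8))
support = np.vstack([clean, outlier])

mean_proto = prototype(support, "mean")
median_proto = prototype(support, "median")

# The median prototype stays closer to the clean class centre (the origin),
# while the mean is dragged toward the mislabeled sample.
assert np.linalg.norm(median_proto) < np.linalg.norm(mean_proto)
```

Classification then proceeds as in ProtoNet, assigning a query to the class with the nearest prototype; only the aggregation step changes.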