Few-shot learning (FSL) aims to transfer the knowledge learned from base categories with sufficient labelled data to novel categories with scarce known information. It is currently an important research question and has great practical value in real-world applications. Despite extensive previous efforts on few-shot learning tasks, we emphasize that most existing methods do not take into account the distributional shift caused by sample selection bias in the FSL scenario. Such a selection bias can induce spurious correlation between the semantic causal features, which are causally and semantically related to the class label, and the other non-causal features. Critically, the former should be invariant to distributional changes and highly related to the classes of interest, and thus generalize well to novel classes, while the latter are unstable under distributional shift. To resolve this problem, we propose a novel data augmentation strategy dubbed PatchMix that can break this spurious dependency by replacing the patch-level information and supervision of the query images with those of random gallery images drawn from classes different from the query ones. We theoretically show that such an augmentation mechanism, unlike existing ones, is able to identify the causal features. To further make these features discriminative enough for classification, we propose a Correlation-guided Reconstruction (CGR) module and a Hardness-Aware module for instance discrimination and easier discrimination between similar classes. Moreover, such a framework can be adapted to the unsupervised FSL scenario.
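The patch-level replacement described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a CutMix-style scheme in which a randomly located patch of the query image is overwritten by the corresponding region of a gallery image from another class, and the supervision is mixed in proportion to the replaced area. All function and argument names are hypothetical.

```python
import numpy as np

def patchmix(query_img, query_label, gallery_img, gallery_label,
             num_classes, rng=None):
    """Illustrative PatchMix-style augmentation (assumption: CutMix-like).

    Pastes a random rectangular patch of `gallery_img` into `query_img`
    and mixes the one-hot labels by the fraction of pixels replaced.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = query_img.shape[:2]
    # Sample a patch size and a top-left corner uniformly at random.
    ph = int(rng.integers(1, h + 1))
    pw = int(rng.integers(1, w + 1))
    top = int(rng.integers(0, h - ph + 1))
    left = int(rng.integers(0, w - pw + 1))
    # Replace the query patch with the gallery patch at the same location.
    mixed = query_img.copy()
    mixed[top:top + ph, left:left + pw] = gallery_img[top:top + ph, left:left + pw]
    # Mix supervision in proportion to the surviving query area.
    lam = 1.0 - (ph * pw) / (h * w)
    label = np.zeros(num_classes, dtype=np.float32)
    label[query_label] += lam
    label[gallery_label] += 1.0 - lam
    return mixed, label
```

Because the pasted patch carries both pixels and label mass from a different class, the model can no longer rely on non-causal context features that merely co-occur with a class, which is the spurious dependency the augmentation is designed to break.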