In few-shot classification, the aim is to learn models able to discriminate between classes using only a small number of labeled examples. In this context, several works have proposed to introduce Graph Neural Networks (GNNs) to exploit the information contained in other samples processed concurrently, a setting commonly referred to as transductive in the literature. These GNNs are trained jointly with a backbone feature extractor. In this paper, we propose a new method that instead relies on graphs only to interpolate feature vectors, resulting in a transductive learning setting with no additional parameters to train. Our method thus exploits two levels of information: a) transfer features obtained on generic datasets, and b) transductive information obtained from the other samples to be classified. Using standard few-shot vision classification datasets, we demonstrate that it brings significant gains over other works.
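To make the core idea concrete, below is a minimal sketch of parameter-free graph-based feature interpolation. It assumes a cosine-similarity k-nearest-neighbor graph and a single symmetric-normalized smoothing step; the function name `interpolate_features` and the hyperparameters `k` and `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def interpolate_features(feats, k=10, alpha=0.5):
    """Smooth feature vectors over a similarity graph built from all
    samples of the task (support + query). No trained parameters;
    `k` and `alpha` are hypothetical hyperparameters."""
    # Cosine similarity between L2-normalized backbone features.
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, 0.0)

    # Keep only the k strongest edges per node (sparse graph).
    adj = np.zeros_like(sim)
    idx = np.argsort(sim, axis=1)[:, -k:]
    rows = np.arange(sim.shape[0])[:, None]
    adj[rows, idx] = sim[rows, idx]
    adj = np.maximum(adj, adj.T)  # symmetrize

    # Symmetric normalization D^{-1/2} A D^{-1/2}.
    deg = adj.sum(axis=1)
    dinv = np.zeros_like(deg)
    nz = deg > 0
    dinv[nz] = deg[nz] ** -0.5
    adj_norm = dinv[:, None] * adj * dinv[None, :]

    # One smoothing step: mix each feature with its graph neighbors.
    return (1 - alpha) * feats + alpha * (adj_norm @ feats)

# Usage: stack backbone features of support and query samples, smooth
# them jointly, then classify queries with e.g. nearest class mean.
feats = np.random.randn(80, 64).astype(np.float32)
smoothed = interpolate_features(feats)
```

Because the smoothing acts only on fixed feature vectors, the transductive step adds no learnable parameters, in contrast to GNN-based approaches trained end to end with the backbone.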