Learning from limited data is challenging because data scarcity leads to poor generalization of the trained model. The classical globally pooled representation is likely to lose useful local information. Recently, many few-shot learning methods have addressed this challenge by using deep descriptors and learning a pixel-level metric. However, using deep descriptors as feature representations may discard the contextual information of the image. Moreover, most of these methods deal with each class in the support set independently, and thus cannot fully exploit discriminative information and task-specific embeddings. In this paper, we propose a novel Transformer-based neural network architecture called Sparse Spatial Transformers (SSFormers), which finds task-relevant features and suppresses task-irrelevant ones. Specifically, we first divide each input image into several image patches of different sizes to obtain dense local features. These features retain contextual information while also expressing local information. Then, a sparse spatial transformer layer is proposed to find spatial correspondences between the query image and the entire support set, selecting task-relevant image patches and suppressing task-irrelevant ones. Finally, an image patch matching module computes the distance between dense local representations to determine which category in the support set the query image belongs to. Extensive experiments on popular few-shot learning benchmarks show that our method achieves state-of-the-art performance.
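To make the described pipeline concrete, below is a minimal PyTorch sketch of the two core ideas in the abstract: a sparse cross-attention from query patches to the whole support set, and a patch-level matching score. The function names, the top-k sparsification rule, and the max-cosine matching metric are illustrative assumptions for this sketch, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def sparse_spatial_attention(query_patches, support_patches, k=8):
    """Cross-attention from query patches to all support patches, keeping only
    each query patch's top-k support patches (task-relevant) and suppressing
    the rest (task-irrelevant) before the softmax.

    query_patches:   (Nq, d) dense local features of the query image
    support_patches: (Ns, d) dense local features of the entire support set
    returns:         (Nq, d) task-attended query representation
    """
    d = query_patches.size(-1)
    scores = query_patches @ support_patches.t() / d ** 0.5   # (Nq, Ns)
    # Sparsify: mask everything below each row's k-th largest score with -inf.
    kth = scores.topk(k, dim=-1).values[..., -1, None]        # (Nq, 1)
    scores = scores.masked_fill(scores < kth, float('-inf'))
    attn = scores.softmax(dim=-1)                             # (Nq, Ns)
    return attn @ support_patches                             # (Nq, d)

def patch_matching_score(query_patches, class_patches):
    """Patch-level distance between dense local representations: each query
    patch takes its best cosine match among the class's patches; the average
    over query patches is the class score (an assumed metric for this sketch)."""
    q = F.normalize(query_patches, dim=-1)
    s = F.normalize(class_patches, dim=-1)
    sim = q @ s.t()                        # (Nq, Nc) cosine similarities
    return sim.max(dim=-1).values.mean()   # scalar score for this class

# Toy 5-way 1-shot episode with pre-extracted patch embeddings.
torch.manual_seed(0)
d, n_way, patches_per_img = 64, 5, 36
support = torch.randn(n_way, patches_per_img, d)   # one support image per class
query = torch.randn(patches_per_img, d)

attended = sparse_spatial_attention(query, support.reshape(-1, d), k=8)
scores = torch.stack([patch_matching_score(attended, support[c])
                      for c in range(n_way)])
pred = scores.argmax().item()  # predicted support-set category for the query
```

The top-k mask is one simple way to realize the "select task-relevant, suppress task-irrelevant patches" behavior; because the attention is computed against the whole support set rather than one class at a time, the resulting query representation is task-conditioned.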