Few-Shot Learning (FSL) alleviates the data-shortage challenge by embedding discriminative, target-aware features learned from plentiful labeled samples of seen (base) classes and few labeled samples of unseen (novel) classes. Most feature-embedding modules in recent FSL methods are specially designed for their corresponding learning tasks (e.g., classification, segmentation, and object detection), which limits the reusability of the embedded features. To this end, we propose a lightweight and universal module named transformer-based Semantic Filter (tSF), which can be applied to different FSL tasks. The proposed tSF redesigns the inputs of a transformer-based structure with a semantic filter, which not only transfers knowledge from the whole base set to the novel set but also filters semantic features for the target category. Furthermore, tSF contains only about half the parameters of a standard transformer block (fewer than 1M). In experiments, our tSF boosts performance across different classic few-shot learning tasks (by about 2%), and in particular outperforms state-of-the-art methods on multiple benchmark datasets for the few-shot classification task.
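To make the idea concrete, the semantic filter can be viewed as a small set of learnable query vectors that cross-attend to the image feature tokens and inject the attended, category-relevant summary back into the features. The sketch below is only a minimal illustration under that reading; all names, shapes, and the residual form are assumptions, not the paper's exact tSF architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tsf_sketch(features, semantic_query, W_k, W_v):
    """Hypothetical semantic-filter step (not the official tSF code).

    features:       (n, d) flattened image feature tokens
    semantic_query: (m, d) learnable "semantic filter" vectors, used as queries
    W_k, W_v:       (d, d) key/value projections (only about half the
                    projections of a full transformer block, matching the
                    abstract's claim of roughly half the parameters)
    """
    K = features @ W_k                                   # keys from image tokens
    V = features @ W_v                                   # values from image tokens
    attn = softmax(semantic_query @ K.T / np.sqrt(K.shape[1]), axis=-1)  # (m, n)
    filtered = attn @ V                                  # (m, d) category-relevant summary
    # Redistribute the filtered semantics back onto each token (residual form).
    return features + attn.T @ filtered                  # (n, d)
```

The output keeps the token layout of the input, so such a module could drop into different task heads (classification, segmentation, detection) without changing their expected feature shape.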