Few-Shot Learning (FSL) has attracted growing attention in computer vision because it enables model training without the need for excessive data. FSL is challenging because the training and testing categories (the base and novel sets, respectively) can differ substantially. Conventional transfer-based solutions, which aim to transfer knowledge learned on large labeled training sets to the target testing sets, are limited because they do not adequately address the adverse impact of the shift in task distribution. In this paper, we extend transfer-based methods by incorporating metric learning and channel attention. To better exploit the feature representations extracted by the backbone network, we propose the Class-Specific Channel Attention (CSCA) module, which learns to highlight the discriminative channels of each class by assigning each class its own CSCA weight vector. Unlike general attention modules designed to learn global class features, the CSCA module learns local, class-specific features with very low computational overhead. We evaluate the CSCA module on standard benchmarks, including miniImagenet, Tiered-ImageNet, CIFAR-FS, and CUB-200-2011, in both inductive and in-domain/cross-domain settings, and achieve new state-of-the-art results.
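To make the core idea concrete, the following is a minimal NumPy sketch of per-class channel attention as described above: each class owns one learnable channel-weight vector, and a feature map is reweighted channel-wise by the vector of the hypothesized class. The gating nonlinearity, initialization, and class names here are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

class CSCA:
    """Sketch of a Class-Specific Channel Attention module.

    Each class is assigned one weight vector over the feature channels.
    Applying the module for a given class reweights the feature map
    channel-wise, highlighting that class's discriminative channels.
    (Sigmoid gating and random initialization are assumptions.)
    """

    def __init__(self, num_classes, num_channels, seed=0):
        rng = np.random.default_rng(seed)
        # One learnable weight vector per class: (num_classes, num_channels).
        self.w = rng.standard_normal((num_classes, num_channels))

    def __call__(self, features, class_idx):
        # features: (num_channels, H, W); sigmoid keeps gates in (0, 1).
        gate = 1.0 / (1.0 + np.exp(-self.w[class_idx]))
        return features * gate[:, None, None]

# Usage: reweight a 64-channel feature map with class 2's weight vector.
csca = CSCA(num_classes=5, num_channels=64)
feat = np.ones((64, 7, 7))
out = csca(feat, class_idx=2)
print(out.shape)  # (64, 7, 7)
```

Because the per-class vectors act only as channel-wise multipliers, the module adds just `num_classes x num_channels` parameters and an elementwise product per query, which is consistent with the low computational cost claimed for CSCA.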