In this paper, we focus on the problem of learning additional feature representations for few-shot image classification through pretext tasks (e.g., rotation or color permutation). The additional knowledge generated by pretext tasks can further improve few-shot learning (FSL) performance because it differs from the human-annotated supervision (i.e., the class labels of FSL tasks). To solve this problem, we present a plug-in Hierarchical Tree Structure-aware (HTS) method, which not only learns the relationship between FSL and pretext tasks but, more importantly, adaptively selects and aggregates the feature representations generated by pretext tasks to maximize the performance of FSL tasks. A hierarchical tree construction component and a gated selection-aggregation component are introduced to build the tree structure and to discover richer transferable knowledge that can rapidly adapt to novel classes with only a few labeled images. Extensive experiments show that our HTS can significantly enhance multiple few-shot methods to achieve new state-of-the-art performance on four benchmark datasets. The code is available at: https://github.com/remiMZ/HTS-ECCV22.
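To make the core idea concrete, the following is a minimal sketch (not the authors' implementation; see the repository above for HTS itself) of how pretext-task views such as 90-degree rotations can be generated and their features fused with a learned gate. The names `rotation_views` and `GatedAggregator`, and the toy backbone, are hypothetical and chosen only for illustration.

```python
# Minimal, self-contained sketch: generate rotation-based pretext views of a batch
# of images and aggregate their features with a learned gating mechanism.
import torch
import torch.nn as nn


def rotation_views(x: torch.Tensor) -> list:
    """Return the four 90-degree rotations of a batch of images (B, C, H, W)."""
    return [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]


class GatedAggregator(nn.Module):
    """Score each view's feature with a small gate and take a softmax-weighted sum."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.gate = nn.Linear(feat_dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_views, B, D)
        scores = self.gate(feats)                # (num_views, B, 1)
        weights = torch.softmax(scores, dim=0)   # normalize across views
        return (weights * feats).sum(dim=0)      # (B, D)


if __name__ == "__main__":
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))  # toy encoder
    aggregator = GatedAggregator(feat_dim=64)
    images = torch.randn(8, 3, 32, 32)
    feats = torch.stack([backbone(v) for v in rotation_views(images)])  # (4, 8, 64)
    fused = aggregator(feats)
    print(fused.shape)  # torch.Size([8, 64])
```

The fused representation could then be fed to any few-shot classifier head; in HTS this aggregation is organized over a hierarchical tree rather than a flat set of views.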