Few-Shot Learning (FSL) addresses how to improve the performance of a neural network when only a small amount of data is available. A class of methods frequently used in FSL is called meta-learning. Like ordinary neural-network training, meta-learning also consists of a training phase and a testing phase, but these phases are referred to as meta-training and meta-testing, respectively.
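
As a concrete illustration (the sampling routine below is a hypothetical sketch, not taken from the text), meta-training replaces ordinary mini-batches with randomly sampled N-way K-shot episodes, each split into a support set and a query set; meta-testing repeats the same episode structure on classes never seen during meta-training.

```python
# Minimal episodic-sampling sketch (assumed data layout: a list of (feature, label) pairs).
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=15):
    """Draw one N-way K-shot episode and split it into support and query sets."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = random.sample(list(by_class), n_way)   # pick N classes for this episode
    support, query = [], []
    for episode_label, c in enumerate(classes):
        samples = random.sample(by_class[c], k_shot + q_queries)
        support += [(x, episode_label) for x in samples[:k_shot]]   # K labelled shots per class
        query += [(x, episode_label) for x in samples[k_shot:]]     # held-out queries to evaluate adaptation
    return support, query

# Meta-training: the model adapts on `support` and is updated using its error on `query`;
# meta-testing repeats the same procedure on episodes built from unseen classes.
```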

Most previous work improves performance on the novel (few-shot) classes at the cost of performance on the base classes. This paper proposes a few-shot object detector without forgetting: it achieves better performance on the novel classes while not degrading performance on the base classes. We observe that a pre-trained detector rarely produces false-positive predictions on unseen classes, and that the RPN is not an ideal class-agnostic component. Based on these two findings, we design two simple yet effective structures, the Re-detector and the Bias-Balanced RPN, which achieve few-shot object detection without forgetting while adding only a small number of parameters and little inference time.
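
The abstract gives no implementation details, so the following is only an illustrative sketch (with hypothetical module and parameter names, not the paper's code) of the general idea behind a bias-balanced RPN: keep a frozen branch from the pre-trained base detector alongside a fine-tuned branch and merge their objectness scores, so that base-class proposals are not suppressed during few-shot fine-tuning.

```python
import torch
import torch.nn as nn

class BiasBalancedRPNHead(nn.Module):
    """Illustrative only: two objectness branches whose scores are max-merged."""
    def __init__(self, in_channels=256, num_anchors=9):
        super().__init__()
        self.base_obj = nn.Conv2d(in_channels, num_anchors, 1)   # kept frozen after base-class training
        self.novel_obj = nn.Conv2d(in_channels, num_anchors, 1)  # fine-tuned on the few-shot data
        for p in self.base_obj.parameters():
            p.requires_grad = False

    def forward(self, feat):
        # Element-wise max so neither branch can suppress the other's proposals.
        return torch.max(self.base_obj(feat), self.novel_obj(feat))
```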

Latest Papers

We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. We motivate our transductive loss by deriving a formal relation between the classification accuracy and mutual-information maximization. Furthermore, we propose a new alternating-direction solver, which substantially speeds up transductive inference over gradient-based optimization, while yielding competitive accuracy. We also provide a convergence analysis of our solver based on Zangwill's theory and bound-optimization arguments. TIM inference is modular: it can be used on top of any base-training feature extractor. Following standard transductive few-shot settings, our comprehensive experiments demonstrate that TIM outperforms state-of-the-art methods significantly across various datasets and networks, when used on top of a fixed feature extractor trained with simple cross-entropy on the base classes, without resorting to complex meta-learning schemes. It consistently brings between 2% and 5% improvement in accuracy over the best performing method, not only on all the well-established few-shot benchmarks but also on more challenging scenarios, with random tasks, domain shift and larger numbers of classes, as in the recently introduced META-DATASET. Our code is publicly available at https://github.com/mboudiaf/TIM. We also publicly release a standalone PyTorch implementation of META-DATASET, along with additional benchmarking results, at https://github.com/mboudiaf/pytorch-meta-dataset.
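
As a rough illustration (assumed tensor names and weighting, not the authors' released code), the TIM objective can be sketched as a cross-entropy term on the labelled support set combined with a mutual-information term on the query predictions, I(X; Y) ≈ H(Y) - H(Y|X):

```python
import torch
import torch.nn.functional as F

def tim_loss(support_logits, support_labels, query_logits, alpha=1.0):
    """Hypothetical sketch of a TIM-style objective for one few-shot task."""
    ce = F.cross_entropy(support_logits, support_labels)        # supervision on the support set
    q = query_logits.softmax(dim=1)                             # per-query class posteriors
    cond_ent = -(q * torch.log(q + 1e-12)).sum(dim=1).mean()    # H(Y|X): encourages confident predictions
    marg = q.mean(dim=0)                                        # marginal distribution over the query set
    marg_ent = -(marg * torch.log(marg + 1e-12)).sum()          # H(Y): encourages balanced class usage
    mutual_info = marg_ent - cond_ent
    return ce - alpha * mutual_info                             # minimize CE while maximizing mutual information
```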
