As few-shot object detectors are often trained with abundant base samples and fine-tuned on few-shot novel examples, the learned models are usually biased toward base classes and sensitive to the variance of novel examples. To address this issue, we propose a meta-learning framework with two novel feature aggregation schemes. More precisely, we first present a Class-Agnostic Aggregation (CAA) method, where the query and support features can be aggregated regardless of their categories. The interactions between different classes encourage class-agnostic representations and reduce confusion between base and novel classes. Based on CAA, we then propose a Variational Feature Aggregation (VFA) method, which encodes support examples into class-level support features for robust feature aggregation. We use a variational autoencoder to estimate class distributions and sample variational features from the estimated distributions, which are more robust to the variance of support examples. In addition, we decouple the classification and regression tasks so that VFA is performed on the classification branch without affecting object localization. Extensive experiments on PASCAL VOC and COCO demonstrate that our method significantly outperforms a strong baseline (by up to 16\%) and previous state-of-the-art methods (by 4\% on average). Code will be available at: \url{https://github.com/csuhan/VFA}
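To make the aggregation idea concrete, below is a minimal PyTorch sketch of how a support example could be encoded into a class distribution and a sampled variational feature fused with query features in a class-agnostic way. This is an illustrative assumption, not the paper's implementation: the layer names, feature dimensions, and element-wise fusion are all hypothetical choices.

\begin{verbatim}
import torch
import torch.nn as nn

class VariationalFeatureAggregator(nn.Module):
    """Sketch of VAE-based support encoding plus class-agnostic aggregation.

    All layer names and sizes are assumptions for illustration only.
    """

    def __init__(self, dim=2048, latent_dim=512):
        super().__init__()
        # VAE encoder: maps a support feature to a class distribution N(mu, sigma^2)
        self.to_mu = nn.Linear(dim, latent_dim)
        self.to_logvar = nn.Linear(dim, latent_dim)
        # Decoder projects the sampled latent back to the feature dimension
        self.decode = nn.Linear(latent_dim, dim)

    def forward(self, query_feat, support_feat):
        # query_feat:   (N, dim) RoI features of query proposals
        # support_feat: (dim,)   pooled feature of one support example (any class)
        mu = self.to_mu(support_feat)
        logvar = self.to_logvar(support_feat)
        # Reparameterization trick: sample a variational support feature
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        s = torch.sigmoid(self.decode(z))   # class-level support feature
        # Class-agnostic aggregation: fuse query features with the sampled
        # support feature regardless of its category (broadcast over proposals)
        return query_feat * s
\end{verbatim}

In a decoupled head, the aggregated feature would feed only the classification branch, while the regression branch consumes the original query features, matching the abstract's description of keeping localization unaffected.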