Most few-shot learning models use only one modality of data. We investigate, qualitatively and quantitatively, how much a model improves when an extra modality (i.e., a text description of the image) is added, and how this affects the learning procedure. To this end, we propose four fusion methods for combining image features and text features. To verify their effectiveness, we test the fusion methods with two classical few-shot learning models, ProtoNet and MAML, using image feature extractors such as ConvNet and ResNet12. The attention-based fusion method works best, improving classification accuracy by a large margin of around 30% compared to the baseline.
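As an illustration of what attention-based fusion of the two modalities might look like, here is a minimal PyTorch sketch in which a text feature attends over spatial image features and the attended result is concatenated with the projected text feature. The module name, dimensions, and the final concatenation are assumptions made for illustration, not the exact architecture evaluated here.

```python
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Illustrative sketch: text-conditioned attention over image features.

    Assumptions (not the paper's exact design): the image backbone yields a
    (B, N, img_dim) spatial feature map, the text encoder yields a
    (B, txt_dim) vector, and the fused feature is a concatenation of the
    attended image feature and the projected text feature.
    """

    def __init__(self, img_dim: int, txt_dim: int, hid_dim: int = 128):
        super().__init__()
        self.q = nn.Linear(txt_dim, hid_dim)  # query from the text feature
        self.k = nn.Linear(img_dim, hid_dim)  # keys from image locations
        self.v = nn.Linear(img_dim, hid_dim)  # values from image locations
        self.scale = hid_dim ** 0.5

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, N, img_dim), e.g. N spatial positions from ConvNet/ResNet12
        # txt_feat: (B, txt_dim)
        q = self.q(txt_feat).unsqueeze(1)          # (B, 1, hid_dim)
        k = self.k(img_feat)                       # (B, N, hid_dim)
        v = self.v(img_feat)                       # (B, N, hid_dim)
        attn = torch.softmax(q @ k.transpose(1, 2) / self.scale, dim=-1)  # (B, 1, N)
        attended = (attn @ v).squeeze(1)           # (B, hid_dim)
        return torch.cat([attended, q.squeeze(1)], dim=-1)  # (B, 2 * hid_dim)


# Hypothetical usage: 640-dim ResNet12 features over a 5x5 map, 300-dim text embeddings.
fusion = AttentionFusion(img_dim=640, txt_dim=300)
fused = fusion(torch.randn(4, 25, 640), torch.randn(4, 300))  # shape (4, 256)
```

The fused vector could then replace the image-only embedding inside ProtoNet's prototype computation or MAML's inner-loop classifier; where exactly the fusion is inserted is a design choice of each method.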