Few-shot learning aims to leverage knowledge learned by one or more deep learning models in order to obtain good classification performance on new problems, where only a few labeled samples per class are available. Recent years have seen a fair number of works in the field, introducing methods with numerous ingredients. A frequent problem, however, is the use of suboptimally trained models to extract knowledge, raising the question of whether proposed approaches actually bring gains compared to using better initial models without the introduced ingredients. In this work, we propose a simple methodology that reaches or even beats state-of-the-art performance on multiple standardized benchmarks of the field, while adding almost no hyperparameters or parameters to those used for training the initial deep learning models on the generic dataset. This methodology offers a new baseline against which to propose (and fairly compare) new techniques or to adapt existing ones.
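To make the setting concrete, the sketch below shows one common few-shot baseline of the kind the abstract alludes to: features are extracted with a frozen pretrained backbone, and query samples are classified by their distance to the per-class mean of the few labeled support samples. This is a generic illustration of the few-shot evaluation protocol, not necessarily the exact method proposed in the paper; the function name and the use of Euclidean distance are assumptions for the example.

```python
import numpy as np

def nearest_class_mean(support_feats, support_labels, query_feats):
    """Classify each query feature vector by its nearest class centroid.

    support_feats: (N, D) features of the few labeled samples (from a
        frozen, pretrained backbone -- assumed, not shown here).
    support_labels: (N,) integer class labels of the support samples.
    query_feats: (Q, D) features of the samples to classify.
    Returns: (Q,) predicted class labels.
    """
    classes = np.unique(support_labels)
    # One centroid per class: the mean of that class's support features.
    centroids = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from every query to every centroid -> (Q, C).
    dists = np.linalg.norm(
        query_feats[:, None, :] - centroids[None, :, :], axis=-1
    )
    return classes[dists.argmin(axis=1)]
```

In a typical 5-way 5-shot episode, `support_feats` would hold 25 feature vectors (5 classes, 5 samples each) and accuracy would be averaged over many such randomly drawn episodes.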