In this paper, we look at the problem of cross-domain few-shot classification, which aims to learn a classifier for previously unseen classes and domains from only a few labeled samples. Recent approaches broadly solve this problem by parameterizing their few-shot classifiers with task-agnostic and task-specific weights, where the former are typically learned on a large training set and the latter are dynamically predicted through an auxiliary network conditioned on a small support set. In this work, we focus on the estimation of the latter, and propose to learn task-specific weights from scratch directly on the small support set, in contrast to dynamically estimating them. In particular, through systematic analysis, we show that learning task-specific weights through parametric adapters in matrix form, with residual connections to multiple intermediate layers of a backbone network, significantly improves the performance of state-of-the-art models on the Meta-Dataset benchmark at minor additional cost.
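The matrix-form residual adapter described above can be sketched minimally as follows. This is an illustrative NumPy sketch under our own assumptions (function and variable names are hypothetical, not from the paper's code): an adapter matrix acts on an intermediate feature map like a 1x1 convolution, and a residual connection adds the result back to the unmodified features, so a zero-initialized adapter leaves the backbone's behavior unchanged before task-specific training on the support set.

```python
import numpy as np

def residual_matrix_adapter(h, alpha):
    """Hedged sketch of a matrix-form adapter with a residual connection.

    h     : intermediate backbone features, shape (channels, positions)
    alpha : task-specific adapter matrix, shape (channels, channels)

    Returns h + alpha @ h, i.e. a 1x1-conv-style linear transformation
    of the features added back residually.
    """
    return h + alpha @ h

rng = np.random.default_rng(0)
h = rng.standard_normal((8, 5))   # 8-channel features at 5 spatial positions

# Zero initialization: the adapter is an identity mapping at the start,
# so the pretrained (task-agnostic) backbone is preserved.
alpha = np.zeros((8, 8))
out = residual_matrix_adapter(h, alpha)
```

In practice, only the small adapter matrices would be optimized on the few labeled support samples while the backbone weights stay frozen, which is what keeps the additional cost minor.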