Meta-learning can extract an inductive bias from previous learning experience and assist the training of new tasks. It is often realized by optimizing a meta-model with the evaluation loss of task-specific solvers. Most existing algorithms sample non-overlapping $\mathit{support}$ sets and $\mathit{query}$ sets to train and evaluate the solvers, respectively, owing to simplicity ($\mathcal{S}$/$\mathcal{Q}$ protocol). Different from the $\mathcal{S}$/$\mathcal{Q}$ protocol, we can also evaluate a task-specific solver by comparing it to a target model $\mathcal{T}$, which is either the optimal model for the task or a model that behaves well enough on the task ($\mathcal{S}$/$\mathcal{T}$ protocol). Although under-explored, the $\mathcal{S}$/$\mathcal{T}$ protocol has unique advantages, such as offering more informative supervision, but it is computationally expensive. This paper looks into this special evaluation method and takes a step towards putting it into practice. We find that with only a small fraction of tasks equipped with target models, classic meta-learning algorithms can be improved substantially without consuming many resources. We empirically verify the effectiveness of the $\mathcal{S}$/$\mathcal{T}$ protocol in a typical application of meta-learning, $\mathit{i.e.}$, few-shot learning. In detail, after constructing target models by fine-tuning the pre-trained network on hard tasks, we match the task-specific solvers to the target models via knowledge distillation.
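The $\mathcal{S}$/$\mathcal{T}$ evaluation via knowledge distillation can be sketched as a temperature-scaled KL divergence between the target model's predictions and the solver's predictions on shared inputs. The following is a minimal illustration, not the paper's implementation; the function names, the temperature value, and the use of raw logit arrays are all assumptions for the sketch.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over the last axis (numerically stable).
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def st_distillation_loss(solver_logits, target_logits, temperature=2.0):
    """Hypothetical S/T-style evaluation loss: KL(target || solver) on the
    same inputs, scaled by T^2 as in standard knowledge distillation."""
    p = softmax(np.asarray(target_logits), temperature)  # teacher: target model
    q = softmax(np.asarray(solver_logits), temperature)  # student: task solver
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1).mean()
    return float(kl * temperature ** 2)
```

A solver whose logits match the target model's incurs zero loss, so minimizing this quantity over meta-training tasks pulls each task-specific solver towards its target model.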