Traditionally, distillation has been used to train a student model to emulate the input/output functionality of a teacher. A more useful, yet under-explored, goal is for the student to learn feature representations that transfer well to future tasks. However, we observe that standard distillation of task-specific teachers actually *reduces* the transferability of student representations to downstream tasks. We show that a multi-head, multi-task distillation method that uses an unlabeled proxy dataset and a generalist teacher is sufficient to consolidate representations from task-specific teacher(s) and improve downstream performance, outperforming both the teacher(s) and the strong baseline of ImageNet-pretrained features. Our method can also combine the representational knowledge of multiple teachers trained on one or several domains into a single model, whose representation is improved on all of the teachers' domains.
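To make the setup concrete, the following is a minimal PyTorch sketch of multi-head, multi-task distillation on an unlabeled proxy set: a shared student backbone feeds one lightweight head per teacher, and each head matches its teacher's output. All class and function names (`MultiHeadStudent`, `distillation_step`) and the per-head MSE matching loss are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class MultiHeadStudent(nn.Module):
    """Shared backbone with one projection head per teacher (task-specific or generalist)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, teacher_dims: list):
        super().__init__()
        self.backbone = backbone  # shared representation to be consolidated
        # one lightweight head per teacher output space
        self.heads = nn.ModuleList(nn.Linear(feat_dim, d) for d in teacher_dims)

    def forward(self, x):
        z = self.backbone(x)                     # shared student features
        return [head(z) for head in self.heads]  # per-teacher predictions


def distillation_step(student, teachers, proxy_batch, optimizer):
    """One update on unlabeled proxy images: each head matches its teacher's output."""
    outputs = student(proxy_batch)
    with torch.no_grad():
        targets = [t(proxy_batch) for t in teachers]  # teacher signals, no labels needed
    loss = sum(
        nn.functional.mse_loss(out, tgt)  # simple matching loss as a stand-in
        for out, tgt in zip(outputs, targets)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this sketch, only the unlabeled proxy data and the frozen teachers are needed; the heads are discarded after training and the consolidated backbone is what transfers to downstream tasks.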