Learning generic representations with deep networks requires massive amounts of training data and substantial computing resources. To learn a new specific task, an important issue is how to transfer the teacher's generic representation to a student network. In this paper, we propose a metric between representations that is based on a functional view of neurons. We use optimal transport to quantify the match between two representations, yielding a distance that embeds some of the invariances inherent to the representations of deep networks. This distance defines a regularizer promoting the similarity of the student's representation to that of the teacher. Our approach can be used in any learning context where representation transfer is applicable. We experiment here on two standard settings: inductive transfer learning, where the teacher's representation is transferred to a student network of the same architecture for a new related task, and knowledge distillation, where the teacher's representation is transferred to a student with a simpler architecture for the same task (model compression). Our approach also lends itself to solving new learning problems; we demonstrate this by showing how to directly transfer the teacher's representation to a student with a simpler architecture for a new related task.
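To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of an optimal-transport regularizer under the functional view of neurons: each neuron is represented by its activation profile over a batch, a cost matrix is built between teacher and student neuron profiles, and the entropic (Sinkhorn) transport cost is added to the task loss. The function names and the hyperparameters `eps`, `n_iters`, and `lambda_reg` are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of an OT-based representation-transfer regularizer.
# Assumes PyTorch; all names and hyperparameters are placeholders.
import torch

def sinkhorn_cost(C, eps=0.1, n_iters=50):
    """Entropic OT cost between uniform marginals for a cost matrix C (n x m)."""
    n, m = C.shape
    a = torch.full((n,), 1.0 / n, device=C.device)
    b = torch.full((m,), 1.0 / m, device=C.device)
    K = torch.exp(-C / eps)                   # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):                  # Sinkhorn fixed-point iterations
        v = b / (K.t() @ u)
        u = a / (K @ v)
    P = torch.diag(u) @ K @ torch.diag(v)     # transport plan
    return (P * C).sum()                      # transport cost <P, C>

def representation_distance(teacher_acts, student_acts):
    """teacher_acts: (batch, n_teacher), student_acts: (batch, n_student).
    Each column is one neuron's activation profile over the batch (the
    'functional view'); the OT cost matches these profiles across networks."""
    t = torch.nn.functional.normalize(teacher_acts.t(), dim=1)  # (n_teacher, batch)
    s = torch.nn.functional.normalize(student_acts.t(), dim=1)  # (n_student, batch)
    C = torch.cdist(t, s, p=2) ** 2           # squared Euclidean cost matrix
    return sinkhorn_cost(C)

# Usage inside a training step (lambda_reg is an assumed weighting factor):
#   loss = task_loss(student(x), y) + lambda_reg * representation_distance(
#              teacher_features(x).detach(), student_features(x))
```

Because the marginals are uniform and the plan is unordered, this cost is invariant to neuron permutations, which is one example of the invariances the abstract alludes to; the specific cost function and solver used in the paper may differ.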