Existing methods for distillation follow the conventional training approach in which all samples participate equally in the process, making them highly inefficient in their use of data. In this paper, a novel data-efficient approach for transferring knowledge from a teacher model to a student model is presented. Here, the teacher model uses self-regulation to select appropriate samples for training and to identify their significance in the process. During distillation, this significance information can be used along with the soft targets to supervise the student. Depending on whether sample significance information, self-regulation, or both are used to supervise the knowledge transfer, three types of distillation are proposed: significance-based, regulated, and hybrid, respectively. Experiments on benchmark datasets show that the proposed methods achieve performance comparable to other state-of-the-art knowledge distillation methods while using significantly fewer samples.
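To make the significance-based supervision concrete, below is a minimal sketch of a per-sample significance-weighted distillation loss. It combines the standard soft-target KL term with hard-label cross-entropy and reweights each sample by a teacher-assigned significance score; the function name, the weighting scheme, and the hyperparameters `T` and `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def significance_weighted_kd_loss(student_logits, teacher_logits, labels,
                                  significance, T=4.0, alpha=0.7):
    """Hypothetical significance-based distillation loss (a sketch,
    not the paper's actual objective).

    significance: per-sample weights, assumed to be assigned by the
    teacher during its self-regulated training.
    """
    # Soft-target KL term, computed per sample so it can be reweighted.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    kd_per_sample = F.kl_div(log_p_student, p_teacher,
                             reduction='none').sum(dim=1) * (T * T)

    # Hard-label cross-entropy, also kept per sample.
    ce_per_sample = F.cross_entropy(student_logits, labels, reduction='none')

    # Blend the two terms, then scale each sample's loss by its
    # teacher-assigned significance before averaging over the batch.
    per_sample = alpha * kd_per_sample + (1.0 - alpha) * ce_per_sample
    return (significance * per_sample).mean()

# Usage with dummy data (batch of 8, 10 classes):
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
significance = torch.rand(8)  # assumed teacher-provided weights in [0, 1]
loss = significance_weighted_kd_loss(student_logits, teacher_logits,
                                     labels, significance)
```

Samples the teacher deems insignificant contribute little to the gradient under this weighting, which is one plausible way the significance information could reduce the number of samples that effectively drive training.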