Current Knowledge Distillation (KD) methods for semantic segmentation often guide the student to mimic the teacher's structured information generated from individual data samples. However, they ignore the global semantic relations among pixels across various images, which are valuable for KD. This paper proposes a novel Cross-Image Relational KD (CIRKD), which focuses on transferring structured pixel-to-pixel and pixel-to-region relations across whole images. The motivation is that a good teacher network constructs a well-structured feature space in terms of global pixel dependencies. CIRKD makes the student mimic better-structured semantic relations from the teacher, thus improving segmentation performance. Experimental results on the Cityscapes, CamVid and Pascal VOC datasets demonstrate the effectiveness of our proposed approach against state-of-the-art distillation methods. The code is available at https://github.com/winycg/CIRKD.
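To make the pixel-to-pixel transfer concrete, the following is a minimal sketch (not the paper's implementation) of a cross-image relational distillation loss: pixel embeddings gathered from several images are compared pairwise, the resulting similarity distributions are softened with a temperature, and the student is pushed toward the teacher's distribution via KL divergence. The function name, the temperature value, and the use of cosine similarity are illustrative assumptions.

```python
import numpy as np

def softmax(x, tau=1.0):
    """Row-wise softmax with temperature tau."""
    z = x / tau
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_image_relation_kd(feat_s, feat_t, tau=0.1):
    """Hypothetical pixel-to-pixel relational KD loss.

    feat_s, feat_t: (N, C) student/teacher pixel embeddings sampled
    from multiple images in a mini-batch (hence "cross-image").
    Returns KL(teacher || student) over pairwise similarity
    distributions, averaged across anchor pixels.
    """
    # L2-normalise so dot products become cosine similarities.
    s = feat_s / np.linalg.norm(feat_s, axis=1, keepdims=True)
    t = feat_t / np.linalg.norm(feat_t, axis=1, keepdims=True)
    # Pairwise similarity distributions over all sampled pixels.
    p_s = softmax(s @ s.T, tau)
    p_t = softmax(t @ t.T, tau)
    eps = 1e-12  # numerical safety for the logs
    kl = np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=1)
    return float(kl.mean())
```

The loss is zero when the student's relation matrix matches the teacher's and positive otherwise, so minimising it aligns the student's global pixel dependencies with the teacher's.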