Knowledge distillation facilitates the training of a compact student network by leveraging knowledge from a deep teacher network. While this has achieved great success in many tasks, it remains completely unstudied for image-based 6D object pose estimation. In this work, we introduce the first knowledge distillation method for 6D pose estimation. Specifically, we follow a standard approach to 6D pose estimation, consisting of predicting the 2D image locations of object keypoints. In this context, we observe that the compact student network struggles to predict precise 2D keypoint locations. Therefore, to address this, instead of training the student with keypoint-to-keypoint supervision, we introduce a strategy based on optimal transport theory that distills the teacher's keypoint \emph{distribution} into the student network, facilitating its training. Our experiments on several benchmarks show that our distillation method yields state-of-the-art results with different compact student models.
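To make the distribution-level objective concrete, a standard entropic optimal transport formulation (a sketch under common conventions; the exact cost, marginals, and regularization used in the method may differ) compares the teacher's and student's keypoint predictions as discrete distributions:
\[
\mathcal{L}_{\mathrm{OT}} \;=\; \min_{T \in \Pi(\mu_t, \mu_s)} \sum_{i,j} T_{ij}\, C_{ij} \;-\; \epsilon\, H(T), \qquad C_{ij} = \big\lVert \mathbf{k}^{t}_{i} - \mathbf{k}^{s}_{j} \big\rVert_2,
\]
where $\mathbf{k}^{t}_{i}$ and $\mathbf{k}^{s}_{j}$ denote the teacher's and student's predicted 2D keypoint locations, $\Pi(\mu_t, \mu_s)$ is the set of transport plans whose marginals match the two keypoint distributions, $H(T)$ is the entropy of the plan, and $\epsilon > 0$ weights the regularization. The minimization can be solved efficiently with Sinkhorn iterations, and the resulting loss supervises the student at the level of the whole keypoint distribution rather than through one-to-one keypoint correspondences.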