Knowledge distillation is a promising approach for transferring capabilities from complex teacher models to smaller, resource-efficient student models that are easy to deploy, particularly in task-aware scenarios. However, existing task-aware distillation methods typically require substantial amounts of data, which may be unavailable or expensive to obtain in many practical settings. In this paper, we address this challenge by introducing Counterfactual-explanation-infused Distillation (CoD), a novel strategy for few-shot task-aware knowledge distillation that systematically infuses counterfactual explanations. Counterfactual explanations (CFEs) are inputs that flip the teacher model's output prediction with minimal perturbation. CoD leverages these CFEs to map the teacher's decision boundary precisely with significantly fewer samples. We provide theoretical guarantees motivating the role of CFEs in distillation from both statistical and geometric perspectives. We show mathematically that CFEs improve parameter estimation by providing more informative examples near the teacher's decision boundary, and we derive geometric insights into how CFEs act as knowledge probes, helping the student mimic the teacher's decision boundary more effectively than standard data. Experiments across various datasets and LLMs show that CoD outperforms standard distillation approaches in few-shot regimes (as few as 8 to 512 samples). Notably, CoD uses only half of the original samples used by the baselines, paired with their corresponding CFEs, and still improves performance.
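As a rough illustration of the idea only (not the paper's actual method, models, or CFE generator), the following minimal sketch applies the recipe to toy continuous features: half of the few-shot pool is kept, each sample is paired with a CFE found by a generic gradient-based perturbation search against the teacher, and the student is distilled on both. All architectures, step counts, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of CFE-infused distillation on toy continuous features.
# Teacher/student sizes, the gradient-based CFE search, and all
# hyperparameters are illustrative assumptions, not the paper's setup.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
DIM, CLASSES = 16, 2

teacher = torch.nn.Sequential(torch.nn.Linear(DIM, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, CLASSES))
student = torch.nn.Sequential(torch.nn.Linear(DIM, 8), torch.nn.ReLU(),
                              torch.nn.Linear(8, CLASSES))
for p in teacher.parameters():      # teacher is frozen during distillation
    p.requires_grad_(False)

def generate_cfe(x, steps=100, lr=0.05, lam=0.1):
    """Generic CFE search: nudge x toward the opposite teacher label while
    penalising the size of the perturbation, so the result sits close to
    the teacher's decision boundary."""
    target = 1 - teacher(x).argmax(dim=-1)          # flipped binary label
    cfe = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([cfe], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(teacher(cfe), target) + lam * (cfe - x).pow(2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return cfe.detach()

# Few-shot pool: keep only half the original samples, pair each with its CFE.
x_full = torch.randn(64, DIM)
x_half = x_full[:32]
x_train = torch.cat([x_half, generate_cfe(x_half)])

# Standard soft-label distillation: the student matches the teacher's
# predictive distribution on both the kept samples and their CFEs.
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(200):
    with torch.no_grad():
        t_logits = teacher(x_train)
    s_logits = student(x_train)
    kd_loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                       F.softmax(t_logits, dim=-1), reduction="batchmean")
    opt.zero_grad(); kd_loss.backward(); opt.step()
```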