Knowledge distillation (KD) has recently emerged as a powerful strategy to transfer knowledge from a pre-trained teacher model to a lightweight student, and has demonstrated unprecedented success over a wide spectrum of applications. In spite of the encouraging results, the KD process per se poses a potential threat to network ownership protection, since the knowledge contained in a network can be effortlessly distilled and hence exposed to a malicious user. In this paper, we propose a novel framework, termed Safe Distillation Box (SDB), that allows us to wrap a pre-trained model in a virtual box for intellectual property protection. Specifically, SDB preserves the inference capability of the wrapped model for all users, but precludes KD by unauthorized users. For authorized users, on the other hand, SDB carries out a knowledge augmentation scheme to strengthen KD and thereby the performance of the student model. In other words, all users may employ a model in SDB for inference, but only authorized users may distill knowledge from it. The proposed SDB imposes no constraints on the model architecture, and may readily serve as a plug-and-play solution to protect the ownership of a pre-trained network. Experiments across various datasets and architectures demonstrate that, with SDB, the performance of an unauthorized KD drops significantly while that of an authorized KD is enhanced, demonstrating the effectiveness of SDB.
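To make the access-control idea concrete, below is a minimal conceptual sketch in PyTorch. The class name SafeDistillationBox, the auth_key argument, and the post-hoc output gating are illustrative assumptions rather than the paper's method: the actual SDB builds the protection and the knowledge augmentation scheme into the model during training, so the released weights themselves resist unauthorized distillation. The sketch only shows the intended interface, in which every user obtains correct predictions while only key holders receive informative soft labels for KD.

```python
# Conceptual sketch only (hypothetical names: SafeDistillationBox, auth_key).
# It gates the teacher's "dark knowledge" behind a key; the real SDB achieves
# this through training rather than output filtering.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SafeDistillationBox(nn.Module):
    """Wraps a pre-trained teacher; all users can infer, only key holders can distill."""

    def __init__(self, teacher: nn.Module, auth_key: str):
        super().__init__()
        self.teacher = teacher.eval()
        self._auth_key = auth_key

    @torch.no_grad()
    def forward(self, x: torch.Tensor, key: str = None) -> torch.Tensor:
        logits = self.teacher(x)
        if key == self._auth_key:
            # Authorized: return the full logits (the paper additionally applies
            # knowledge augmentation here to further boost the student).
            return logits
        # Unauthorized: keep the top-1 prediction (inference is unaffected) but
        # collapse the distribution so KD gains little beyond hard labels.
        hard = logits.argmax(dim=-1)
        return F.one_hot(hard, num_classes=logits.shape[-1]).float() * 1e3
```

Under these assumptions, distilling from `wrapped(x)` without a key yields an essentially one-hot distribution carrying no dark knowledge, whereas `wrapped(x, key=...)` restores the full soft targets used for authorized KD.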