Language generation models have become increasingly powerful enablers for many applications. Many such models offer free or affordable API access, which makes them potentially vulnerable to model extraction attacks through distillation. To protect intellectual property (IP) and ensure fair use of these models, various techniques such as lexical watermarking and synonym replacement have been proposed. However, these methods can be nullified by obvious countermeasures such as "synonym randomization". To address this issue, we propose GINSEW, a novel method to protect text generation models from being stolen through distillation. The key idea of our method is to inject secret signals into the probability vector of the decoding steps for each target token. We can then detect the secret message by probing a suspect model to tell if it is distilled from the protected one. Experimental results show that GINSEW can effectively identify instances of IP infringement with minimal impact on the generation quality of protected APIs. Our method achieves an absolute improvement of 19 to 29 points in mean average precision (mAP) over previous methods when detecting suspect models under watermark removal attacks.
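To make the key idea concrete, the sketch below shows one way a secret periodic signal could be injected into a next-token probability vector before it is returned by an API. This is a minimal, illustrative sketch under assumed details: the function name `inject_secret_signal`, the keyed vocabulary partition, and the parameters `epsilon` and `freq` are hypothetical placeholders rather than the paper's actual formulation or hyperparameters.

```python
import hashlib

import numpy as np


def inject_secret_signal(probs, context_ids, secret_key, epsilon=0.01, freq=8.0):
    """Perturb a next-token probability vector with a hidden periodic signal.

    The vocabulary is split pseudo-randomly (keyed by `secret_key`) into two
    groups; a small sinusoidal offset, whose phase depends on a hash of the
    generation context, shifts probability mass between the groups. A model
    distilled from these outputs tends to inherit the periodic bias, which a
    verifier holding the key can later probe for.
    """
    vocab_size = probs.shape[0]

    # Secret, fixed partition of the vocabulary derived from the key.
    seed = int(hashlib.sha256(secret_key.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    group_mask = rng.random(vocab_size) < 0.5

    # Phase derived from the context so the signal varies periodically
    # across decoding steps instead of being a constant bias.
    ctx_hash = int(
        hashlib.sha256(str(context_ids).encode() + secret_key.encode()).hexdigest(), 16
    )
    phase = (ctx_hash % 10_000) / 10_000.0
    delta = epsilon * np.sin(2 * np.pi * freq * phase)

    # Shift a small amount of probability mass between the two groups,
    # then renormalize so the result is still a valid distribution.
    perturbed = probs.copy()
    perturbed[group_mask] *= 1 + delta
    perturbed[~group_mask] *= 1 - delta
    return perturbed / perturbed.sum()


# Example: watermark a toy probability vector over a 10-token vocabulary.
probs = np.full(10, 0.1)
watermarked = inject_secret_signal(probs, context_ids=[3, 7, 2], secret_key="my-secret")
print(watermarked.round(4), watermarked.sum())
```

Because `epsilon` is small, the perturbation has negligible effect on generation quality, while a verifier who knows `secret_key` can probe a suspect model for the keyed periodic bias to decide whether it was distilled from the protected API.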