Protecting large language models from privacy leakage is becoming increasingly crucial with their wide adoption in real-world products. Yet applying differential privacy (DP), a canonical notion with provable privacy guarantees for machine learning models, to these models remains challenging due to the trade-off between model utility and privacy loss. Utilizing the fact that sensitive information in language data tends to be sparse, Shi et al. (2021) formalized a DP extension called Selective Differential Privacy (SDP) to protect only the sensitive tokens defined by a policy function. However, their algorithm only works for RNN-based models. In this paper, we develop a novel framework, Just Fine-tune Twice (JFT), that achieves SDP for state-of-the-art large transformer-based models. Our method is easy to implement: it first fine-tunes the model with redacted in-domain data, and then fine-tunes it again with the original in-domain data using a private training mechanism. Furthermore, we study the scenario where an imperfect policy function misses sensitive tokens, and develop systematic methods to handle it. Experiments show that our method achieves strong utility compared to previous baselines. We also analyze the SDP privacy guarantee empirically with the canary insertion attack.
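The two-phase recipe is simple enough to sketch. Below is a minimal, illustrative rendering in PyTorch with a HuggingFace GPT-2 model: phase one fine-tunes on policy-redacted text with ordinary SGD, and phase two fine-tunes on the original text with a naive hand-rolled DP-SGD step (per-example gradient clipping via microbatching plus Gaussian noise). The policy function, toy corpus, and hyperparameters are hypothetical placeholders, not the paper's configuration, and a production setup would use a proper DP accountant.

```python
# Illustrative sketch of Just Fine-tune Twice (JFT). The policy function,
# corpus, and hyperparameters are hypothetical placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MASK = "<MASK>"

def policy_fn(token: str) -> bool:
    """Toy policy function: treat digit-bearing tokens as sensitive."""
    return any(ch.isdigit() for ch in token)

def redact(text: str) -> str:
    """Replace every token the policy function flags with a mask symbol."""
    return " ".join(MASK if policy_fn(t) else t for t in text.split())

corpus = ["my card number is 4111111111111111",
          "call me at 555 0100 tomorrow"]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

def make_loader(texts, batch_size=2):
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"]),
                      batch_size=batch_size)

def lm_loss(input_ids, attention_mask):
    # Standard causal-LM loss with the inputs as labels.
    return model(input_ids=input_ids, attention_mask=attention_mask,
                 labels=input_ids).loss

# Phase 1: ordinary fine-tuning on the redacted in-domain data.
opt = torch.optim.SGD(model.parameters(), lr=1e-4)
for ids, mask in make_loader([redact(t) for t in corpus]):
    opt.zero_grad()
    lm_loss(ids, mask).backward()
    opt.step()

# Phase 2: private fine-tuning on the original in-domain data, here a
# deliberately simple DP-SGD step: clip each example's gradient, sum,
# add Gaussian noise, then average.
CLIP, SIGMA = 1.0, 1.0
opt = torch.optim.SGD(model.parameters(), lr=1e-4)
for ids, mask in make_loader(corpus):
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for i in range(ids.size(0)):            # microbatch of one example
        model.zero_grad()
        lm_loss(ids[i:i + 1], mask[i:i + 1]).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
        scale = min(1.0, CLIP / (norm.item() + 1e-6))
        for s, p in zip(summed, model.parameters()):
            s.add_(p.grad, alpha=scale)
    for s, p in zip(summed, model.parameters()):
        noise = torch.normal(0.0, SIGMA * CLIP, size=s.shape)
        p.grad = (s + noise) / ids.size(0)
    opt.step()
```

The microbatched clipping loop is slow but transparent; in practice a per-sample-gradient library (e.g., Opacus) would replace the inner loop, and the noise multiplier would be chosen to meet a target privacy budget.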