Large pretrained models can be privately fine-tuned to achieve performance approaching that of non-private models. A common theme in these results is the surprising observation that high-dimensional models can achieve favorable privacy-utility trade-offs. This seemingly contradicts known results on the model-size dependence of differentially private convex learning and raises the following research question: When does the performance of differentially private learning not degrade with increasing model size? We identify that the magnitudes of gradients projected onto subspaces are a key factor that determines performance. To precisely characterize this for private convex learning, we introduce a condition on the objective that we term \emph{restricted Lipschitz continuity} and derive improved bounds for the excess empirical and population risks that are dimension-independent under additional conditions. We empirically show that in private fine-tuning of large language models, gradients obtained during fine-tuning are mostly controlled by a few principal components. This behavior is similar to the conditions under which we obtain dimension-independent bounds in convex settings. Our theoretical and empirical results together provide a possible explanation for recent successes in large-scale private fine-tuning. Code to reproduce our results can be found at \url{https://github.com/lxuechen/private-transformers/tree/main/examples/classification/spectral_analysis}.
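The claim that fine-tuning gradients are mostly controlled by a few principal components can be checked with a simple spectral computation. The following is a minimal sketch, not the paper's released code: it stacks flattened per-step gradients into a matrix and measures the fraction of total spectral energy captured by the top-$k$ singular directions (the function name and the toy data are illustrative assumptions).

```python
import numpy as np

def top_subspace_fraction(grads: np.ndarray, k: int) -> float:
    """Fraction of total gradient energy captured by the top-k
    principal components of a stack of flattened gradients.

    grads: array of shape (num_steps, num_params), one flattened
    gradient per row.
    """
    # Singular values of the gradient matrix give the spectrum of
    # the (uncentered) second-moment matrix of the gradients.
    s = np.linalg.svd(grads, compute_uv=False)
    energy = s ** 2
    return float(energy[:k].sum() / energy.sum())

# Toy example: gradients that lie (up to small noise) in a 2-D subspace,
# mimicking the near-low-rank behavior described in the abstract.
rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 1000))
coeffs = rng.standard_normal((50, 2))
grads = coeffs @ basis + 1e-3 * rng.standard_normal((50, 1000))
frac = top_subspace_fraction(grads, k=2)
print(frac)  # close to 1.0: two components dominate the spectrum
```

A value near 1 for small $k$ indicates the kind of gradient concentration under which the dimension-independent bounds are expected to apply.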