Large pretrained models can be privately fine-tuned to achieve performance approaching that of non-private models. A common theme in these results is the surprising observation that high-dimensional models can achieve favorable privacy-utility trade-offs. This seemingly contradicts known results on the model-size dependence of differentially private convex learning and raises the following research question: When does the performance of differentially private learning not degrade with increasing model size? We identify the magnitudes of gradients projected onto subspaces as a key factor that determines performance. To precisely characterize this for private convex learning, we introduce a condition on the objective that we term restricted Lipschitz continuity, and derive improved bounds for the excess empirical and population risks that are dimension-independent under additional conditions. We empirically show that in private fine-tuning of large language models, gradients evaluated near a local optimum are mostly controlled by a few principal components. This behavior resembles the conditions under which we obtain dimension-independent bounds in the convex setting. Our theoretical and empirical results together provide a possible explanation for recent successes in large-scale private fine-tuning.
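For concreteness, the restricted Lipschitz condition can be sketched as follows. This is our hedged paraphrase, not a quotation of the formal definition; the exact quantifiers and normalization are assumptions.

```latex
% Hedged paraphrase of restricted Lipschitz continuity (our notation;
% the paper's exact definition may differ in quantifiers/normalization).
% F : \mathbb{R}^d \to \mathbb{R} is restricted Lipschitz continuous with
% coefficients G_0 \ge G_1 \ge \dots \ge G_d \ge 0 if, for each rank k,
% some rank-k orthogonal projection P_k satisfies
\[
  \sup_{w \in \mathbb{R}^d} \bigl\| (I_d - P_k)\,\nabla F(w) \bigr\|_2 \;\le\; G_k,
  \qquad k = 0, 1, \dots, d.
\]
% Ordinary G-Lipschitz continuity is the k = 0 case (G_0 = G); rapid
% decay of G_k means the gradient is nearly confined to a low-dimensional
% subspace, the regime in which dimension-independent bounds can hold.
```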
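The empirical claim about gradients near a local optimum can be probed with a simple spectral diagnostic. The sketch below is ours, not the paper's code; `grads` is a hypothetical stand-in for stacked, flattened per-batch gradients of the fine-tuning loss collected near the optimum (synthesized here with a fast-decaying spectrum so the script is self-contained):

```python
# Minimal sketch (our illustration, not the paper's implementation):
# measure how much of the gradient mass lies in the top-k principal
# components of gradients collected near a fine-tuned local optimum.
import numpy as np

def top_k_gradient_mass(grads: np.ndarray, k: int) -> float:
    """Fraction of total squared gradient norm captured by the top-k
    principal components of the (n, d) gradient matrix `grads`."""
    # Thin SVD of the stacked gradients; right singular vectors span
    # the principal gradient subspace, singular values give the mass.
    _, s, _ = np.linalg.svd(grads, full_matrices=False)
    return float(np.sum(s[:k] ** 2) / np.sum(s ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for per-batch gradients: a 20-dimensional
    # dominant subspace with geometrically decaying coefficients,
    # plus small isotropic noise in d = 10,000 dimensions.
    n, d = 200, 10_000
    basis = rng.standard_normal((20, d))
    coeffs = rng.standard_normal((n, 20)) * (0.5 ** np.arange(20))
    grads = coeffs @ basis + 1e-3 * rng.standard_normal((n, d))
    for k in (1, 5, 10, 20):
        print(f"top-{k} mass: {top_k_gradient_mass(grads, k):.3f}")
```

If a handful of components capture most of the squared gradient norm, the fine-tuning loss behaves, locally, like a restricted Lipschitz objective with rapidly decaying coefficients, matching the behavior described in the abstract.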