Protecting privacy during learning while maintaining model performance has become increasingly critical in many applications that involve sensitive data. A popular private learning framework is differentially private learning, which consists of many privatized gradient iterations performed by clipping and noising the gradients. Under the privacy constraint, it has been shown that dynamic policies can improve the final-iterate loss, namely the quality of the published model. In this talk, we will introduce these dynamic techniques for the learning rate, batch size, noise magnitude, and gradient clipping. We also discuss how dynamic policies change the convergence bounds, which in turn provides insight into the impact of dynamic methods.
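As a rough illustration of the privatized iterations described above, the following is a minimal sketch of one DP-SGD-style step with per-example clipping and Gaussian noising, plus a simple dynamic schedule. The function names, the linear decay schedule, and all parameter values are illustrative assumptions, not part of the talk itself.

```python
import numpy as np


def private_step(params, per_example_grads, lr, clip_norm, noise_mult, rng):
    """One privatized gradient step: clip each per-example gradient,
    average, add Gaussian noise scaled to the clipping norm, then update.
    (Illustrative sketch; not the speaker's exact algorithm.)"""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation follows the standard sigma * C / batch_size scaling.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)


def linear_decay(start, end, step, total_steps):
    """A hypothetical dynamic policy: linearly interpolate from start to end."""
    frac = min(step / max(total_steps - 1, 1), 1.0)
    return start + frac * (end - start)
```

A dynamic policy would then be applied by recomputing, e.g., `lr` and `clip_norm` with `linear_decay` at every iteration before calling `private_step`, rather than holding them fixed throughout training.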