Bayesian learning via Stochastic Gradient Langevin Dynamics (SGLD) has been suggested as a method for differentially private learning. While previous research provides differential privacy bounds for SGLD at the initial steps of the algorithm or when close to convergence, the question of what differential privacy guarantees can be made in between remains open. This interim region is essential, especially for Bayesian neural networks, since convergence to the posterior is hard to guarantee. This paper shows that using SGLD might result in unbounded privacy loss in this interim region, even when sampling from the posterior is as differentially private as desired.
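For context, a minimal sketch of the SGLD update in standard notation (the notation here is illustrative, not necessarily that of the paper body): given a dataset of size $N$, a minibatch $\{x_{t_1}, \dots, x_{t_n}\}$ of size $n$, and a step size $\epsilon_t$, each iterate adds a scaled stochastic gradient of the log-posterior plus Gaussian noise,

$$\theta_{t+1} = \theta_t + \frac{\epsilon_t}{2}\left(\nabla_\theta \log p(\theta_t) + \frac{N}{n}\sum_{i=1}^{n} \nabla_\theta \log p(x_{t_i} \mid \theta_t)\right) + \eta_t, \qquad \eta_t \sim \mathcal{N}(0, \epsilon_t I).$$

The injected noise $\eta_t$ is what makes per-step privacy analyses possible; the question raised above concerns how privacy loss accumulates across the iterates between initialization and convergence.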