In this paper, we consider the framework of privacy amplification via iteration, which was originally proposed by Feldman et al. and subsequently simplified by Asoodeh et al. through an analysis based on the contraction coefficient. This line of work studies the privacy guarantees of the projected noisy stochastic gradient descent (PNSGD) algorithm when the intermediate updates are hidden. A limitation of the existing literature is that only early-stopped PNSGD has been studied, while no result has been proved for the more widely used PNSGD applied to a shuffled dataset. Moreover, no scheme has yet been proposed for decreasing the injected noise when new data are received in an online fashion. In this work, we first prove a privacy guarantee for shuffled PNSGD, analyzed asymptotically in the regime where the noise is fixed for each sample size $n$ but is reduced at a predetermined rate as $n$ increases, so that the privacy loss converges. We then analyze the online setting and provide a faster-decaying scheme for the magnitude of the injected noise that still guarantees the convergence of the privacy loss.
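To make the setting concrete, the following is a minimal sketch, under our own assumptions rather than the paper's exact construction, of one pass of PNSGD over a shuffled dataset. The names `grad_fn`, `eta`, `sigma`, and the projection radius are illustrative choices; only the final iterate is released, matching the hidden-intermediate-updates setting the abstract describes.

```python
import numpy as np

def shuffled_pnsgd(data, grad_fn, w0, eta, sigma, radius, epochs=1, rng=None):
    """Illustrative projected noisy SGD (PNSGD) on a shuffled dataset.

    Intermediate iterates stay hidden; only the final iterate is
    returned, which is the setting analyzed by privacy amplification
    via iteration. All parameter names here are hypothetical.
    """
    rng = rng or np.random.default_rng()
    w = w0.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(data)):  # shuffle the sample order
            g = grad_fn(w, data[i])           # per-sample gradient
            # noisy gradient step with Gaussian noise of scale sigma
            w = w - eta * (g + sigma * rng.standard_normal(w.shape))
            # project back onto an L2 ball of the given radius
            norm = np.linalg.norm(w)
            if norm > radius:
                w = w * (radius / norm)
    return w  # only the last iterate is published
```

In this sketch, fixing `sigma` for a given sample size $n$ and lowering it as $n$ grows corresponds to the predetermined noise-decay schedule discussed above; the online setting would additionally shrink `sigma` as new samples arrive.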