Classical differentially private DP-SGD implements individual clipping with random subsampling, which forces a mini-batch SGD approach. We provide a general differentially private algorithmic framework that goes beyond DP-SGD and allows any first-order optimizer (e.g., classical SGD and momentum-based SGD approaches) in combination with batch clipping, which clips an aggregate of computed gradients rather than summing individually clipped gradients (as is done in individual clipping). The framework also admits sampling techniques beyond random subsampling, such as shuffling. Our DP analysis follows the $f$-DP approach and introduces a new proof technique based on a slightly {\em stronger} adversarial model, which allows us to derive simple closed-form expressions and to also analyse group privacy. In particular, for $E$ epochs of work and groups of size $g$, we show a $\sqrt{gE}$ DP dependency for batch clipping with shuffling.
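To make the contrast between the two clipping strategies concrete, here is a minimal sketch in NumPy. It is illustrative only, not the paper's implementation: the helper names (\texttt{clip}, \texttt{individual\_clipping\_step}, \texttt{batch\_clipping\_step}) and the Gaussian noise scale $\sigma C$ are assumptions for the sake of a self-contained example.

\begin{verbatim}
import numpy as np

def clip(v, C):
    """Scale v down so its L2 norm is at most the clipping bound C."""
    norm = np.linalg.norm(v)
    return v * min(1.0, C / norm) if norm > 0 else v

def individual_clipping_step(per_example_grads, C, sigma, rng):
    """DP-SGD style: clip each per-example gradient separately,
    sum the clipped gradients, then add Gaussian noise."""
    total = sum(clip(g, C) for g in per_example_grads)
    noise = rng.normal(0.0, sigma * C, size=total.shape)
    return (total + noise) / len(per_example_grads)

def batch_clipping_step(per_example_grads, C, sigma, rng):
    """Batch clipping: aggregate (here, average) the raw gradients
    first, clip the aggregate once, then add Gaussian noise.
    The aggregate could come from any first-order update rule."""
    aggregate = sum(per_example_grads) / len(per_example_grads)
    noise = rng.normal(0.0, sigma * C, size=aggregate.shape)
    return clip(aggregate, C) + noise

# Usage: compare the two noisy updates on toy gradients.
rng = np.random.default_rng(0)
grads = [rng.normal(size=5) for _ in range(8)]
print(individual_clipping_step(grads, C=1.0, sigma=1.0, rng=rng))
print(batch_clipping_step(grads, C=1.0, sigma=1.0, rng=rng))
\end{verbatim}

Because batch clipping bounds the sensitivity of the single aggregated update rather than of each per-example gradient, it does not require per-example gradient computation, which is what lets the framework accommodate optimizers beyond mini-batch SGD.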