We consider a sequential setting in which a single dataset of individuals is used to perform adaptively-chosen analyses, while ensuring that the differential privacy loss of each participant does not exceed a pre-specified privacy budget. The standard approach to this problem relies on bounding a worst-case estimate of the privacy loss over all individuals and all possible values of their data, for every single analysis. Yet, in many scenarios this approach is overly conservative, especially for "typical" data points which incur little privacy loss by participation in most of the analyses. In this work, we give a method for tighter privacy loss accounting based on the value of a personalized privacy loss estimate for each individual in each analysis. To implement the accounting method, we design a filter for R\'enyi differential privacy. A filter is a tool that ensures that the privacy parameter of a composed sequence of algorithms with adaptively-chosen privacy parameters does not exceed a pre-specified budget. Our filter is simpler and tighter than the known filter for $(\epsilon,\delta)$-differential privacy by Rogers et al. We apply our results to the analysis of noisy gradient descent and show that personalized accounting can be practical, easy to implement, and can only make the privacy-utility tradeoff tighter.
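To make the filter idea concrete, the following is a minimal sketch of personalized R\'enyi accounting for noisy gradient descent, assuming the standard order-$\alpha$ R\'enyi divergence bound $\alpha \Delta^2 / (2\sigma^2)$ for the Gaussian mechanism, with the worst-case sensitivity $\Delta$ replaced by each individual's clipped gradient norm. This is an illustration of the mechanism described above, not the paper's exact algorithm; the function names (`step_rdp_loss`, `noisy_gd_with_filter`), the least-squares objective, and the constants are our own choices.

```python
import numpy as np

ALPHA = 2.0    # Renyi order (illustrative choice, not from the paper)
BUDGET = 1.0   # per-individual Renyi privacy budget (illustrative)


def step_rdp_loss(grad_norm, sigma, alpha=ALPHA):
    # Order-alpha Renyi divergence of one Gaussian-mechanism step,
    # using the individual's own (clipped) gradient norm in place of
    # the worst-case sensitivity.
    return alpha * grad_norm ** 2 / (2.0 * sigma ** 2)


def noisy_gd_with_filter(X, y, sigma=1.0, clip=1.0, lr=0.1, steps=100):
    # Noisy gradient descent on a least-squares loss with a
    # per-individual filter: a data point participates in a step only
    # if its accumulated personalized loss would stay within BUDGET.
    n, d = X.shape
    theta = np.zeros(d)
    spent = np.zeros(n)  # personalized Renyi loss accumulated so far

    for _ in range(steps):
        grads = (X @ theta - y)[:, None] * X           # per-example gradients
        norms = np.linalg.norm(grads, axis=1)
        grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))[:, None]
        norms = np.minimum(norms, clip)                # norms after clipping

        losses = step_rdp_loss(norms, sigma)
        active = spent + losses <= BUDGET              # the filter check
        spent[active] += losses[active]

        noisy_grad = grads[active].sum(axis=0) + np.random.normal(0.0, sigma, d)
        theta -= lr * noisy_grad / n
    return theta
```

The key design point, per the filter guarantee above, is that participation is checked before each step rather than after: an individual is excluded from any analysis whose personalized loss would push their accumulated R\'enyi loss past the pre-specified budget, which is what makes the adaptively-chosen per-step losses safe to compose.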