We study the accuracy of differentially private mechanisms in the continual release model. A continual release mechanism receives a sensitive dataset as a stream of $T$ inputs and produces, after receiving each input, an accurate output on the inputs received so far. In contrast, a batch algorithm receives the data as one batch and produces a single output. We provide the first strong lower bounds on the error of continual release mechanisms. In particular, for two fundamental problems that are widely studied and used in the batch model, we show that the worst-case error of every continual release algorithm is $\tilde \Omega(T^{1/3})$ times larger than that of the best batch algorithm. Previous work shows only a polylogarithmic (in $T$) gap between the worst-case error achievable in these two models; further, for many problems, including the summation of binary attributes, the polylogarithmic gap is tight (Dwork et al., 2010; Chan et al., 2010). Our results show that closely related problems -- specifically, those that require selecting the largest of a set of sums -- are fundamentally harder in the continual release model than in the batch model. Our lower bounds assume only that privacy holds for streams fixed in advance (the "nonadaptive" setting). However, we provide matching upper bounds that hold in a model where privacy is required even for adaptively selected streams. This model may be of independent interest.
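For context, the polylogarithmic upper bound cited above for the summation of binary attributes is achieved by the binary tree mechanism of Dwork et al. (2010) and Chan et al. (2010). Below is a minimal illustrative sketch of that mechanism, not an implementation from this paper: the function name, the even split of the privacy budget across levels, and the 0/1-stream assumption are ours. Each prefix sum is assembled from $O(\log T)$ noisy dyadic partial sums, so every release carries only $\mathrm{polylog}(T)/\varepsilon$ additive error.

```python
import math
import numpy as np

def binary_mechanism(stream, epsilon, rng=None):
    """Continually release all T prefix sums of a 0/1 stream via the
    binary tree mechanism (Dwork et al., 2010; Chan et al., 2010).
    Each output has additive error O(polylog(T) / epsilon)."""
    rng = rng or np.random.default_rng()
    T = len(stream)
    L = max(1, math.ceil(math.log2(T + 1)))  # number of dyadic levels
    scale = L / epsilon          # each item touches <= L dyadic sums
    alpha = [0.0] * (L + 1)      # exact dyadic partial sums per level
    alpha_hat = [0.0] * (L + 1)  # their Laplace-noised counterparts
    outputs = []
    for t in range(1, T + 1):
        i = (t & -t).bit_length() - 1  # index of the lowest set bit of t
        # fold completed lower-level sums and the new item into level i
        alpha[i] = sum(alpha[j] for j in range(i)) + stream[t - 1]
        for j in range(i):
            alpha[j] = alpha_hat[j] = 0.0
        alpha_hat[i] = alpha[i] + rng.laplace(scale=scale)
        # the prefix sum at time t combines the noisy dyadic sums
        # selected by the binary representation of t
        outputs.append(sum(alpha_hat[j] for j in range(L + 1) if (t >> j) & 1))
    return outputs

# Example: continually count 100 random coin flips with epsilon = 1
flips = np.random.default_rng(0).integers(0, 2, 100).tolist()
print(binary_mechanism(flips, 1.0)[-1])  # noisy count of all 100 flips
```

The contrast with the lower bound in this abstract is the point: for plain binary summation the dyadic decomposition keeps the continual-release overhead polylogarithmic, whereas for problems that require selecting the largest of a set of sums no such decomposition can avoid an $\tilde \Omega(T^{1/3})$ blow-up.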