Differential privacy (DP) offers strong theoretical privacy guarantees, but implementations of DP mechanisms may be vulnerable to side-channel attacks, such as timing attacks. When sampling methods such as MCMC or rejection sampling are used to implement a mechanism, the runtime can leak privacy. We characterize the additional privacy cost due to the runtime of a rejection sampler in terms of both $(\epsilon,\delta)$-DP and $f$-DP. We also show that, unless the acceptance probability is constant across databases, the runtime of a rejection sampler does not satisfy $\epsilon$-DP for any $\epsilon$. We show that there is a similar breakdown in privacy with adaptive rejection samplers. We propose three modifications to the rejection sampling algorithm, with varying assumptions, that protect against timing attacks by making the runtime independent of the data. The modification with the weakest assumptions yields an approximate sampler, introducing a small increase in the privacy cost, whereas the other modifications give perfect samplers. We also use our techniques to develop an adaptive rejection sampler for log-H\"{o}lder densities, which likewise has data-independent runtime. We give several examples of DP mechanisms that fit the assumptions of our methods and can thus be implemented using our samplers.
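As a brief illustration of the $\epsilon$-DP claim in the simplest (non-adaptive) case: the runtime $T$ of a standard rejection sampler, the number of proposals drawn until acceptance, is geometric with the data-dependent acceptance probability $p_D$ on database $D$. For neighbouring databases $D$ and $D'$ with $p_D < p_{D'}$,
\[
\frac{\Pr(T > t \mid D)}{\Pr(T > t \mid D')} = \left(\frac{1-p_D}{1-p_{D'}}\right)^{t} \longrightarrow \infty \quad \text{as } t \to \infty,
\]
so no finite $\epsilon$ bounds the likelihood ratio of the runtime, and $\epsilon$-DP fails.

The following Python sketch illustrates the timing channel and one possible data-independent-runtime fix under simplifying assumptions: it contrasts a standard rejection sampler, whose loop count depends on the acceptance probability, with a truncated variant that always performs a fixed number $N$ of proposal draws and is therefore only approximate. The densities, the fallback rule, and the budget $N$ are hypothetical placeholders, only the iteration count (not lower-level constant-time behaviour) is addressed, and this is not the paper's exact construction.

\begin{verbatim}
import numpy as np

def rejection_sampler(target_pdf, proposal_pdf, draw_proposal, M, rng):
    # Standard rejection sampler: the loop count is geometric with the
    # data-dependent acceptance probability, so the runtime leaks information.
    while True:
        x = draw_proposal(rng)
        if rng.uniform() * M * proposal_pdf(x) <= target_pdf(x):
            return x

def truncated_rejection_sampler(target_pdf, proposal_pdf, draw_proposal, M, N, rng):
    # Always performs exactly N proposal draws, so the iteration count does not
    # depend on the data; returns the first accepted draw, or a plain proposal
    # draw as a fallback if none is accepted (hence an approximate sampler).
    result, accepted = None, False
    for _ in range(N):
        x = draw_proposal(rng)
        hit = rng.uniform() * M * proposal_pdf(x) <= target_pdf(x)
        if hit and not accepted:
            result, accepted = x, True
    return result if accepted else draw_proposal(rng)

# Illustrative usage: sample from Beta(2, 1) with a Uniform(0, 1) proposal.
rng = np.random.default_rng(0)
beta21_pdf = lambda x: 2.0 * x          # density of Beta(2, 1) on [0, 1]
uniform_pdf = lambda x: 1.0             # density of Uniform(0, 1)
draw_uniform = lambda r: r.uniform()
y = truncated_rejection_sampler(beta21_pdf, uniform_pdf, draw_uniform,
                                M=2.0, N=64, rng=rng)
\end{verbatim}

The truncation is what makes the sketch approximate: with probability $(1-p_D)^N$ no draw is accepted and the fallback output deviates from the target distribution, which is analogous to the small additional privacy cost attributed above to the weakest-assumption modification.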