This paper establishes an equivalence between Local Differential Privacy (LDP) and a global limit on the knowledge that can be learned about an object. However, an output of an LDP query does not necessarily convey an amount of knowledge equal to the upper bound of this learning limit. Since privacy loss should be proportional to the knowledge actually gained, the traditional approach of measuring privacy loss by the DP guarantee can overestimate the actual privacy loss. This is especially problematic for privacy accounting in LDP, where privacy loss is computed by summing the DP guarantees of the composed queries (basic composition). To address this issue, this paper introduces the concept of realized privacy loss, which measures the knowledge the analyst actually gains after a query, as a more accurate measure of privacy loss. Realized privacy loss is then integrated into the privacy accounting of fully adaptive composition, where an adversary selects each query based on previous results. The Bayesian Privacy Filter is implemented to ensure that the realized privacy loss of the composed queries eventually reaches the DP guarantee, allowing full utilization of the privacy budget assigned to a queried object. Furthermore, this paper introduces the Bayesian Privacy Odometer to measure realized privacy loss during fully adaptive composition. Experimental evaluations assess the efficiency of the Bayesian Privacy Filter, demonstrating that the corresponding composition can accept arbitrarily many more queries than basic composition when the composed queries have sufficiently small DP guarantees. Conversely, the experiments also show that, when estimating the histogram of a group of objects under the same privacy budget, an analyst should prefer a single randomized response over a composition managed by the Bayesian Privacy Filter.
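To make the accounting difference concrete, the following is a minimal, hypothetical sketch of a privacy filter that charges each query its realized privacy loss instead of its worst-case DP guarantee. The class name, the `try_query` interface, and the way `realized_loss` is supplied are assumptions for illustration; the paper's actual Bayesian Privacy Filter derives realized loss from the analyst's Bayesian posterior, which is abstracted away here.

```python
class PrivacyFilter:
    """Illustrative filter: admit a query only if its worst-case DP
    guarantee cannot push the accrued realized loss past the budget.
    This is a sketch of the accounting idea, not the paper's algorithm."""

    def __init__(self, budget: float):
        self.budget = budget     # total epsilon assigned to the queried object
        self.realized = 0.0      # realized privacy loss accrued so far

    def try_query(self, epsilon: float, realized_loss: float) -> bool:
        # Worst case: the new query reveals its full DP guarantee.
        if self.realized + epsilon > self.budget:
            return False         # reject; the budget could be overrun
        # Charge only the realized loss, which never exceeds the guarantee.
        self.realized += min(realized_loss, epsilon)
        return True


# Basic composition with budget 1.0 admits only 1.0 / 0.1 = 10 queries
# of guarantee 0.1; charging a smaller realized loss admits many more.
f = PrivacyFilter(budget=1.0)
accepted = 0
for _ in range(100):
    if f.try_query(epsilon=0.1, realized_loss=0.02):
        accepted += 1
```

Under basic composition the loop above would stop at 10 accepted queries; because each query here realizes only a fraction of its guarantee, the filter admits several times as many before the budget is exhausted.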