Computational differential privacy (CDP), proposed by Beimel, Nissim, and Omri (CRYPTO 2008) and Mironov, Pandey, Reingold, and Vadhan (CRYPTO 2009), is a natural relaxation of the standard notion of (statistical) differential privacy (SDP). In contrast to SDP, CDP only requires privacy guarantees to hold against computationally bounded adversaries rather than computationally unbounded statistical adversaries. Despite the question being raised explicitly in several works (e.g., Bun, Chen, and Vadhan, TCC 2016), it has remained tantalizingly open whether there is any task achievable under the CDP notion but not under the SDP notion; not even a candidate for such a task is known. Indeed, it is even unclear what the truth could be! In this work, we give the first construction of a task achievable with the CDP notion but not the SDP notion, under the following strong but plausible cryptographic assumptions: (1) Non-Interactive Witness Indistinguishable Proofs, (2) Laconic Collision-Resistant Keyless Hash Functions, and (3) Differing-Inputs Obfuscation for Public-Coin Samplers. In particular, we construct a task for which there exists an $\varepsilon$-CDP mechanism with $\varepsilon = O(1)$ achieving $1-o(1)$ utility, but any $(\varepsilon, \delta)$-SDP mechanism, including computationally unbounded ones, that achieves constant utility must use either a super-constant $\varepsilon$ or an inverse-polynomially large $\delta$. To prove this, we introduce a new approach for showing that a mechanism satisfies CDP: we first show that the mechanism is "private" against a certain class of decision-tree adversaries, and then use cryptographic constructions to "lift" this into privacy against computationally bounded adversaries. We believe this approach could be useful for devising further tasks separating CDP from SDP.
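For context, here is a minimal sketch of the two guarantees being contrasted, following the standard $(\varepsilon,\delta)$-SDP definition and the indistinguishability-based (IND-CDP) formulation of Mironov, Pandey, Reingold, and Vadhan (CRYPTO 2009); the notation is ours, not necessarily the paper's. An $(\varepsilon,\delta)$-SDP mechanism $M$ must satisfy, for all neighboring datasets $x, x'$ and all (even computationally unbounded) tests, i.e., all measurable output sets $S$,
\[
\Pr[M(x) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(x') \in S] + \delta,
\]
whereas an $\varepsilon$-CDP mechanism $M_\kappa$ (with security parameter $\kappa$) need only satisfy the analogous bound against every probabilistic polynomial-time adversary $A$:
\[
\Pr[A(M_\kappa(x)) = 1] \;\le\; e^{\varepsilon}\,\Pr[A(M_\kappa(x')) = 1] + \mathrm{negl}(\kappa).
\]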