The exponential increase in the amount of available data makes using it without violating users' privacy one of the fundamental problems of computer science. This question has been investigated thoroughly under the framework of differential privacy. However, most of the literature has not focused on settings where the amount of data is so large that we cannot even compute the exact answer in the non-private setting (such as the streaming or sublinear-time settings). This often makes differential privacy infeasible to use in practice. In this paper, we show a general approach for making Monte Carlo randomized approximation algorithms differentially private. We only need to assume that the error $R$ of the approximation algorithm is sufficiently concentrated around $0$ (e.g.\ that $\mathbb{E}[|R|]$ is bounded) and that the function being approximated has small global sensitivity $\Delta$. Specifically, if we have a randomized approximation algorithm with sufficiently concentrated error and time/space/query complexity $T(n,\rho)$, where $\rho$ is an accuracy parameter, we can, generally speaking, obtain an algorithm with the same accuracy and complexity $T(n,\Theta(\epsilon \rho))$ that is $\epsilon$-differentially private.
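To make the additive-noise idea alluded to above concrete, here is a minimal Python sketch: run the approximation algorithm, then add Laplace noise whose scale covers both the global sensitivity $\Delta$ and the (concentrated) approximation error. The wrapper `privatize`, the toy estimator `subsample_sum`, and the calibration $(\Delta + 2\,\mathbb{E}[|R|])/\epsilon$ are illustrative assumptions, not the paper's exact construction; the true privacy guarantee depends on the concentration analysis developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(approx_alg, data, epsilon, sensitivity, expected_abs_error):
    """Hedged sketch: privatize a Monte Carlo approximation algorithm
    by adding Laplace noise scaled to cover both the global sensitivity
    Delta of the exact function and a bound on E[|R|], the expected
    absolute error of the estimator.

    The scale (sensitivity + 2 * expected_abs_error) / epsilon is an
    illustrative choice, not the paper's exact calibration.
    """
    estimate = approx_alg(data)
    scale = (sensitivity + 2.0 * expected_abs_error) / epsilon
    return estimate + rng.laplace(loc=0.0, scale=scale)

def subsample_sum(data, sample_frac=0.1):
    """Toy Monte Carlo estimator: estimate sum(data) from a subsample."""
    m = max(1, int(sample_frac * len(data)))
    idx = rng.choice(len(data), size=m, replace=False)
    return data[idx].sum() * (len(data) / m)

# Toy usage: values in [0, 1], so the exact sum has global sensitivity 1.
data = rng.random(10_000)
print(privatize(subsample_sum, data, epsilon=1.0,
                sensitivity=1.0,        # one record changes the sum by <= 1
                expected_abs_error=100.0))  # rough bound on E[|R|] here
```

Note the trade-off the abstract describes: to keep the same final accuracy $\rho$ after noising, the underlying estimator must be run at the tighter accuracy $\Theta(\epsilon\rho)$, which is what yields the complexity $T(n,\Theta(\epsilon\rho))$.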