This paper studies how to approximate pufferfish privacy when the adversary's prior belief about the published data is Gaussian distributed. Using Monge's optimal transport plan, we show that $(\epsilon, \delta)$-pufferfish privacy is attained if the additive Laplace noise is calibrated to the differences in mean and variance of the Gaussian distributions conditioned on every discriminative secret pair. A typical application is the private release of the summation (or average) query, for which we derive sufficient conditions for approximating $\epsilon$-statistical indistinguishability of an individual's sensitive data. The result is then extended to arbitrary prior beliefs fitted by Gaussian mixture models (GMMs): calibrating Laplace noise to a convex combination of the differences in mean and variance between Gaussian components attains $(\epsilon,\delta)$-pufferfish privacy.
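The following is a minimal sketch, not the paper's exact calibration, illustrating the idea of scaling Laplace noise by the gap in mean and standard deviation between the two conditional Gaussian beliefs (and, for the GMM extension, by a convex combination of per-component gaps). The function names, the weighting constant `c`, and the specific formula `(|Δμ| + c·|Δσ|)/ε` are illustrative assumptions; the precise sufficient conditions are given in the paper.

```python
import numpy as np


def pufferfish_laplace_release(true_value, mu_pair, sigma_pair, eps, c=1.0, rng=None):
    """Release a query answer with Laplace noise whose scale grows with the
    mean gap and the standard-deviation gap of the two Gaussian beliefs
    conditioned on a discriminative secret pair (illustrative calibration)."""
    rng = np.random.default_rng() if rng is None else rng
    delta_mu = abs(mu_pair[0] - mu_pair[1])
    delta_sigma = abs(sigma_pair[0] - sigma_pair[1])
    # Hypothetical noise scale: larger gaps in the conditional beliefs
    # require more noise for the same privacy budget eps.
    scale = (delta_mu + c * delta_sigma) / eps
    return true_value + rng.laplace(loc=0.0, scale=scale)


def gmm_calibrated_scale(weights, mus_i, mus_j, sigmas_i, sigmas_j, eps, c=1.0):
    """GMM extension sketch: combine per-component mean/variance gaps with the
    mixture weights (a convex combination) to obtain a single Laplace scale."""
    gaps = [abs(mi - mj) + c * abs(si - sj)
            for mi, mj, si, sj in zip(mus_i, mus_j, sigmas_i, sigmas_j)]
    return float(np.dot(weights, gaps)) / eps
```

For instance, releasing a sum query whose conditional beliefs under the two secrets are $\mathcal{N}(10, 2^2)$ and $\mathcal{N}(12, 3^2)$ would, under this sketch, use scale $(2 + c)/\epsilon$.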