We are interested in privatizing an approximate posterior inference algorithm called Expectation Propagation (EP). EP approximates the posterior by iteratively refining approximations to the local likelihoods, and is known to provide better posterior uncertainties than those obtained by variational inference (VI). However, EP requires substantial memory to maintain the local approximate factors associated with each datapoint in the training data. To overcome this challenge, stochastic expectation propagation (SEP) maintains a single shared local factor that captures the average effect of each likelihood term on the posterior, and refines it in a manner analogous to EP. In terms of privacy, SEP is more tractable than EP: at each refinement step the remaining factors are fixed and, unlike in EP, do not depend on other datapoints, which makes the sensitivity analysis straightforward. We provide a theoretical analysis of the privacy-accuracy trade-off in the posterior estimates under our method, called differentially private stochastic expectation propagation (DP-SEP). Furthermore, we demonstrate the performance of our DP-SEP algorithm on both synthetic and real-world datasets in terms of the quality of posterior estimates at different levels of guaranteed privacy.
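To make the SEP refinement concrete, the following is a minimal sketch for a conjugate one-dimensional Gaussian model, where the tilted-distribution moments are available in closed form. The variable names, the clipping bound, and the noise injection step are illustrative assumptions, not the paper's implementation; clipping bounds the sensitivity of each site update, and setting `noise_scale > 0` mimics a DP-SEP-style noised update.

```python
import numpy as np

# Illustrative SEP sketch for a conjugate 1-D Gaussian model.
# All names and the clip/noise step are assumptions for exposition,
# not the authors' DP-SEP code.
rng = np.random.default_rng(0)

N, sigma2 = 200, 0.25            # dataset size, known likelihood variance
x = rng.normal(1.5, np.sqrt(sigma2), size=N)

# Natural parameters (precision r, precision-times-mean m) of the N(0, 1) prior
r0, m0 = 1.0, 0.0
# Single shared site factor: q(theta) is proportional to prior * f(theta)^N
r_f, m_f = 0.0, 0.0
clip, noise_scale = 20.0, 0.0    # noise_scale > 0 gives a DP-SEP-style update

for _ in range(50):              # passes over the data
    for xn in x:
        # Cavity: remove one copy of the shared site from q
        r_cav = r0 + (N - 1) * r_f
        m_cav = m0 + (N - 1) * m_f
        # Tilted distribution is Gaussian here, so moment matching is exact;
        # the implied site is the likelihood's natural-parameter contribution
        r_site, m_site = 1.0 / sigma2, xn / sigma2
        # Clip (bounds per-datapoint sensitivity), optionally add Gaussian noise
        r_site = np.clip(r_site, -clip, clip) + noise_scale * rng.normal()
        m_site = np.clip(m_site, -clip, clip) + noise_scale * rng.normal()
        # Damped update of the single shared factor with step size 1/N
        r_f += (r_site - r_f) / N
        m_f += (m_site - m_f) / N

# Final approximate posterior vs. the exact conjugate posterior
r_q, m_q = r0 + N * r_f, m0 + N * m_f
post_mean, post_var = m_q / r_q, 1.0 / r_q
r_ex, m_ex = r0 + N / sigma2, m0 + x.sum() / sigma2
```

With `noise_scale = 0` the shared-site updates converge so that `post_mean` and `post_var` closely match the exact conjugate posterior; increasing `noise_scale` trades posterior accuracy for privacy, which is the trade-off the abstract analyzes.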