Conditional sampling is a fundamental task in Bayesian statistics and generative modeling. Consider the problem of sampling from the posterior distribution $P_{X|Y=y^*}$ for some observation $y^*$, where the likelihood $P_{Y|X}$ is known, and we are given $n$ i.i.d. samples $D=\{X_i\}_{i=1}^n$ drawn from an unknown prior distribution $\pi_X$. Suppose that $f(\hat{\pi}_{X^n})$ is the distribution of a posterior sample generated by an algorithm (e.g. a conditional generative model or Bayes' rule) when $\hat{\pi}_{X^n}$ is the empirical distribution of the training data. Although averaging over the randomness of the training data $D$ gives $\mathbb{E}_D\left(\hat{\pi}_{X^n}\right)= \pi_X$, we do not have $\mathbb{E}_D\left\{f(\hat{\pi}_{X^n})\right\}= f(\pi_X)$ due to the nonlinearity of $f$, which leads to a bias. In this paper we propose a black-box debiasing scheme that improves the accuracy of this naive plug-in approach. For any integer $k$ and under boundedness of the likelihood and smoothness of $f$, we generate samples $\hat{X}^{(1)},\dots,\hat{X}^{(k)}$ and weights $w_1,\dots,w_k$ such that $\sum_{i=1}^kw_iP_{\hat{X}^{(i)}}$ is a $k$-th order approximation of $f(\pi_X)$, where the generation process treats $f$ as a black box. Our generation process achieves higher accuracy when averaged over the randomness of the training data, without degrading the variance, which can be interpreted as improving memorization without compromising generalization in generative models.
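The abstract does not spell out how the samples $\hat{X}^{(i)}$ and signed weights $w_i$ are constructed. As a purely illustrative sketch (not the paper's actual scheme), one way such a black-box, $k$-th order correction could look is a generalized-jackknife / Richardson-extrapolation combination over subsample sizes, assuming the plug-in bias admits an expansion in inverse powers of the training-set size. Everything below, including the function names `debiased_conditional_samples` and `black_box_sampler` and the choice of subsample sizes, is an assumption made for illustration.

```python
import numpy as np

def debiased_conditional_samples(data, y_star, black_box_sampler, k, seed=None):
    """Hypothetical sketch of a black-box debiasing scheme (not from the paper).

    Assumes the plug-in bias expands in powers of 1/m, where m is the size of
    the training set fed to the sampler, so a generalized-jackknife (Richardson
    extrapolation) combination over k subsample sizes cancels the first k-1
    bias terms.  `black_box_sampler(subset, y_star, rng)` is a user-supplied
    routine (e.g. a conditional generative model, or Bayes' rule applied to the
    empirical prior) that returns one posterior sample; it is only ever called
    as a black box.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)

    # Distinct subsample sizes m_1 > m_2 > ... > m_k (here: n, n//2, ..., n//k).
    sizes = np.array([n // (i + 1) for i in range(k)], dtype=float)

    # Signed weights w solving: sum_i w_i = 1 and sum_i w_i / m_i^j = 0 for
    # j = 1, ..., k-1, so bias terms up to order k cancel in the weighted mixture.
    A = np.vstack([sizes ** (-j) for j in range(k)])  # row j holds 1 / m_i^j
    b = np.zeros(k)
    b[0] = 1.0
    weights = np.linalg.solve(A, b)

    # One posterior draw per subsample size, each from a sampler that only
    # sees the corresponding subsampled empirical distribution.
    samples = []
    for m in sizes.astype(int):
        subset = data[rng.choice(n, size=m, replace=False)]
        samples.append(black_box_sampler(subset, y_star, rng))
    return samples, weights
```

Under these assumptions the returned pair defines the signed mixture $\sum_{i=1}^k w_i P_{\hat{X}^{(i)}}$; note that some weights are negative, so the combination is an approximation of $f(\pi_X)$ as a signed measure rather than a single resampled distribution.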