Recently, particle-based variational inference (ParVI) methods have gained interest because they avoid the arbitrary parametric assumptions that are common in variational inference. However, many ParVI approaches do not allow arbitrary sampling from the posterior, and the few that do suffer from suboptimality. This work proposes a new method for learning to approximately sample from the posterior distribution. We construct a neural sampler that is trained with the functional gradient of the KL divergence between the empirical sampling distribution and the target distribution, assuming the gradient resides within a reproducing kernel Hilbert space. Our generative ParVI (GPVI) approach maintains the asymptotic performance of ParVI methods while offering the flexibility of a generative sampler. Through carefully constructed experiments, we show that GPVI outperforms previous generative ParVI methods such as amortized SVGD, and is competitive with ParVI as well as gold-standard approaches like Hamiltonian Monte Carlo for fitting both exactly known and intractable target distributions.
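To make the training scheme concrete, the following is a minimal sketch (not the paper's implementation) of the amortized-SVGD-style idea the abstract describes: a generator maps noise to samples, a kernelized functional gradient of the KL divergence gives a desired move for each sample, and that direction is pushed back through the generator's parameters by the chain rule. The affine generator, RBF kernel, 2-D Gaussian target, bandwidth heuristic, and step sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target (assumed for illustration): N(mu, sigma^2 I) in 2-D.
mu, sigma = np.array([2.0, -1.0]), 0.5

def grad_log_p(x):
    # Score of the target: d/dx log N(x; mu, sigma^2 I).
    return (mu - x) / sigma**2

def svgd_direction(X):
    # Kernelized functional gradient of KL(q || p) with an RBF kernel
    # (the SVGD update direction), using the median bandwidth heuristic.
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d), x_i - x_j
    sq = (diff ** 2).sum(-1)                      # squared distances
    h = np.median(sq) / np.log(len(X) + 1) + 1e-8
    K = np.exp(-sq / h)                           # (n, n)
    # Attractive term: sum_j k(x_j, x_i) * score(x_j); repulsive term:
    # sum_j grad_{x_j} k(x_j, x_i) = sum_j (2/h)(x_i - x_j) k(x_j, x_i).
    return (K @ grad_log_p(X) + (2.0 / h) * (diff * K[..., None]).sum(1)) / len(X)

# Hypothetical affine "neural sampler": x = A z + b with z ~ N(0, I).
A, b = np.eye(2), np.zeros(2)
for _ in range(500):
    Z = rng.standard_normal((128, 2))
    X = Z @ A.T + b
    phi = svgd_direction(X)                       # desired per-sample moves
    # Chain rule through the generator: ascend parameters along phi.
    A += 0.05 * (phi.T @ Z) / len(Z)
    b += 0.05 * phi.mean(0)

X = rng.standard_normal((2000, 2)) @ A.T + b
print(X.mean(0), X.std(0))
```

After training, fresh draws from the generator land near the target mean with roughly the target spread, illustrating the abstract's point that the sampler, unlike plain particle methods, can produce arbitrarily many new posterior samples.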