We present a meta-algorithm for learning a posterior-inference algorithm for restricted probabilistic programs. Our meta-algorithm takes a training set of probabilistic programs that describe models with observations, and attempts to learn an efficient method for inferring the posterior of a similar program. A key feature of our approach is the use of what we call a white-box inference algorithm, which extracts information directly from model descriptions themselves, given as programs. Concretely, our white-box inference algorithm is equipped with multiple neural networks, one for each type of atomic command, and computes an approximate posterior of a given probabilistic program by analysing the individual atomic commands in the program using these networks. The parameters of these networks are then learnt from a training set by our meta-algorithm. We empirically demonstrate that the learnt inference algorithm generalises well to unseen programs, in terms of both interpolation and extrapolation, and report cases where our approach may be preferable to state-of-the-art inference algorithms such as Hamiltonian Monte Carlo (HMC). The overall results show the promise of our approach, as well as its remaining challenges.
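To make the white-box idea concrete, the following is a minimal sketch, not the paper's actual architecture: a program is represented as a sequence of atomic commands, each command type is handled by its own small neural network, and the networks are folded over the program to produce the parameters of an approximate posterior. All names, the command set (`sample`, `observe`), the state dimension, and the Gaussian decoding are illustrative assumptions; in the paper the network parameters would be meta-learnt from a training set of programs, whereas here they are left random.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM = 8  # dimension of the running "analysis state" (an assumption)

def mlp(in_dim, out_dim):
    # One-hidden-layer network with random, UNTRAINED weights.
    # In the described approach these parameters are what the
    # meta-algorithm learns from a training set of programs.
    W1 = rng.normal(0.0, 0.1, (in_dim, 16))
    W2 = rng.normal(0.0, 0.1, (16, out_dim))
    return lambda x: np.tanh(x @ W1) @ W2

# One network per atomic-command type (hypothetical command set):
# "sample" takes the prior's mean and std; "observe" takes the observed value.
nets = {
    "sample":  mlp(STATE_DIM + 2, STATE_DIM),
    "observe": mlp(STATE_DIM + 1, STATE_DIM),
}

def infer(program):
    """Fold the per-command networks over the program's atomic commands,
    then decode the final state into (mean, log_std) of an approximate
    Gaussian posterior for the program's latent variable."""
    h = np.zeros(STATE_DIM)
    for cmd, *args in program:
        inp = np.concatenate([h, np.array(args, dtype=float)])
        h = h + nets[cmd](inp)  # residual update of the analysis state
    return h[0], h[1]

# A tiny program: x ~ Normal(0, 1); observe y = 0.5
program = [("sample", 0.0, 1.0), ("observe", 0.5)]
mean, log_std = infer(program)
```

Because the networks analyse the program text command by command, the same learnt parameters can be applied to any program built from the supported command types, which is what lets the inference algorithm generalise to unseen programs.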