We present a meta-algorithm for learning a posterior-inference algorithm for restricted probabilistic programs. Our meta-algorithm takes a training set of probabilistic programs that describe models with observations, and attempts to learn an efficient method for inferring the posterior of a similar program. A key feature of our approach is the use of what we call a white-box inference algorithm, which extracts information directly from model descriptions themselves, given as programs. Concretely, our white-box inference algorithm is equipped with multiple neural networks, one for each type of atomic command, and computes an approximate posterior of a given probabilistic program by analysing individual atomic commands in the program using these networks. The parameters of the networks are learnt from a training set by our meta-algorithm. We empirically demonstrate that the learnt inference algorithm generalises well to programs that are new in terms of both parameters and model structures, and report cases where our approach achieves greater test-time efficiency than alternative approaches such as Hamiltonian Monte Carlo (HMC). The overall results show the promise as well as the remaining challenges of our approach.
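The white-box idea of dispatching each atomic command to a per-command-type network can be sketched as follows. This is a minimal illustration under assumed details: the approximate posterior is represented as a fixed-size state vector, each "network" is a single tanh layer, and the two command types ("sample", "observe") and all shapes are hypothetical, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, CMD_DIM = 8, 4  # assumed sizes for the posterior state and command features

# One tiny "network" (a single linear layer followed by tanh) per atomic-command
# type; in the actual approach these parameters would be learnt by the meta-algorithm.
nets = {
    cmd_type: {"W": rng.normal(size=(STATE_DIM, STATE_DIM + CMD_DIM)) * 0.1,
               "b": np.zeros(STATE_DIM)}
    for cmd_type in ("sample", "observe")
}

def step(state, cmd_type, cmd_feats, nets):
    """Update the posterior state by applying the network for this command type."""
    net = nets[cmd_type]
    x = np.concatenate([state, cmd_feats])
    return np.tanh(net["W"] @ x + net["b"])

def white_box_inference(program, nets):
    """Fold the per-command networks over the program's atomic commands."""
    state = np.zeros(STATE_DIM)
    for cmd_type, cmd_feats in program:
        state = step(state, cmd_type, cmd_feats, nets)
    return state  # in practice this would be decoded into posterior parameters

# A toy "program": two sample statements followed by one observation, each
# encoded as an (assumed) feature vector of its arguments.
program = [
    ("sample", np.array([1.0, 0.0, 0.5, 0.0])),
    ("sample", np.array([0.0, 1.0, 0.2, 0.0])),
    ("observe", np.array([0.3, 0.3, 0.0, 1.0])),
]
posterior_state = white_box_inference(program, nets)
```

Because the networks are keyed by command type rather than by any whole-program encoding, the same learnt parameters apply to programs of arbitrary length and structure, which is what allows generalisation to new model structures.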