Probabilistic programs provide an expressive representation language for generative models. Given a probabilistic program, we are interested in the task of posterior inference: estimating a latent variable given a set of observed variables. Existing techniques for inference in probabilistic programs often require choosing many hyper-parameters, are computationally expensive, and/or only work for restricted classes of programs. Here we formulate inference as masked language modeling: given a program, we generate a supervised dataset of variables and assignments, and randomly mask a subset of the assignments. We then train a neural network to unmask the random values, defining an approximate posterior distribution. By optimizing a single neural network across a range of programs we amortize the cost of training, yielding a "foundation" posterior able to do zero-shot inference for new programs. The foundation posterior can also be fine-tuned for a particular program and dataset by optimizing a variational inference objective. We show the efficacy of the approach, zero-shot and fine-tuned, on a benchmark of Stan programs.
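To make the masked-modeling formulation concrete, the sketch below is a deliberately simplified illustration under stated assumptions: it uses a single hand-written toy program (z ~ Normal(0, 1); x ~ Normal(z, 1)) rather than a corpus of programs, a small MLP rather than the paper's network, and mean-squared error (so the network predicts a posterior mean rather than a full approximate posterior distribution). All function and variable names are invented for the example.

```python
# Minimal sketch (not the paper's implementation) of inference as masked modeling.
# Toy program:  z ~ Normal(0, 1);  x ~ Normal(z, 1).
# We forward-sample full assignments (z, x), randomly mask one entry per row,
# and train a network to reconstruct the masked value from the rest.
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_program(n):
    """Forward-sample the toy generative program to build a supervised dataset."""
    z = torch.randn(n, 1)            # latent variable
    x = z + torch.randn(n, 1)        # observation: x ~ Normal(z, 1)
    return torch.cat([z, x], dim=1)  # each row is a full assignment (z, x)

# Input: masked values (2) + mask indicators (2); output: reconstructed values (2).
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    vals = sample_program(256)
    mask = torch.zeros_like(vals)
    idx = torch.randint(0, 2, (vals.shape[0],))       # randomly choose which entry to mask
    mask[torch.arange(vals.shape[0]), idx] = 1.0
    inp = torch.cat([vals * (1 - mask), mask], dim=1)  # hide masked values, pass mask flags
    pred = net(inp)
    loss = ((pred - vals) ** 2 * mask).sum() / mask.sum()  # error on masked entries only
    opt.zero_grad(); loss.backward(); opt.step()

# Zero-shot "inference": observe x, mask z, and query the network for the hidden value.
# For this program the exact posterior mean is E[z | x] = x / 2.
x_obs = torch.tensor([[0.0, 2.0]])
mask = torch.tensor([[1.0, 0.0]])
with torch.no_grad():
    z_hat = net(torch.cat([x_obs * (1 - mask), mask], dim=1))[0, 0]
print(f"predicted E[z | x=2] ~= {z_hat:.2f} (exact: 1.00)")
```

In the abstract's formulation the same recipe is applied across many programs at once, and the unmasking network parameterizes a distribution over the hidden values (which can then be fine-tuned with a variational objective); the point-estimate version above is only meant to show the data-generation and masking loop.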