Generative flow networks (GFlowNets) are amortized variational inference algorithms that are trained to sample from unnormalized target distributions over compositional objects. A key limitation of GFlowNets to date has been that they are restricted to discrete spaces. We present a theory of generalized GFlowNets, which encompasses both existing discrete GFlowNets and GFlowNets with continuous or hybrid state spaces, and perform experiments with two goals in mind. First, we illustrate critical points of the theory and the importance of various assumptions. Second, we empirically demonstrate how observations about discrete GFlowNets transfer to the continuous case and show strong results, compared to non-GFlowNet baselines, on several previously studied tasks. This work greatly widens the perspectives for applying GFlowNets to probabilistic inference and various modeling settings.
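To make the first sentence concrete, the following is a minimal PyTorch sketch (not taken from this paper) of how a GFlowNet with a continuous state space can be trained: it uses the trajectory balance objective, one common GFlowNet training loss, with Gaussian forward and backward policies over a one-dimensional state, a fixed trajectory length, and a hypothetical two-mode unnormalized reward. All names (`GaussianPolicy`, `trajectory_balance_loss`, `reward`) are illustrative assumptions, not artifacts of the paper.

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Maps a 1-D state to the mean and log-std of a Gaussian over the next state."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 2))

    def forward(self, s):
        mu, log_std = self.net(s).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_std.exp())

def trajectory_balance_loss(policy_f, policy_b, log_z, s0, n_steps, reward_fn):
    # Squared mismatch between log Z + sum_t log P_F(s_{t+1} | s_t)
    # and log R(x) + sum_t log P_B(s_t | s_{t+1}) along one sampled trajectory
    # (trajectories here have a fixed length for simplicity).
    s, log_pf, log_pb = s0, 0.0, 0.0
    for _ in range(n_steps):
        dist_f = policy_f(s)
        s_next = dist_f.rsample()                      # continuous "action" = next state
        log_pf = log_pf + dist_f.log_prob(s_next).sum()
        log_pb = log_pb + policy_b(s_next).log_prob(s).sum()
        s = s_next
    log_reward = torch.log(reward_fn(s).sum() + 1e-20)
    return (log_z + log_pf - log_reward - log_pb).pow(2).mean()

# Fit the sampler to a hypothetical unnormalized two-mode reward on the real line.
policy_f, policy_b = GaussianPolicy(), GaussianPolicy()
log_z = nn.Parameter(torch.zeros(1))                   # learned log-partition estimate
params = [*policy_f.parameters(), *policy_b.parameters(), log_z]
opt = torch.optim.Adam(params, lr=1e-3)
reward = lambda x: torch.exp(-(x - 2.0) ** 2) + torch.exp(-(x + 2.0) ** 2)
for _ in range(200):
    loss = trajectory_balance_loss(policy_f, policy_b, torch.zeros(1, 1) * 0 + log_z * 0 + log_z, torch.zeros(1, 1), 3, reward) if False else \
           trajectory_balance_loss(policy_f, policy_b, log_z, torch.zeros(1, 1), 3, reward)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, sampling a trajectory from `policy_f` starting at the initial state yields terminal states whose distribution is pushed toward being proportional to `reward`; this is the amortized sampling behavior that the abstract refers to.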