Generative Flow Networks (GFlowNets) offer a powerful framework for sampling graphs in proportion to their rewards. However, existing approaches suffer from systematic biases due to inaccuracies in state transition probability computations. These biases, rooted in the inherent symmetries of graphs, impact both atom-based and fragment-based generation schemes. To address this challenge, we introduce Symmetry-Aware GFlowNets (SA-GFN), a method that incorporates symmetry corrections into the learning process through reward scaling. By integrating bias correction directly into the reward structure, SA-GFN eliminates the need for explicit state transition computations. Empirical results show that SA-GFN enables unbiased sampling while enhancing diversity and consistently generating high-reward graphs that closely match the target distribution.
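To make the symmetry issue concrete, the sketch below illustrates one plausible form of reward scaling: dividing a graph's raw reward by the order of its automorphism group, so that graphs reachable through many symmetric construction orders are not over-sampled. This is only an assumption-laden illustration, not the SA-GFN implementation; the exact correction factor used by SA-GFN is not specified in this abstract, and the helper names (`automorphism_count`, `symmetry_scaled_reward`) are hypothetical.

```python
# Illustrative sketch only: one way a symmetry correction could enter the
# reward, by dividing it by |Aut(G)|. Not the authors' implementation.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher


def automorphism_count(graph: nx.Graph) -> int:
    """Count the automorphisms of `graph` (its isomorphisms onto itself)."""
    matcher = GraphMatcher(graph, graph)
    return sum(1 for _ in matcher.isomorphisms_iter())


def symmetry_scaled_reward(graph: nx.Graph, raw_reward: float) -> float:
    """Scale the raw reward by 1/|Aut(G)| (assumed form of the correction)."""
    return raw_reward / automorphism_count(graph)


if __name__ == "__main__":
    # A 4-cycle has 8 automorphisms (dihedral group of the square),
    # while a 4-node path has only 2 (identity and reflection).
    cycle = nx.cycle_graph(4)
    path = nx.path_graph(4)
    print(symmetry_scaled_reward(cycle, 1.0))  # 1.0 / 8 = 0.125
    print(symmetry_scaled_reward(path, 1.0))   # 1.0 / 2 = 0.5
```

Because the correction is folded into the reward itself, a GFlowNet trained on the scaled reward needs no per-transition symmetry bookkeeping, which is the point the abstract makes about avoiding explicit state transition computations.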