We consider the bandit optimization problem with a reward function defined over graph-structured data. This problem has important applications in molecule design and drug discovery, where the reward is naturally invariant to graph permutations. The key challenges in this setting are scaling to large domains and to graphs with many nodes. We resolve these challenges by embedding permutation invariance into our model. In particular, we show that graph neural networks (GNNs) can be used to estimate the reward function, assuming it resides in the Reproducing Kernel Hilbert Space of a permutation-invariant additive kernel. By establishing a novel connection between such kernels and the graph neural tangent kernel (GNTK), we introduce the first GNN confidence bound and use it to design a phased-elimination algorithm with sublinear regret. Our regret bound depends on the GNTK's maximum information gain, for which we also provide a bound. While the reward function depends on all $N$ node features, our guarantees are independent of the number of graph nodes $N$. Empirically, our approach exhibits competitive performance and scales well on graph-structured domains.