In learning-assisted theorem proving, one of the most critical challenges is to generalize to theorems unlike those seen at training time. In this paper, we introduce INT, an INequality Theorem proving benchmark, specifically designed to test agents' generalization ability. INT is based on a procedure for generating theorems and proofs; this procedure's knobs allow us to measure 6 different types of generalization, each reflecting a distinct challenge characteristic of automated theorem proving. In addition, unlike prior benchmarks for learning-assisted theorem proving, INT provides a lightweight and user-friendly theorem proving environment with fast simulations, conducive to performing learning-based and search-based research. We introduce learning-based baselines and evaluate them across the 6 dimensions of generalization with the benchmark. We then evaluate the same agents augmented with Monte Carlo Tree Search (MCTS) at test time, and show that MCTS can help to prove new theorems.
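To make the test-time search concrete, below is a minimal UCT-style MCTS sketch over a toy stand-in environment. ToyProofEnv, its legal_actions/step interface, and the random rollout are hypothetical illustrations, not the INT environment's actual API; in the setting described above, a trained agent's policy and value estimates would replace the random playout.

```python
import math
import random

class ToyProofEnv:
    """Hypothetical stand-in: a 'proof state' is an integer; reducing it to 0 is QED."""
    def legal_actions(self, state):
        return [a for a in (1, 2, 3) if a <= state]

    def step(self, state, action):
        nxt = state - action
        return nxt, (1.0 if nxt == 0 else 0.0), nxt == 0

class Node:
    def __init__(self, state, parent=None, reward=0.0, done=False):
        self.state, self.parent = state, parent
        self.reward, self.done = reward, done
        self.children = {}  # action -> Node
        self.visits, self.value_sum = 0, 0.0

def uct(parent, child, c=1.4):
    # Unvisited children are explored first.
    if child.visits == 0:
        return float("inf")
    return (child.value_sum / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def rollout(env, state, max_depth=50):
    """Random playout; a learned policy/value network would replace this."""
    for _ in range(max_depth):
        actions = env.legal_actions(state)
        if not actions:
            return 0.0
        state, reward, done = env.step(state, random.choice(actions))
        if done:
            return reward
    return 0.0

def mcts(env, root_state, simulations=200):
    root = Node(root_state)
    for _ in range(simulations):
        node = root
        # 1. Selection: descend through fully expanded nodes by UCT score.
        while node.children and all(a in node.children
                                    for a in env.legal_actions(node.state)):
            parent = node
            node = max(node.children.values(), key=lambda ch: uct(parent, ch))
        # 2. Expansion and 3. Simulation.
        if node.done:
            value = node.reward
        else:
            untried = [a for a in env.legal_actions(node.state)
                       if a not in node.children]
            a = random.choice(untried)
            nxt, reward, done = env.step(node.state, a)
            child = Node(nxt, parent=node, reward=reward, done=done)
            node.children[a] = child
            node = child
            value = reward if done else rollout(env, nxt)
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.value_sum += value
            node = node.parent
    # Act greedily with respect to visit counts at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

if __name__ == "__main__":
    print("chosen first action:", mcts(ToyProofEnv(), root_state=10))
```

The sketch follows the standard selection/expansion/simulation/backpropagation cycle; committing to the most-visited root action (rather than the highest-valued one) is a common, variance-reducing choice.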