Premise selection is crucial for large-theory reasoning, as the sheer size of the problems quickly leads to resource starvation. This paper proposes a premise selection approach inspired by the domain of image captioning, where language models automatically generate a suitable caption for a given image. Likewise, we attempt to generate the sequence of axioms required to construct the proof of a given problem. This is achieved by combining a pre-trained graph neural network with a language model. We evaluate different configurations of our method and observe a 17.7% improvement over the baseline.
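The captioning analogy above can be made concrete with a minimal sketch: an encoder maps the conjecture (viewed as a graph) to a fixed-size embedding, and a decoder emits premises one at a time until an end token, exactly as a captioning model emits words. Everything below is a toy stand-in with illustrative names and scoring — the paper's actual components are a pre-trained graph neural network and a neural language model, not these stubs.

```python
from typing import List

def encode_graph(node_labels: List[str], dim: int = 4) -> List[float]:
    """Stub 'GNN' encoder: deterministic bag-of-characters embedding.

    A real encoder would run message passing over the formula graph;
    here we only need a fixed-size vector to decode against.
    """
    emb = [0.0] * dim
    for label in node_labels:
        for ch in label:
            emb[ord(ch) % dim] += 1.0
    return emb

def dot(a: List[float], b: List[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def decode_axiom_sequence(emb: List[float],
                          axiom_vocab: List[str],
                          end_token: str = "<end>",
                          max_len: int = 4) -> List[str]:
    """Stub 'language model' decoder: greedily append the axiom whose
    embedding best matches the conjecture embedding, stopping at the
    end token — mirroring greedy caption generation."""
    chosen: List[str] = []
    remaining = list(axiom_vocab)
    for _ in range(max_len):
        best = max(remaining, key=lambda ax: dot(encode_graph([ax]), emb))
        if best == end_token:
            break
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Hypothetical usage: predict premises for a toy conjecture graph.
conjecture_emb = encode_graph(["plus", "zero", "succ"])
premises = decode_axiom_sequence(
    conjecture_emb, ["plus_comm", "zero_id", "succ_inj", "<end>"])
```

A real system would replace the greedy character-overlap scoring with learned attention over axiom tokens, but the control flow — encode once, decode a variable-length premise sequence — is the same.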