Large-scale pre-trained models such as BERT have achieved great success in various Natural Language Processing (NLP) tasks, yet adapting them to math-related tasks remains a challenge. Current pre-trained models neglect the structural features of formulas and the semantic correspondence between a formula and its context. To address these issues, we propose a novel pre-trained model, namely \textbf{MathBERT}, which is jointly trained on mathematical formulas and their corresponding contexts. In addition, to further capture the semantic-level structural features of formulas, we design a new pre-training task that predicts masked formula substructures extracted from the Operator Tree (OPT), a semantic structural representation of formulas. We conduct experiments on three downstream tasks to evaluate the performance of MathBERT: mathematical information retrieval, formula topic classification, and formula headline generation. Experimental results demonstrate that MathBERT significantly outperforms existing methods on all three tasks. Moreover, we qualitatively show that the pre-trained model effectively captures the semantic-level structural information of formulas. To the best of our knowledge, MathBERT is the first pre-trained model for mathematical formula understanding.