Mathematical language, central to scientific communication and education, remains relatively understudied compared to natural language. Recent work on mathematical language focuses either on representing stand-alone mathematical expressions, especially in their natural tree format, or on mathematical reasoning in pre-trained natural language models. Existing work on jointly modeling and generating natural and mathematical language simply treats mathematical expressions as text, without accounting for their rigid structural properties. In this paper, we propose a series of modifications to existing language models to jointly represent and generate text and math: representing mathematical expressions as sequences of node tokens in their operator tree format, using math symbol and tree position embeddings to preserve the semantic and structural properties of mathematical expressions, and using a constrained decoding method to generate mathematically valid expressions. We ground our modifications in GPT-2, resulting in a model, MathGPT, which we show outperforms baselines on mathematical expression generation tasks.
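To make the operator-tree representation concrete, the following is a minimal sketch of how a mathematical expression can be linearized into a sequence of node tokens via a depth-first (pre-order) traversal of its operator tree. This is an illustration only, not the tokenizer used in MathGPT; it leans on Python's built-in `ast` parser for a small set of arithmetic operators, and the `to_node_tokens` helper name is ours.

```python
import ast

def to_node_tokens(expr: str) -> list[str]:
    """Parse an arithmetic expression and emit its operator-tree
    nodes as a token sequence via pre-order (depth-first) traversal.

    Illustrative sketch only -- not the MathGPT tokenizer.
    """
    op_names = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*",
                ast.Div: "/", ast.Pow: "^"}

    def walk(node):
        if isinstance(node, ast.BinOp):
            # Emit the operator first, then its subtrees (pre-order).
            yield op_names[type(node.op)]
            yield from walk(node.left)
            yield from walk(node.right)
        elif isinstance(node, ast.Constant):
            yield str(node.value)
        elif isinstance(node, ast.Name):
            yield node.id
        else:
            raise ValueError(f"unsupported node: {node!r}")

    return list(walk(ast.parse(expr, mode="eval").body))

# a*x^2 + b*x + c parses left-associatively: (a*x^2 + b*x) + c
print(to_node_tokens("a*x**2 + b*x + c"))
# → ['+', '+', '*', 'a', '^', 'x', '2', '*', 'b', 'x', 'c']
```

Because each operator has a fixed arity, a pre-order token sequence like this uniquely determines the tree, which is what allows a constrained decoder to reject prefixes that cannot be completed into a valid expression.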