The widespread use of spreadsheet environments by billions of users presents a unique opportunity for formula-authoring assistance. Although large language models such as Codex can assist with general-purpose programming languages, they are expensive to train and challenging to deploy because of their size (up to billions of parameters), and they require hundreds of gigabytes of training data. We present FLAME, a T5-based model trained on Excel formulas that leverages domain insights to achieve competitive performance with substantially fewer parameters (60M) and two orders of magnitude less training data. We curate a training dataset using sketch deduplication, introduce an Excel-specific formula tokenizer for our model, and use domain-specific versions of masked span prediction and noisy auto-encoding as pretraining objectives. We evaluate FLAME on formula repair, formula auto-completion, and a novel task called syntax reconstruction. FLAME (60M) outperforms much larger models, such as Codex-Davinci (175B), Codex-Cushman (12B), and CodeT5 (220M), in 6 out of 10 settings.
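To make the sketch-deduplication idea mentioned above concrete, the following is a minimal illustrative sketch in Python. It assumes a simple regex-based normalization that masks string literals, numeric constants, and cell references with placeholder tokens, then keeps one formula per distinct sketch. The placeholder names and regexes are hypothetical and only approximate the kind of normalization described; they are not the exact procedure used to curate FLAME's training data.

```python
import re

def formula_sketch(formula: str) -> str:
    """Reduce an Excel formula to a sketch by masking literals and cell references.

    Illustrative approximation only; FLAME's actual normalization may differ.
    """
    s = re.sub(r'"[^"]*"', "<STR>", formula)            # string literals
    s = re.sub(r'\$?[A-Z]{1,3}\$?\d+(:\$?[A-Z]{1,3}\$?\d+)?', "<REF>", s)  # cells and ranges
    s = re.sub(r'\b\d+(\.\d+)?\b', "<NUM>", s)           # numeric literals
    return s

def dedupe_by_sketch(formulas):
    """Keep one representative formula per distinct sketch."""
    seen, kept = set(), []
    for f in formulas:
        key = formula_sketch(f)
        if key not in seen:
            seen.add(key)
            kept.append(f)
    return kept

if __name__ == "__main__":
    corpus = ['=SUM(A1:A10)', '=SUM(B2:B20)', '=IF(C1>5,"hi","lo")']
    # The two SUM formulas share the sketch '=SUM(<REF>)', so only one is kept.
    print(dedupe_by_sketch(corpus))
```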