Software language models have achieved promising results predicting code completion usages, and several industry studies have described successful IDE integrations. Recently, accuracy in autocompletion prediction improved 12.8% from training on a real-world dataset collected from programmers' IDE activity. But what if limited examples of IDE autocompletion in the target programming language are available for model training? In this paper, we investigate the efficacy of pretraining autocompletion models on non-IDE, non-autocompletion, and different-language example code sequences. We find that these unsupervised pretrainings improve model accuracy by over 50% on very small fine-tuning datasets and over 10% on 50k labeled examples. We confirm the real-world impact of these pretrainings in an online setting through A/B testing on thousands of IDE autocompletion users, finding that pretraining is responsible for increases of up to 6.63% autocompletion usage.
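To make the transfer-learning recipe evaluated here concrete, the sketch below shows a two-stage training loop: unsupervised causal-LM pretraining on plentiful code sequences, followed by fine-tuning the same weights on a small set of IDE autocompletion examples. It is a minimal illustration assuming a toy PyTorch language model; the architecture, data, and hyperparameters are placeholders, not the paper's actual implementation.

```python
# Minimal sketch of pretraining on unlabeled code sequences, then
# fine-tuning on scarce IDE autocompletion examples. All model sizes,
# corpora, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCodeLM(nn.Module):
    """Next-token predictor over a shared code-token vocabulary."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)               # logits: (batch, seq_len, vocab)

def lm_step(model, optimizer, batch):
    """One causal-LM step: predict token t+1 from tokens up to t."""
    logits = model(batch[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           batch[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

VOCAB = 5000
model = TinyCodeLM(VOCAB)

# Stage 1: unsupervised pretraining on non-IDE / non-autocompletion /
# different-language code sequences (random placeholder batches here).
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    unlabeled_batch = torch.randint(0, VOCAB, (32, 64))
    lm_step(model, pretrain_opt, unlabeled_batch)

# Stage 2: fine-tuning on the small labeled set of real IDE
# autocompletion examples in the target language, reusing the weights.
finetune_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(10):
    ide_batch = torch.randint(0, VOCAB, (32, 64))
    lm_step(model, finetune_opt, ide_batch)
```

The key point the sketch illustrates is that both stages share one model and one token vocabulary, so representations learned from abundant unlabeled code carry over to the data-scarce autocompletion task.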