Code generation is a longstanding challenge that aims to generate a code snippet from a natural language description. Usually, expensive text-code paired data is essential for training a code generation model. Recently, thanks to the success of pre-training techniques, large language models have been trained on large-scale unlabelled code corpora and perform well in code generation. In this paper, we investigate how to leverage an unlabelled code corpus to train a model for library-oriented code generation. Reusing third-party libraries is common practice for programmers, yet in this setting text-code paired data are harder to obtain due to the huge number of libraries. We observe that library-oriented code snippets are more likely to share similar code sketches. Hence, we present CERT, which works in two steps: a sketcher generates a sketch, then a generator fills in the details of the sketch. Both the sketcher and the generator are continually pre-trained upon a base model using unlabelled data. Furthermore, we craft two benchmarks, named PandasEval and NumpyEval, to evaluate library-oriented code generation. Experimental results demonstrate the impressive performance of CERT. For example, it surpasses the base model by an absolute 15.67% improvement in terms of pass@1 on PandasEval. Our work is available at https://github.com/microsoft/PyCodeGPT.
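To make the notion of a "code sketch" concrete: a sketch is, roughly, a code snippet with user-specific details (such as literals) anonymized, so that snippets using the same library API collapse onto a shared form. The following is only an illustrative sketch under that assumption, using a naive regex-based anonymizer with a hypothetical `PAD` placeholder; it is not the paper's actual sketching procedure.

```python
import re

PLACEHOLDER = "PAD"

def to_sketch(code: str) -> str:
    """Anonymize user-defined details in a code snippet.

    Replaces string literals and numeric literals with a placeholder
    token, so that library-oriented snippets that differ only in such
    details map to the same sketch.
    """
    # Replace single- and double-quoted string literals first.
    code = re.sub(r"(\"[^\"]*\"|'[^']*')", PLACEHOLDER, code)
    # Then replace standalone integer/float literals.
    code = re.sub(r"\b\d+(\.\d+)?\b", PLACEHOLDER, code)
    return code

snippet = 'df = df.fillna(0).rename(columns={"a": "b"})'
print(to_sketch(snippet))
# → df = df.fillna(PAD).rename(columns={PAD: PAD})
```

Two pandas snippets that fill different values or rename different columns share this sketch, which is why a sketcher can be trained effectively on unlabelled library code.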