Publicly available source-code libraries are continuously growing and changing. This makes it impossible for models of code to keep current with all available APIs by simply training these models on existing code repositories. Thus, existing models inherently cannot generalize to using unseen functions and libraries, because these never appear in their training data. In contrast, when human programmers use functions and libraries for the first time, they frequently refer to textual resources such as code manuals and documentation to explore and understand the available functionality. Inspired by this observation, we introduce DocPrompting: a natural-language-to-code generation approach that explicitly leverages documentation by (1) retrieving the relevant documentation pieces given an NL intent, and (2) generating code based on the NL intent and the retrieved documentation. DocPrompting is general: it can be applied to any programming language and is agnostic to the underlying neural model. We demonstrate that DocPrompting consistently improves NL-to-code models: it improves strong base models such as CodeT5 by 2.85% in pass@1 (52% relative gain) and 4.39% in pass@10 (30% relative gain) in execution-based evaluation on the popular Python CoNaLa benchmark; on tldr, a new Bash dataset, DocPrompting improves CodeT5 and GPT-Neo 1.3B by up to 6.9% absolute exact match.
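The two-step pipeline can be sketched as follows. This is a minimal illustration, not the paper's actual retriever or generator: the documentation pool, the bag-of-words scoring function, and the prompt format are all hypothetical stand-ins for the trained components.

```python
from collections import Counter

# Hypothetical pool of documentation snippets (stand-in for real manuals).
DOC_POOL = [
    "tar -x: extract files from an archive",
    "tar -z: filter the archive through gzip",
    "grep -r: read all files under each directory, recursively",
]

def score(query: str, doc: str) -> float:
    """Toy bag-of-words overlap score standing in for a trained retriever."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(intent: str, k: int = 2) -> list[str]:
    """Step (1): retrieve the k documentation pieces most relevant to the NL intent."""
    return sorted(DOC_POOL, key=lambda doc: score(intent, doc), reverse=True)[:k]

def build_prompt(intent: str) -> str:
    """Step (2): condition code generation on the intent plus the retrieved docs.

    In the real system this prompt would be fed to a neural generator
    such as CodeT5; here we only assemble the conditioning input.
    """
    docs = "\n".join(retrieve(intent))
    return f"{docs}\n# intent: {intent}\n# code:"

prompt = build_prompt("extract a gzip archive")
```

Because the approach only changes what the generator is conditioned on, it is agnostic to the underlying model, as the abstract notes.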