Large Language Models (LLMs) are a new class of computation engines, "programmed" via prompt engineering. We are still learning how best to "program" these LLMs to help developers. We start with the intuition that developers tend to consciously and unconsciously keep a collection of semantic facts in mind when working on coding tasks. Mostly, these are shallow, simple facts arising from a quick read of the code. For a function, examples of such facts include parameter and local variable names, return expressions, simple pre- and post-conditions, and basic control and data flow. One might assume that the powerful multi-layer architecture of transformer-style LLMs makes them inherently capable of doing this simple level of "code analysis" and extracting such information implicitly while processing code: but are they, really? If they aren't, could explicitly adding this information help? Our goal here is to investigate this question using the code summarization task: we evaluate whether automatically and explicitly augmenting an LLM's prompt with such semantic facts actually helps. Prior work shows that LLM performance on code summarization benefits from few-shot samples drawn either from the same project or from examples found via information retrieval methods (such as BM25). While summarization performance has steadily increased since the early days, there is still room for improvement: LLM performance on code summarization still lags its performance on natural-language tasks like translation and text summarization. We find that adding semantic facts actually does help! This approach improves performance in several different settings suggested by prior work, including with two different Large Language Models. In most cases, the improvement nears or exceeds 2 BLEU; for the PHP language in the challenging CodeSearchNet dataset, this augmentation actually yields performance surpassing 30 BLEU.
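To make the idea concrete, the kind of shallow semantic facts described above can be extracted with a standard parser and prepended to the prompt before the code to be summarized. The sketch below is a minimal illustration only, not the paper's pipeline: it uses Python's `ast` module to pull out parameter names, locally assigned variables, and return expressions from a function, and formats them as a prompt preamble (the function names `extract_facts` and `facts_preamble` are hypothetical).

```python
import ast
import textwrap

def extract_facts(source: str) -> dict:
    """Extract shallow semantic facts from a single Python function."""
    func = ast.parse(textwrap.dedent(source)).body[0]
    assert isinstance(func, ast.FunctionDef)
    params = [a.arg for a in func.args.args]
    # Local variables: names that appear as assignment targets.
    local_vars = sorted({n.id for n in ast.walk(func)
                         if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)})
    # Return expressions, rendered back to source text (requires Python 3.9+).
    returns = [ast.unparse(n.value) for n in ast.walk(func)
               if isinstance(n, ast.Return) and n.value is not None]
    return {"name": func.name, "params": params,
            "locals": local_vars, "returns": returns}

def facts_preamble(facts: dict) -> str:
    """Render the facts as a prompt preamble preceding the code to summarize."""
    return (f"Function name: {facts['name']}\n"
            f"Parameters: {', '.join(facts['params']) or '(none)'}\n"
            f"Local variables: {', '.join(facts['locals']) or '(none)'}\n"
            f"Return expressions: {'; '.join(facts['returns']) or '(none)'}")

src = """
def clamp(x, lo, hi):
    result = max(lo, min(x, hi))
    return result
"""
print(facts_preamble(extract_facts(src)))
```

In an augmented prompt, this preamble would sit between the task instruction (and any few-shot examples) and the target function, so the model receives the facts explicitly rather than having to infer them.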