High-quality bugs are key to training the next generation of language-model-based software engineering (SWE) agents. We introduce a novel method for the synthetic generation of difficult and diverse bugs. Our method instructs SWE agents to introduce a feature into a codebase, in the course of which they may unintentionally break tests, resulting in bugs. Prior approaches often induce an out-of-distribution effect by generating bugs intentionally (e.g., by introducing local perturbations to existing code), which does not reflect realistic development processes. We perform a qualitative analysis to demonstrate that our approach to generating bugs more closely reflects the patterns found in human-authored edits. Through extensive experiments, we demonstrate that our bugs provide more efficient training data for supervised fine-tuning, outperforming other bug datasets by 2% with half the training data (1.2k vs. 3k bugs). Training on our newly generated bugs in addition to existing bug datasets yields FrogBoss, a state-of-the-art 32B-parameter model, and FrogMini, a state-of-the-art 14B-parameter model, which achieve pass@1 scores of 54.6% and 45.3% respectively on SWE-bench Verified, averaged over three seeds.