Deep Learning (DL) library bugs affect downstream DL applications, underscoring the need for reliable systems. Generating valid input programs for fuzzing DL libraries is challenging because generated programs must satisfy both the language syntax/semantics and the constraints for constructing valid computational graphs. Recently, TitanFuzz demonstrated that modern Large Language Models (LLMs) can be directly leveraged to implicitly learn all these constraints and generate valid DL programs for fuzzing. However, LLMs tend to generate ordinary programs following patterns similar to those seen in their massive training corpora, while fuzzing favors unusual inputs that cover edge cases or are unlikely to be manually produced. To fill this gap, this paper proposes FuzzGPT, the first technique to prime LLMs to synthesize unusual programs for fuzzing. FuzzGPT is built on the well-known hypothesis that historical bug-triggering programs may include rare/valuable code ingredients important for bug finding. Traditional techniques leveraging such historical information require intensive human effort to design dedicated generators and ensure the validity of generated programs. FuzzGPT demonstrates that this process can be fully automated via the intrinsic capabilities of LLMs (including fine-tuning and in-context learning), while being generalizable and applicable to challenging domains. While FuzzGPT can be applied with different LLMs, this paper focuses on the powerful GPT-style models Codex and CodeGen. Moreover, FuzzGPT also shows the potential of directly leveraging the instruction-following capability of the recent ChatGPT for effective fuzzing. Evaluation on two popular DL libraries (PyTorch and TensorFlow) shows that FuzzGPT can substantially outperform TitanFuzz, detecting 76 bugs, with 49 already confirmed as previously unknown bugs, including 11 high-priority bugs or security vulnerabilities.
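To make the in-context-learning idea above concrete, the following is a minimal sketch, not the authors' implementation, of how historical bug-triggering snippets could be prepended as few-shot examples to prime an open LLM (CodeGen, one of the models the paper mentions) toward generating unusual programs for a target PyTorch API. The example snippet, the target API, and the prompt format are illustrative assumptions.

```python
# Minimal sketch of few-shot priming with historical bug-triggering programs.
# The historical snippet and target API below are hypothetical placeholders;
# a real corpus would be mined from the library's issue tracker.
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL = "Salesforce/codegen-350M-mono"  # small CodeGen checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

historical_snippets = [
    "# Bug: crash in torch.nn.functional.pad with negative padding\n"
    "import torch\n"
    "x = torch.randn(1, 1, 2, 2)\n"
    "y = torch.nn.functional.pad(x, (-1, -1, -1, -1))\n",
]

target_api = "torch.index_select"  # hypothetical fuzzing target


def build_prompt(snippets, api):
    """Concatenate few-shot bug examples, then ask for a new edge-case program."""
    shots = "\n".join(snippets)
    return (f"{shots}\n"
            f"# Bug: unusual edge-case program using {api}\n"
            f"import torch\n")


prompt = build_prompt(historical_snippets, target_api)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True,
                         temperature=0.8, pad_token_id=tokenizer.eos_token_id)
candidate = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(candidate)  # candidate program, to be executed under the fuzzer's test oracles
```

In the actual technique, each generated candidate would then be run against differential or crash oracles on the target library; the sketch only covers the prompt-construction and generation step.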