Data-serialization libraries are essential tools in software development, responsible for converting between in-memory data structures and data persistence formats. Among them, JSON is the most popular choice for exchanging data between different systems and programming languages, and JSON libraries serve as the programming toolkit for this task. Despite their widespread use, bugs in JSON libraries can cause severe issues such as data inconsistencies and security vulnerabilities. Unit test generation techniques are widely adopted to identify bugs in various libraries. However, there has been little systematic testing effort aimed specifically at exposing bugs in JSON libraries in industrial practice. In this paper, we propose JSONTestGen, an approach that leverages large language models (LLMs) to generate unit tests for fastjson2, a popular open-source JSON library from Alibaba. Pre-trained on vast corpora of open-source text and code, LLMs have demonstrated remarkable abilities in programming tasks. Starting from historical bug-triggering unit tests, we use LLMs to generate more diverse test cases by incorporating JSON domain-specific mutation rules. To systematically and efficiently identify potential bugs, we apply differential testing to the results of the generated unit tests. Our evaluation shows that JSONTestGen outperforms existing test generation tools in detecting previously unknown defects. With JSONTestGen, we found 34 real bugs in fastjson2, 30 of which have already been fixed, including 12 non-crashing bugs. While manual inspection reveals that LLM-generated tests can be erroneous, particularly when they contain self-contradictory assertions, we demonstrate that LLMs have the potential to classify false-positive test failures. This suggests a promising direction for improved test oracle automation in the future.
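To make the differential-testing idea mentioned above concrete, the sketch below runs the same serialization round-trip through fastjson2 and Jackson and flags divergent results as candidate bugs. This is a minimal illustration only: the actual baselines, generated test inputs, and comparison logic used by JSONTestGen are not specified here, and the `DifferentialJsonCheck` class, the chosen sample input, and the equality-based comparison are assumptions made for the example.

```java
// Illustrative sketch of differential testing between two JSON libraries.
// Assumes fastjson2 and Jackson Databind are on the classpath; the input and
// comparison strategy are hypothetical, not the paper's actual harness.
import com.alibaba.fastjson2.JSON;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.LinkedHashMap;
import java.util.Map;

public class DifferentialJsonCheck {

    public static void main(String[] args) throws Exception {
        // A small, deliberately edge-case-flavored input: escaped characters
        // and an integer beyond the exact range of double.
        Map<String, Object> input = new LinkedHashMap<>();
        input.put("name", "line1\nline2\t\"quoted\"");
        input.put("big", 9007199254740993L);

        ObjectMapper jackson = new ObjectMapper();

        // Serialize the same value with both libraries.
        String fastjson2Output = JSON.toJSONString(input);
        String jacksonOutput = jackson.writeValueAsString(input);

        // Cross-parse each library's output with the other library.
        Map<?, ?> parsedByFastjson2 = JSON.parseObject(jacksonOutput, Map.class);
        Map<?, ?> parsedByJackson = jackson.readValue(fastjson2Output, Map.class);

        if (!parsedByFastjson2.equals(parsedByJackson)) {
            // A divergence is only a candidate bug and still needs inspection,
            // since the libraries may differ legitimately (e.g., numeric widening).
            System.out.println("Potential inconsistency detected:");
            System.out.println("  fastjson2 parsed Jackson's output as: " + parsedByFastjson2);
            System.out.println("  Jackson parsed fastjson2's output as: " + parsedByJackson);
        } else {
            System.out.println("Both libraries agree on this input.");
        }
    }
}
```

In practice, a harness along these lines would be driven by the LLM-generated unit tests rather than a fixed input, and disagreements would be triaged (manually or, as the abstract suggests, with LLM assistance) to separate real defects from false-positive test failures.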