Deep Learning (DL) libraries, such as PyTorch, are widely used for building and deploying DL models on various hardware platforms. However, they have been found to contain bugs that lead to incorrect calculation results, causing issues such as non-converging training and inaccurate predictions. Many efforts have therefore been made to test DL libraries and reveal these bugs. Existing DL library testing methods nevertheless have limitations: model-level testing complicates fault localization, while API-level testing often generates invalid inputs, focuses primarily on extreme inputs that trigger crash failures, and ignores realistic API interactions. These limitations can cause bugs to be missed, even in frequently used APIs. To address them, we propose SORT (Subgraph-Oriented Realistic Testing) to differentially test DL libraries across hardware platforms. SORT takes popular API interaction patterns, represented as frequent subgraphs of model computation graphs, as test subjects. In this way, it exercises realistic API interaction sequences while remaining efficient at localizing the faulty API behind an observed error. In addition, SORT prepares test inputs by referring to extensive features of the runtime inputs each API receives when executing real-life benchmark data, so the generated inputs better resemble valid real-world inputs and are more likely to reveal bugs that occur in practice. Evaluation on 728 frequent subgraphs extracted from 49 popular PyTorch models shows that SORT achieves a 100\% valid input generation rate, detects more precision bugs than existing methods, and reveals interaction-related bugs missed by single-API testing. In total, 18 precision bugs in PyTorch are identified.
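To make the core idea concrete, the sketch below illustrates (in a simplified form, not SORT's actual implementation) how a small API-interaction subgraph can be differentially tested across hardware backends. The Conv2d-BatchNorm2d-ReLU pattern, the input shape, and the tolerance are illustrative assumptions chosen here; SORT instead mines such patterns from real model computation graphs and derives inputs from observed runtime features.

```python
import torch

# Hedged illustration: differentially test a small API-interaction subgraph
# (Conv2d -> BatchNorm2d -> ReLU, a pattern common in vision models) on CPU
# vs. CUDA using a realistic, ImageNet-style input shape.

def build_subgraph():
    return torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
        torch.nn.BatchNorm2d(16),
        torch.nn.ReLU(),
    )

def run_subgraph(device, x, weight_state):
    sub = build_subgraph().to(device).eval()
    sub.load_state_dict(weight_state)  # identical weights on both backends
    with torch.no_grad():
        return sub(x.to(device)).cpu()

if torch.cuda.is_available():
    x = torch.randn(1, 3, 224, 224)    # illustrative "realistic" input
    state = build_subgraph().state_dict()
    out_cpu = run_subgraph("cpu", x, state)
    out_gpu = run_subgraph("cuda", x, state)
    # A large CPU/CUDA discrepancy flags a potential precision bug in one
    # of the interacting APIs of the subgraph.
    max_diff = (out_cpu - out_gpu).abs().max().item()
    print(f"max CPU/CUDA discrepancy: {max_diff:.3e}")
    assert torch.allclose(out_cpu, out_gpu, atol=1e-4), "possible precision bug"
```

Because the test subject is a small subgraph rather than a whole model, a failing comparison points to only a handful of interacting APIs, which is what keeps fault localization tractable.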