Distribution shift has been a longstanding challenge for the reliable deployment of deep learning (DL) models due to unexpected accuracy degradation. Although DL has become a driving force behind large-scale source code analysis in the big code era, limited progress has been made on analyzing and benchmarking distribution shift for source code tasks. To fill this gap, this paper proposes CodeS, a distribution shift benchmark dataset for source code learning. Specifically, CodeS supports two programming languages (Java and Python) and five shift types (task, programmer, time-stamp, token, and concrete syntax tree). Extensive experiments on CodeS reveal that 1) out-of-distribution detectors from other domains (e.g., computer vision) do not generalize to source code, 2) all evaluated code classification models suffer from distribution shifts, 3) representation-based shifts have a greater impact on the models than the other shift types, and 4) pre-trained bimodal models are relatively more resistant to distribution shifts.
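To make the notion of an out-of-distribution (OOD) detector concrete, the sketch below shows a maximum-softmax-probability (MSP) baseline of the kind commonly used in computer vision, applied to the logits of a hypothetical code classification model. This is an illustrative assumption, not the paper's method or the CodeS API; the model, threshold, and input shapes are placeholders.

```python
# Illustrative sketch (assumption, not from the paper): an MSP-based OOD detector
# applied to the logits of a hypothetical code classifier.
import torch
import torch.nn.functional as F

def msp_scores(logits: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability per sample; lower scores suggest OOD inputs."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

def flag_ood(logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Flag samples whose confidence falls below the (assumed) threshold as OOD."""
    return msp_scores(logits) < threshold

if __name__ == "__main__":
    # Random logits stand in for a code classifier's outputs: 8 snippets, 10 classes.
    logits = torch.randn(8, 10)
    print(flag_ood(logits, threshold=0.5))
```

A detector of this kind is tuned to the confidence behavior of vision models; the paper's first finding is that such detectors do not transfer well to source code models.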