Code search, which aims at retrieving the most relevant code fragment for a given natural language query, is a common activity in software development practice. Recently, contrastive learning has been widely used in code search research, where many data augmentation approaches for source code (e.g., semantic-preserving program transformations) have been proposed to learn better representations. However, these augmentations operate at the raw-data level, which requires additional code analysis in the preprocessing stage and incurs additional training costs in the training stage. In this paper, we explore augmentation methods that augment data (both code and query) at the representation level, which requires no additional data processing or training; based on this, we propose a general format of representation-level augmentation that unifies existing methods. We then propose three new augmentation methods (linear extrapolation, binary interpolation, and Gaussian scaling) based on the general format. Furthermore, we theoretically analyze the advantages of the proposed augmentation methods over traditional contrastive learning methods on code search. We experimentally evaluate the proposed representation-level augmentation methods with state-of-the-art code search models on a large-scale public dataset covering six programming languages. The experimental results show that our approach can consistently boost the performance of the studied code search models. Our source code is available at https://github.com/Alex-HaochenLi/RACS.
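To make the idea of representation-level augmentation concrete, the sketch below shows one plausible reading of the three named operations applied to already-encoded vectors (no re-encoding of raw code or queries needed). The exact formulations follow the paper's general format; the specific formulas, hyperparameters (`alpha`, `p`, `sigma`), and function names here are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_extrapolation(h, h_other, alpha=0.2):
    # Illustrative: push h away from another representation along
    # their difference, producing a harder augmented view.
    return h + alpha * (h - h_other)

def binary_interpolation(h, h_other, p=0.5, rng=rng):
    # Illustrative: mix two representations with a random binary mask,
    # taking each dimension from h_other with probability p.
    mask = rng.random(h.shape) < p
    return np.where(mask, h_other, h)

def gaussian_scaling(h, sigma=0.1, rng=rng):
    # Illustrative: rescale each dimension by a factor drawn
    # from a Gaussian centered at 1.
    return h * rng.normal(1.0, sigma, size=h.shape)

# Augmented views of a code/query embedding can then serve as extra
# positives in a contrastive loss, at negligible extra cost.
h = np.ones(8)          # a stand-in encoded representation
h_other = np.zeros(8)   # another sample's representation
views = [linear_extrapolation(h, h_other),
         binary_interpolation(h, h_other),
         gaussian_scaling(h)]
```

Because these operations act on fixed-size vectors rather than source text, they avoid both the code-analysis preprocessing and the extra encoder passes that raw-data augmentations require.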