Representational Similarity Analysis (RSA) is a method from cognitive neuroscience for comparing representations drawn from two different sources of data. In this paper, we propose using RSA to probe semantic grounding in language models of code. We probe representations from the CodeBERT model for semantic grounding using data from the IBM CodeNet dataset. Our experiments show that current pre-training methods do not induce semantic grounding in language models of code and instead focus on optimizing form-based patterns. We also show that even a small amount of fine-tuning on semantically relevant tasks significantly increases the semantic grounding in CodeBERT. Our ablations over the input modality to CodeBERT show that bimodal inputs (code and natural language) yield better semantic grounding and sample efficiency during semantic fine-tuning than unimodal inputs (code only). Finally, our experiments with semantic perturbations in code reveal that CodeBERT is able to robustly distinguish between semantically correct and incorrect code.
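For readers unfamiliar with RSA, the following is a minimal sketch of a generic RSA computation between two representation spaces; it is not the paper's implementation, and names such as `rsa_similarity`, `model_reps`, and `reference_reps` are illustrative assumptions. The core idea is to build a representational dissimilarity matrix (RDM) over the same set of items in each space and then correlate the two RDMs.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr


def rsa_similarity(reps_a, reps_b, metric="correlation"):
    """Compare two representation spaces over the same n items.

    reps_a: (n_items, dim_a) array from source A (e.g., model embeddings).
    reps_b: (n_items, dim_b) array from source B (e.g., a reference space).
    Returns the Spearman correlation between the upper triangles of the
    two representational dissimilarity matrices (RDMs).
    """
    rdm_a = squareform(pdist(reps_a, metric=metric))
    rdm_b = squareform(pdist(reps_b, metric=metric))
    iu = np.triu_indices_from(rdm_a, k=1)  # off-diagonal upper triangle
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho


if __name__ == "__main__":
    # Hypothetical example with random data standing in for real embeddings.
    rng = np.random.default_rng(0)
    model_reps = rng.normal(size=(50, 768))      # e.g., CodeBERT embeddings
    reference_reps = rng.normal(size=(50, 32))   # e.g., a semantic feature space
    print(rsa_similarity(model_reps, reference_reps))
```

Because RSA compares only the geometry of pairwise dissimilarities, the two sources may have different dimensionalities, which is what makes it suitable for relating model representations to independently derived semantic spaces.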