Although deep neural models substantially reduce the overhead of feature engineering, the features readily available in the inputs can significantly impact both the training cost and the performance of such models. In this paper, we explore the impact of an unsupervised feature enrichment approach based on variable roles on the performance of neural models of code. The notion of variable roles, as introduced by Sajaniemi et al. [Refs. 1, 2], has been found to improve students' programming abilities; we investigate whether it can also improve the performance of neural models of code. To the best of our knowledge, this is the first work to examine how Sajaniemi et al.'s concept of variable roles affects neural models of code. Specifically, we enrich a source code dataset by annotating the role of each variable in its programs, and we conduct a study of the impact of this variable role enrichment on training the Code2Seq model. In addition, we shed light on some challenges and opportunities in feature enrichment for neural code intelligence models.
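To make the enrichment idea concrete, the sketch below tags variable identifiers with role labels drawn from Sajaniemi et al.'s taxonomy (e.g., stepper, fixed value) so that a sequence model such as Code2Seq sees role-enriched tokens. The role-inference heuristics, the tag names, and the `name_role` token format are illustrative assumptions for this sketch, not the paper's actual pipeline.

```python
import ast

# Two role tags from Sajaniemi et al.'s taxonomy; the full taxonomy and the
# paper's actual inference rules are not reproduced here (assumption).
STEPPER = "stepper"
FIXED_VALUE = "fixed_value"

def infer_roles(source: str) -> dict[str, str]:
    """Heuristically assign a role to each variable name in `source`.

    Toy rules (assumptions): a name bound by a `for` loop is a stepper;
    a name assigned exactly once and never rebound is a fixed value.
    """
    tree = ast.parse(source)
    assign_counts: dict[str, int] = {}
    roles: dict[str, str] = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.For) and isinstance(node.target, ast.Name):
            roles[node.target.id] = STEPPER
        elif isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    assign_counts[target.id] = assign_counts.get(target.id, 0) + 1
    for name, count in assign_counts.items():
        if count == 1 and name not in roles:
            roles[name] = FIXED_VALUE
    return roles

def enrich(source: str) -> str:
    """Rewrite each tagged identifier as `name_role`, producing the
    role-enriched program text a code model would be trained on."""
    roles = infer_roles(source)
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in roles:
            node.id = f"{node.id}_{roles[node.id]}"
    return ast.unparse(tree)

if __name__ == "__main__":
    snippet = "limit = 10\nfor i in range(limit):\n    print(i)"
    print(enrich(snippet))
    # limit_fixed_value = 10
    # for i_stepper in range(limit_fixed_value):
    #     print(i_stepper)
```

Appending the role to the identifier keeps the enrichment unsupervised: no labels are needed beyond what static analysis of the program itself provides, so an existing dataset can be re-tokenized and fed to Code2Seq without changing the model architecture.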