Automatic code summarization benefits software development and maintenance by reducing the burden of manual documentation work. Currently, artificial intelligence is undergoing a paradigm shift: foundation models, pretrained on massive data and finetuned for downstream tasks, surpass specially customized models. This trend inspired us to reuse foundation models rather than learn from scratch. Building on this idea, we propose a flexible and robust neural approach for automatic code summarization. We assemble available foundation models, such as CodeBERT and GPT-2, into a single model named AdaMo. Moreover, we inject Gaussian noise as a simulation of contextual information to optimize the latent representation. Furthermore, we introduce two adaptive schemes from the perspective of knowledge transfer, namely continuous pretraining and intermediate finetuning, and design intermediate-stage tasks for general sequence-to-sequence learning. Finally, we evaluate AdaMo on a benchmark dataset for code summarization, comparing it with state-of-the-art models.
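To make the assembly concrete, the following is a minimal sketch, not the authors' implementation, of wiring a pretrained CodeBERT encoder and GPT-2 decoder into one sequence-to-sequence model with HuggingFace transformers, and perturbing the encoder's latent representation with Gaussian noise as a stand-in for contextual information. The checkpoint names, the noise scale `sigma`, and the training-style forward pass are assumptions for illustration.

```python
import torch
from transformers import AutoTokenizer, EncoderDecoderModel
from transformers.modeling_outputs import BaseModelOutput

# Assemble a pretrained encoder (CodeBERT) and decoder (GPT-2) into one model.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/codebert-base", "gpt2"
)
code_tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
sum_tok = AutoTokenizer.from_pretrained("gpt2")

# GPT-2 has no dedicated pad/decoder-start tokens, so reuse its BOS/EOS tokens.
model.config.decoder_start_token_id = sum_tok.bos_token_id
model.config.pad_token_id = sum_tok.eos_token_id

code = "def add(a, b):\n    return a + b"
summary = "Add two numbers."
inputs = code_tok(code, return_tensors="pt")
labels = sum_tok(summary, return_tensors="pt").input_ids

# Encode the code, then inject Gaussian noise into the latent states.
hidden = model.encoder(**inputs).last_hidden_state
sigma = 0.1  # assumed noise scale (hyperparameter)
noisy = BaseModelOutput(
    last_hidden_state=hidden + sigma * torch.randn_like(hidden)
)

# Decode the summary from the perturbed representation (training-style pass).
outputs = model(
    encoder_outputs=noisy,
    attention_mask=inputs["attention_mask"],
    labels=labels,
)
print(outputs.loss)  # sequence-to-sequence cross-entropy over summary tokens
```

In practice, the same encoder-decoder skeleton can be kept fixed while the pretraining and finetuning schedule (continuous pretraining, intermediate finetuning) is varied around it.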