Artificial Intelligence and Machine Learning have witnessed rapid and significant improvements in Natural Language Processing (NLP) tasks. Building on Deep Learning, Software Engineering researchers have leveraged comments mined from repositories to develop accurate methods for detecting Self-Admitted Technical Debt (SATD) in the code of 20 open-source Java projects. In this work, we improve SATD detection with a novel approach that leverages the Bidirectional Encoder Representations from Transformers (BERT) architecture. For comparison, we re-evaluate previous deep-learning methods and apply stratified 10-fold cross-validation to report reliable F$_1$-scores. We examine our model in both cross-project and intra-project contexts, and in each context we use re-sampling and duplication as augmentation strategies to account for class imbalance. We find that our trained BERT model improves on the best performance of all previous methods in 19 of the 20 projects in the cross-project context. However, the data augmentation techniques are not sufficient to overcome the scarcity of data in the intra-project context, where existing methods still perform better. Future research will investigate ways to diversify SATD datasets in order to exploit the latent power of large BERT models.
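To make the evaluation setup concrete, the sketch below illustrates one way to combine the ingredients named above: fine-tuning a BERT classifier on SATD comment data under stratified 10-fold cross-validation, with duplication-based oversampling of the minority class and per-fold F$_1$ reporting. This is a minimal, hedged sketch rather than the exact pipeline used in this work; the checkpoint `bert-base-uncased`, the hyperparameters, and the helper `duplicate_minority` are illustrative assumptions.

```python
# Minimal sketch (not the exact pipeline from this work): fine-tune a BERT
# classifier for SATD detection with stratified 10-fold cross-validation and
# duplication-based oversampling of the minority (SATD) class.
# Inputs: a list of comment strings and binary labels (1 = SATD, 0 = non-SATD).

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def duplicate_minority(texts, labels):
    """Naive augmentation: repeat minority-class examples until the classes are balanced."""
    texts, labels = np.array(texts, dtype=object), np.array(labels)
    minority = 1 if (labels == 1).sum() < (labels == 0).sum() else 0
    deficit = abs(int((labels == 0).sum()) - int((labels == 1).sum()))
    idx = np.random.choice(np.where(labels == minority)[0], size=deficit, replace=True)
    return np.concatenate([texts, texts[idx]]), np.concatenate([labels, labels[idx]])

def run_cv(texts, labels, model_name="bert-base-uncased", folds=10, epochs=2):
    """Return the mean F1 over stratified folds; hyperparameters are illustrative."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    scores = []
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=42)
    for train_idx, test_idx in skf.split(texts, labels):
        # Oversample only the training split to avoid leaking duplicates into the test fold.
        train_x, train_y = duplicate_minority(
            [texts[i] for i in train_idx], [labels[i] for i in train_idx])
        model = AutoModelForSequenceClassification.from_pretrained(
            model_name, num_labels=2).to(device)
        optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
        enc = tokenizer(list(train_x), truncation=True, padding=True,
                        max_length=128, return_tensors="pt")
        loader = DataLoader(
            TensorDataset(enc["input_ids"], enc["attention_mask"],
                          torch.tensor(train_y, dtype=torch.long)),
            batch_size=16, shuffle=True)
        model.train()
        for _ in range(epochs):
            for input_ids, mask, y in loader:
                optimizer.zero_grad()
                out = model(input_ids=input_ids.to(device),
                            attention_mask=mask.to(device),
                            labels=y.to(device))
                out.loss.backward()
                optimizer.step()
        # Evaluate on the held-out fold.
        model.eval()
        test_x = [texts[i] for i in test_idx]
        enc = tokenizer(test_x, truncation=True, padding=True,
                        max_length=128, return_tensors="pt").to(device)
        with torch.no_grad():
            preds = model(**enc).logits.argmax(dim=-1).cpu().numpy()
        scores.append(f1_score([labels[i] for i in test_idx], preds))
    return float(np.mean(scores))
```

In a cross-project setting the same idea applies, except that the split is made by project (train on 19 projects, test on the held-out one) rather than by stratified folds over pooled comments.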