Defect prediction models can help prioritize testing, analysis, and code review activities; they have been the subject of substantial research effort in academia and of some applications in industrial contexts. A necessary precondition for creating a defect prediction model is the availability of defect data from a project's history. If these data are noisy, the resulting defect prediction model may be unreliable. One cause of noise in defect datasets is the presence of "dormant defects", i.e., defects discovered several releases after their introduction. A dormant defect can cause a class to be labeled as defect-free when it is not; such a class is therefore "snoring". In this paper, we investigate the impact of snoring on classifiers' accuracy and the effectiveness of a possible countermeasure, i.e., dropping overly recent data from the training set. We analyze the accuracy of 15 machine learning defect prediction classifiers on data from more than 4,000 defects and 600 releases of 19 open source projects from the Apache ecosystem. Our results show that, on average across projects: (i) the presence of dormant defects decreases the recall of defect prediction classifiers, and (ii) removing from the training set the classes that are labeled as not defective in the last release significantly improves the accuracy of the classifiers. In summary, this paper provides insights on how to create defect datasets while mitigating the negative effect of dormant defects on defect prediction.
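To make the countermeasure concrete, the sketch below filters a release-labeled training set so that classes labeled non-defective in the most recent release(s) are dropped, since their defects may simply not have been discovered yet. This is a minimal illustration under assumed conventions: the DataFrame layout, the column names (release, is_defective), and the function name are hypothetical, not the authors' actual tooling.

```python
import pandas as pd

def drop_recent_nondefective(train: pd.DataFrame,
                             n_recent: int = 1) -> pd.DataFrame:
    """Remove potential 'snoring' rows: classes labeled non-defective in
    the n_recent most recent releases, whose defects may be dormant and
    not yet discovered. Assumes columns 'release' (orderable) and
    'is_defective' (bool); both column names are illustrative."""
    recent = sorted(train["release"].unique())[-n_recent:]
    snoring_candidates = train["release"].isin(recent) & ~train["is_defective"]
    return train[~snoring_candidates]

# Tiny usage example with synthetic data (not from the paper's dataset).
if __name__ == "__main__":
    train = pd.DataFrame({
        "class_name":   ["A", "B", "C", "D"],
        "release":      [1, 1, 2, 2],
        "is_defective": [True, False, False, True],
    })
    # Class C (latest release, labeled clean) is dropped as a snoring
    # candidate; class B survives because its release is older.
    print(drop_recent_nondefective(train))
```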