Many methods in defect prediction are "data-hungry"; i.e. (1) given a choice of using more data or some smaller sample, researchers assume that more is better; (2) when data is missing, researchers take elaborate steps to transfer data from another project; and (3) given a choice of older data or some more recent sample, researchers usually ignore older data. Based on an analysis of hundreds of popular GitHub projects (with 1.2 million commits), we suggest that for defect prediction, there is limited value in such data-hungry approaches. The data for our sample of projects spans 84 months and contains 3,728 commits (median values). Across these projects, most of the defects occur very early in their life cycle. Hence, defect predictors learned from the first 150 commits and the first four months of data perform just as well as anything else. This means that, contrary to the "data-hungry" approach, (1) small samples of data from these projects are all that is needed for defect prediction; (2) transfer learning has limited value since it is needed only for the first 4 of 84 months (i.e. just 4% of the life cycle); and (3) after the first few months, we need not continually update our defect prediction models. We hope these results inspire other researchers to adopt a "simplicity-first" approach to their work. Certainly, there are domains that require a complex and data-hungry analysis. But before assuming complexity, it is prudent to check the raw data, looking for "shortcuts" that simplify the whole analysis.
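To make the "early life cycle" idea concrete, the following is a minimal sketch of how one might restrict training data to a project's first 150 commits and first four months before fitting a defect predictor. The column names, process metrics, learner, and synthetic data below are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: train a defect predictor only on "early" commits, evaluate on the rest.
# All feature names and the synthetic data generator are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 3728  # median number of commits per project reported above
start = pd.Timestamp("2015-01-01")
commits = pd.DataFrame({
    "timestamp": start + pd.to_timedelta(np.sort(rng.integers(0, 84 * 30, n)), unit="D"),
    "lines_added": rng.poisson(40, n),
    "lines_deleted": rng.poisson(15, n),
    "files_touched": rng.poisson(3, n) + 1,
})
# Synthetic label: larger changes are more likely to be buggy (for illustration only).
logit = 0.02 * commits["lines_added"] + 0.3 * commits["files_touched"] - 3
commits["buggy"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# "Early" training window: the first 150 commits that fall within the first 4 months.
cutoff = commits["timestamp"].min() + pd.DateOffset(months=4)
early = commits[commits["timestamp"] < cutoff].head(150)

features = ["lines_added", "lines_deleted", "files_touched"]
model = LogisticRegression(max_iter=1000).fit(early[features], early["buggy"])

# Evaluate on everything that comes after the early window.
later = commits[commits["timestamp"] >= cutoff]
pred = model.predict(later[features])
print("recall on later commits:", round(recall_score(later["buggy"], pred), 2))
```

In this setup, the only design choice that matters for the paper's claim is the training filter (first 150 commits, first four months); the rest of the pipeline is a standard supervised learner and could be swapped for any other classifier.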