Before researchers rush to reason across all available data, they should first check whether the information is densest within some small region. We say this since, in 240 GitHub projects, we find that the information in that data ``clumps'' towards the earliest parts of the project. In fact, a defect prediction model learned from just the first 150 commits works as well as, or better than, state-of-the-art alternatives. Using just this early life cycle data, we can build models very quickly (using weeks, not months, of CPU time). Also, we can find simple models (with just two features) that generalize to hundreds of software projects. Based on this experience, we warn that prior work on generalizing software engineering defect prediction models may have needlessly complicated an inherently simple process. Further, prior work that focused on later life cycle data now needs to be revisited, since its conclusions were drawn from relatively uninformative regions. Replication note: all our data and scripts are online at https://github.com/snaraya7/early-defect-prediction-tse.
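To make the early-data idea concrete, here is a minimal sketch (not the authors' released pipeline) of training a defect predictor from only a project's first 150 commits, using a simple two-feature learner. The column names (commit_date, la, lt, buggy), the CSV layout, and the choice of logistic regression are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming commit-level data with a defect label.
# All column names and the learner choice are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression


def train_early_model(commits: pd.DataFrame,
                      features=("la", "lt"),   # two assumed features, e.g. lines added / lines total
                      label="buggy",           # assumed binary defect label
                      n_early=150):
    """Fit a simple two-feature model using only the earliest n_early commits."""
    early = commits.sort_values("commit_date").head(n_early)
    model = LogisticRegression(class_weight="balanced", max_iter=1000)
    model.fit(early[list(features)], early[label])
    return model


# Hypothetical usage:
# commits = pd.read_csv("project_commits.csv", parse_dates=["commit_date"])
# model = train_early_model(commits)
# predictions = model.predict(commits[["la", "lt"]].tail(500))
```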