Labeling a module as defective or non-defective is an expensive task. Hence, there are often limits on how much labeled data is available for training. Semi-supervised classifiers use far fewer labels to train models, but there are numerous semi-supervised methods, including self-labeling, co-training, maximal-margin, and graph-based methods, to name a few. Only a handful of these methods have been tested in SE for (e.g.) predicting defects, and even then, those tests have been on just a handful of projects. This paper takes a wide range of 55 semi-supervised learners and applies them to over 714 projects. We find that semi-supervised "co-training methods" work significantly better than the other approaches. However, co-training must be used with caution, since the specific co-training method should be carefully selected to match a user's particular goals. Also, we warn that a commonly-used co-training method ("multi-view", where different learners get different sets of columns) does not improve predictions while adding substantially to the runtime (11 hours vs. 1.8 hours). Those cautions stated, we find that using these "co-trainers" we can label just 2.5% of the data and then make predictions that are competitive with those made using 100% of the data. It is an open question, worthy of future work, whether these reductions can be achieved in other areas of software analytics. All code used and datasets analyzed during the current study are available at https://GitHub.com/Suvodeep90/Semi_Supervised_Methods.
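To make the co-training idea concrete, the following is a minimal sketch (not the paper's actual implementation): two different learners start from a small labeled pool, here 2.5% of the data as in the abstract, and take turns pseudo-labeling the unlabeled examples they are most confident about. The learner choices, confidence threshold, and round count are illustrative assumptions; this is the single-view variant, without the "multi-view" column splitting that the paper cautions against.

```python
# Hedged sketch of single-view co-training: two learners grow a shared
# labeled pool by pseudo-labeling their most confident unlabeled examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Start with only 2.5% labeled; treat the rest as unlabeled.
n_lab = int(0.025 * len(y))
lab = rng.choice(len(y), size=n_lab, replace=False)
unlab = np.setdiff1d(np.arange(len(y)), lab)
X_lab, y_lab = X[lab], y[lab]
X_un = X[unlab]

learners = [GaussianNB(), DecisionTreeClassifier(max_depth=5, random_state=0)]
for _ in range(10):                          # a few co-training rounds
    if len(X_un) == 0:
        break
    for clf in learners:                     # refit both on the grown pool
        clf.fit(X_lab, y_lab)
    for clf in learners:
        proba = clf.predict_proba(X_un)
        conf = proba.max(axis=1)
        pick = np.argsort(conf)[-20:]        # most confident pseudo-labels
        X_lab = np.vstack([X_lab, X_un[pick]])
        y_lab = np.concatenate([y_lab, proba[pick].argmax(axis=1)])
        X_un = np.delete(X_un, pick, axis=0)

final = GaussianNB().fit(X_lab, y_lab)       # train on labeled + pseudo-labeled
print(f"labeled pool grew from {n_lab} to {len(y_lab)} examples")
```

The key design point is that each learner's confident predictions become training data for the other, so the two models compensate for each other's biases; how many examples to promote per round and when to stop are the main knobs a user would tune.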