Automatically generated static code warnings suffer from a large number of false alarms. Hence, developers only take action on a small percentage of those warnings. To better predict which static code warnings should not be ignored, we suggest that analysts need to look deeper into their algorithms to find choices that better match the particulars of their specific problem. Specifically, we show here that effective predictors of such warnings can be created by methods that locally adjust the decision boundary (between actionable warnings and others). These methods yield a new high-water mark for recognizing actionable static code warnings. For eight open-source Java projects (CASSANDRA, JMETER, COMMONS, LUCENE-SOLR, ANT, TOMCAT, DERBY) we achieve perfect test results on 4/8 datasets and, overall, a median AUC (area under the true negatives, true positives curve) of 92\%.
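To make "locally adjusting the decision boundary" concrete, here is a minimal illustrative sketch, not the method evaluated in this paper: a k-nearest-neighbor vote over warning feature vectors whose classification threshold can be shifted per query, so borderline warnings can be pushed toward the actionable class. The feature values and labels below are hypothetical.

```python
import math

def knn_local_predict(train, query, k=3, threshold=0.5):
    """Classify `query` (a warning's feature vector) by a vote among
    its k nearest labeled training warnings.

    Labels: 1 = actionable warning, 0 = ignorable (false alarm).
    Lowering `threshold` moves the local decision boundary so that
    fewer actionable neighbors are needed to flag the warning.
    """
    # Sort training warnings by Euclidean distance to the query.
    dists = sorted((math.dist(x, query), y) for x, y in train)
    votes = [y for _, y in dists[:k]]
    # Fraction of the k nearest neighbors labeled actionable.
    score = sum(votes) / k
    return 1 if score >= threshold else 0

# Hypothetical 2-feature warnings: three ignorable, three actionable.
train = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
         ((5, 5), 1), ((6, 5), 1), ((5, 6), 1)]

# A borderline warning: the default boundary ignores it, but a
# locally lowered threshold flags it as actionable.
print(knn_local_predict(train, (3, 3), k=3, threshold=0.5))  # 0
print(knn_local_predict(train, (3, 3), k=3, threshold=0.3))  # 1
```

The point of the sketch is only that the decision rule is tuned around individual examples rather than fixed globally, which is the family of choices the abstract argues analysts should explore.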