It is common practice to discretize continuous defect counts into defective and non-defective classes and use the resulting classes as the target variable when building defect classifiers (discretized classifiers). However, discretizing continuous defect counts incurs a loss of information that may affect the performance and interpretation of the resulting classifiers. An alternative approach is to build regression models that predict defect counts, then discretize the predicted counts into defective and non-defective classes (regression-based classifiers). In this paper, we compare the performance and interpretation of defect classifiers built using both approaches (i.e., discretized classifiers and regression-based classifiers) across six commonly used machine learning classifiers (i.e., linear/logistic regression, random forest, KNN, SVM, CART, and neural networks) and 17 datasets. We find that: i) random forest-based classifiers achieve the best AUC among the studied classifiers under both building approaches; ii) in contrast to common practice, building a defect classifier using discretized defect counts (i.e., discretized classifiers) does not always lead to better performance. Hence, we suggest that future defect classification studies consider building regression-based classifiers, particularly when the defective ratio of the modeled dataset is low. Moreover, we suggest exploring both approaches for building defect classifiers, so that the best-performing classifier can be used when determining the most influential features.
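To make the two approaches concrete, the following is a minimal sketch of both in a scikit-learn workflow. The synthetic data, the features, and the binarization threshold (count > 0 means defective) are illustrative assumptions, not the paper's actual experimental setup.

```python
# A minimal sketch of the two classifier-building approaches, assuming a
# scikit-learn workflow; data and threshold are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))               # hypothetical software metrics
defect_counts = rng.poisson(0.3, size=500)   # continuous defect counts per module

X_tr, X_te, y_tr, y_te = train_test_split(X, defect_counts, random_state=0)

# Approach 1: discretized classifier -- binarize the counts first,
# then train a classifier on the defective/non-defective labels.
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr > 0)
auc_discretized = roc_auc_score(y_te > 0, clf.predict_proba(X_te)[:, 1])

# Approach 2: regression-based classifier -- train a regressor on the raw
# counts, then use the predicted counts as scores for classification.
reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
auc_regression = roc_auc_score(y_te > 0, reg.predict(X_te))

print(f"discretized AUC: {auc_discretized:.3f}, "
      f"regression-based AUC: {auc_regression:.3f}")
```

Since AUC is rank-based, the regression-based classifier's raw predicted counts can be scored directly against the binarized ground truth, without choosing a hard cutoff.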