Deep neural networks (DNNs) are notorious for making more mistakes on classes that have substantially fewer training samples than the others. Such class imbalance is ubiquitous in clinical applications and crucial to handle because the classes with fewer samples most often correspond to critical cases (e.g., cancer) where misclassifications can have severe consequences. To avoid missing such cases, binary classifiers must be operated at high True Positive Rates (TPRs) by setting a higher decision threshold, but this comes at the cost of very high False Positive Rates (FPRs) for problems with class imbalance. Existing methods for learning under class imbalance most often do not take this into account. We argue that prediction accuracy should be improved by emphasizing the reduction of FPRs at high TPRs for problems where misclassification of positive, i.e., critical, class samples is associated with a higher cost. To this end, we pose the training of a DNN for binary classification as a constrained optimization problem and introduce a novel constraint that can be used with existing loss functions to enforce maximal area under the ROC curve (AUC) by prioritizing FPR reduction at high TPRs. We solve the resulting constrained optimization problem using an Augmented Lagrangian method (ALM). Going beyond the binary case, we also propose two possible extensions of the proposed constraint for multi-class classification problems. We present experimental results for image-based binary and multi-class classification applications using an in-house medical imaging dataset, CIFAR10, and CIFAR100. Our results demonstrate that the proposed method improves over the baselines in the majority of cases, attaining higher accuracy on critical classes while reducing the misclassification rate for non-critical class samples.
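To make the high-level idea concrete, the following is a minimal PyTorch-style sketch of an Augmented Lagrangian training step that couples a standard classification loss with a differentiable surrogate constraint on the FPR at a high-TPR operating point. It is illustrative only: the names (fpr_surrogate, alm_step, TARGET_FPR, RHO) and the particular sigmoid relaxation are assumptions of this sketch, not the paper's exact constraint formulation or ALM schedule.

```python
import torch

TARGET_FPR = 0.05   # assumed upper bound on FPR at the chosen high-TPR threshold (hypothetical)
RHO = 10.0          # quadratic penalty weight of the augmented Lagrangian (hypothetical)
lmbda = 0.0         # Lagrange multiplier estimate, updated during training


def fpr_surrogate(logits, labels, tpr_level=0.95):
    """Differentiable proxy for the FPR at the threshold achieving `tpr_level` TPR.

    Replaces the 0/1 indicator with a sigmoid relaxation so gradients can flow.
    Assumes the batch contains both positive and negative samples.
    """
    scores = torch.sigmoid(logits).squeeze(-1)
    pos_scores = scores[labels == 1]
    neg_scores = scores[labels == 0]
    # Threshold at which roughly `tpr_level` of positives are classified as positive;
    # treated as a constant within the step (hence detach).
    thr = torch.quantile(pos_scores.detach(), 1.0 - tpr_level)
    # Soft count of negatives above the threshold -> relaxed FPR.
    return torch.sigmoid((neg_scores - thr) / 0.1).mean()


def alm_step(model, optimizer, bce_loss, x, y):
    """One training step of an augmented Lagrangian scheme for the constraint g <= 0."""
    global lmbda
    logits = model(x)
    loss_cls = bce_loss(logits.squeeze(-1), y.float())
    # Inequality constraint: relaxed FPR at high TPR should not exceed the target.
    g = fpr_surrogate(logits, y) - TARGET_FPR
    # Augmented Lagrangian penalty for an inequality constraint (constant -lambda^2 term dropped).
    penalty = torch.clamp(lmbda + RHO * g, min=0.0) ** 2 / (2.0 * RHO)
    loss = loss_cls + penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Multiplier update; in practice this is often done per epoch rather than per batch.
    with torch.no_grad():
        lmbda = float(torch.clamp(lmbda + RHO * g.detach(), min=0.0))
    return loss.item()
```

In this sketch the quadratic penalty grows whenever the relaxed FPR exceeds the target, and the multiplier accumulates constraint violations over training, which is the usual mechanism by which ALM steers the solution toward feasibility while the base loss drives classification accuracy.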