Optimal performance is critical for decision-making tasks from medicine to autonomous driving; however, common performance measures may be too general or too specific. For binary classifiers, diagnostic tests, or prognosis at a time point, measures such as the area under the receiver operating characteristic curve (AUC) or the area under the precision-recall curve are too general because they include unrealistic decision thresholds. On the other hand, measures such as accuracy, sensitivity, or the F1 score are measures at a single threshold that reflect a single probability or predicted risk, rather than a range of individuals or risks. We propose a method in between, deep ROC analysis, that examines groups of probabilities or predicted risks for more insightful analysis. We translate esoteric measures into familiar terms: the AUC and the normalized concordant partial AUC are balanced average accuracy (a new finding); the normalized partial AUC is average sensitivity; and the normalized horizontal partial AUC is average specificity. Along with post-test measures, we provide a method that can improve model selection in some cases and provide interpretation and assurance for patients in each risk group. We demonstrate deep ROC analysis in two case studies and provide a toolkit in Python.
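To make the stated correspondences concrete, the following is a minimal numerical sketch of the idea, not the authors' Python toolkit: it splits the ROC curve into groups by false positive rate (FPR) and reports, for each group, the normalized partial AUC as average sensitivity, the normalized horizontal partial AUC as average specificity, and their mean as balanced average accuracy. The synthetic scores, the grouping into FPR thirds, and helper names such as `sens_at_fpr` are illustrative assumptions.

```python
# Illustrative sketch of group-wise ROC measures (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 5000)   # scores for actual negatives
pos = rng.normal(1.0, 1.0, 5000)   # scores for actual positives

def sens_at_fpr(f):
    """Empirical sensitivity when the threshold is set to give FPR = f."""
    thr = np.quantile(neg, 1.0 - f)
    return np.mean(pos >= thr)

def spec_at_tpr(t):
    """Empirical specificity when the threshold is set to give TPR = t."""
    thr = np.quantile(pos, 1.0 - t)
    return np.mean(neg < thr)

def group_measures(fpr_lo, fpr_hi, n=201):
    # Normalized partial AUC over the FPR range, i.e. average sensitivity,
    # approximated on a uniform grid of FPR values.
    f_grid = np.linspace(fpr_lo, fpr_hi, n)
    avg_sens = np.mean([sens_at_fpr(f) for f in f_grid])

    # Matching TPR range for the same portion of the ROC curve.
    tpr_lo, tpr_hi = sens_at_fpr(fpr_lo), sens_at_fpr(fpr_hi)

    # Normalized horizontal partial AUC over that TPR range,
    # i.e. average specificity.
    t_grid = np.linspace(tpr_lo, tpr_hi, n)
    avg_spec = np.mean([spec_at_tpr(t) for t in t_grid])

    # Balanced average accuracy for the group.
    return avg_sens, avg_spec, 0.5 * (avg_sens + avg_spec)

# Three groups covering the whole ROC curve by thirds of FPR.
for lo, hi in [(0.0, 1 / 3), (1 / 3, 2 / 3), (2 / 3, 1.0)]:
    s, p, b = group_measures(lo, hi)
    print(f"FPR in [{lo:.2f}, {hi:.2f}]: avg sens = {s:.3f}, "
          f"avg spec = {p:.3f}, balanced avg acc = {b:.3f}")
```

Averaging the per-group balanced average accuracies, weighted by group width, recovers a whole-curve summary in the same spirit as the AUC; the groups themselves show where on the risk spectrum a model performs well or poorly.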