A number of performance metrics are commonly used in the machine learning literature to evaluate classification systems that output categorical decisions. Among the most common are accuracy, total error (one minus accuracy), balanced accuracy, balanced total error (one minus balanced accuracy), F-score, and the Matthews correlation coefficient (MCC). In this document, we review the definitions of these metrics and compare them with the expected cost (EC), a metric introduced in every statistical learning course but rarely used in the machine learning literature. We show that the empirical estimate of the EC is a generalized version of both the total error and the balanced total error. Further, we show its relation to the F-score and the MCC and argue that the EC is superior to them: it is more general, simpler, more intuitive, and better motivated. We highlight several issues with the F-score and the MCC that make them suboptimal metrics. Finally, while the current version of this manuscript focuses exclusively on metrics computed over hard decisions, the EC has the additional advantage of being an excellent tool for measuring the calibration of a system's scores, allowing users to make optimal decisions given a set of posteriors for each class. We leave that discussion for a future version of this manuscript.
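As a concrete illustration of the relationship described above, the following minimal Python sketch computes accuracy, balanced accuracy, and the empirical EC from a list of true labels and decisions. The cost-matrix representation and function names here are our own illustrative choices, not notation from the manuscript; the key point is that with the 0-1 cost the empirical EC reduces to the total error, and with costs inversely proportional to the empirical class priors it reduces to the balanced total error.

```python
def accuracy(y_true, y_pred):
    # Fraction of decisions that match the true class.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred, classes):
    # Average of the per-class recalls (accuracy within each true class).
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

def expected_cost(y_true, y_pred, cost):
    # cost[(i, j)] is the cost of deciding class j when the true class is i.
    # The empirical EC is the average cost incurred over the dataset.
    return sum(cost[(t, p)] for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, with the 0-1 cost `cost[(i, j)] = float(i != j)`, `expected_cost` equals `1 - accuracy`; replacing the off-diagonal entries with `1 / (K * prior_i)` (for `K` classes with empirical priors `prior_i`) yields `1 - balanced_accuracy`.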