Comparing model performances on benchmark datasets is an integral part of measuring and driving progress in artificial intelligence. A model's performance on a benchmark dataset is commonly assessed based on a single performance metric or a small set of metrics. While this enables quick comparisons, it entails the risk of inadequately reflecting model performance if the chosen metrics do not sufficiently cover all performance characteristics. It is unknown to what extent this affects current benchmarking efforts. To address this question, we analysed the current landscape of performance metrics based on data covering 3867 machine learning model performance results from the open repository 'Papers with Code'. Our results suggest that the large majority of metrics currently used have properties that may result in an inadequate reflection of a model's performance. While alternative metrics that address problematic properties have been proposed, they are currently rarely used. Furthermore, we describe ambiguities in reported metrics, which may lead to difficulties in interpreting and comparing model performances.