The development of state-of-the-art systems in different applied areas of machine learning (ML) is driven by benchmarks, which have shaped the paradigm of evaluating generalisation capabilities from multiple perspectives. Although the paradigm is shifting towards more fine-grained evaluation across diverse tasks, the delicate question of how to aggregate the performances has received little attention in the community. In general, benchmarks follow unspoken utilitarian principles, where systems are ranked based on their mean average score over task-specific metrics. Such an aggregation procedure has been viewed as a sub-optimal evaluation protocol, which may have created the illusion of progress. This paper proposes Vote'n'Rank, a framework for ranking systems in multi-task benchmarks under the principles of social choice theory. We demonstrate that our approach can be efficiently utilised to draw new insights into benchmarking in several ML sub-fields and to identify the best-performing systems in research and development case studies. Vote'n'Rank's procedures are more robust than the mean average, while also being able to handle missing performance scores and to determine the conditions under which a system becomes the winner.
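To make the contrast concrete, below is a minimal sketch in Python of how a social-choice procedure can disagree with the mean-average baseline. The score matrix, the system names, and the choice of the Borda count as the illustrative procedure are assumptions for this example only, not data or a prescription from the paper.

```python
import numpy as np

# Hypothetical score matrix: rows are systems, columns are task-specific
# metrics (higher is better). Values are illustrative, not from the paper.
scores = np.array([
    [0.90, 0.40, 0.85],  # system A
    [0.80, 0.95, 0.30],  # system B
    [0.70, 0.90, 0.80],  # system C
])
systems = ["A", "B", "C"]

# Utilitarian baseline: rank systems by their mean score across tasks.
mean_ranking = np.argsort(-scores.mean(axis=1))

# Borda count, a classic social-choice procedure: on each task, a system
# earns one point per system it outperforms; points are summed across
# tasks. Ties are ignored here for simplicity.
borda_points = (scores[:, None, :] > scores[None, :, :]).sum(axis=(1, 2))
borda_ranking = np.argsort(-borda_points)

print("Mean-average ranking:", [systems[i] for i in mean_ranking])
print("Borda ranking:       ", [systems[i] for i in borda_ranking])
```

On this toy matrix the mean average crowns C (its scores are uniformly decent), whereas the Borda count crowns A (it wins two of the three pairwise task comparisons outright), illustrating why the choice of aggregation procedure can change the declared winner.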