Active learning allows machine learning models to be trained with fewer labels while retaining performance similar to traditional fully supervised learning. An active learner selects the most informative data points, requests their labels, and retrains itself. While this approach is promising, it leaves open the problem of determining when the model is `good enough' without the additional labels required for traditional evaluation. In the past, various stopping criteria have been proposed to identify the optimal stopping point. However, optimality can only be expressed as a domain-dependent trade-off between accuracy and the number of labels, and no criterion is superior in all applications. This paper is the first to give actionable advice to practitioners on which stopping criteria to use in a given real-world scenario. We contribute the first large-scale comparison of stopping criteria, quantifying the accuracy/label trade-off with a cost measure; public implementations of all the stopping criteria we evaluate; and an open-source framework for evaluating stopping criteria. Our research enables practitioners to substantially reduce labelling costs by choosing the stopping criterion that best suits their domain.
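To make the loop described above concrete, here is a minimal sketch of pool-based active learning with uncertainty sampling and a simple confidence-threshold stopping criterion. The dataset, model, query strategy, and the 0.9 margin threshold are illustrative assumptions for this sketch, not the criteria evaluated in the paper.

```python
# Illustrative sketch (not the paper's code): pool-based active learning with
# uncertainty (margin) sampling and a hypothetical threshold-based stopping rule.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Seed the labelled set with a few examples from each class.
labelled = list(np.concatenate([
    rng.choice(np.where(y == c)[0], size=5, replace=False) for c in (0, 1)
]))
pool = [i for i in range(len(X)) if i not in labelled]

model = LogisticRegression(max_iter=1000)
for _ in range(100):
    model.fit(X[labelled], y[labelled])

    # Margin sampling: gap between the two most probable classes per point.
    proba = model.predict_proba(X[pool])
    sorted_p = np.sort(proba, axis=1)
    margins = sorted_p[:, -1] - sorted_p[:, -2]

    # Hypothetical stopping criterion: stop once even the least certain
    # pool point is classified with a comfortable margin.
    if margins.min() > 0.9:
        print(f"Stopping after {len(labelled)} labels")
        break

    # Query the label of the most uncertain point and retrain.
    query = pool.pop(int(np.argmin(margins)))
    labelled.append(query)
```

In practice, the fixed-threshold rule would be replaced by whichever stopping criterion the cost measure identifies as best suited to the domain, which is precisely the choice this comparison is designed to inform.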