Active learning is a supervised learning framework that improves predictive performance by adaptively annotating a small number of samples. To realize efficient active learning, one must consider both an acquisition function, which selects the next sample to annotate, and a stopping criterion, which determines when to stop learning. In this study, we propose a stopping criterion based on error stability, which guarantees that the change in generalization error upon adding a new sample is bounded by the annotation cost, and which can be applied to any Bayesian active learning method. We demonstrate that the proposed criterion stops active learning at an appropriate time for various learning models and real datasets.
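To make the loop structure concrete, the sketch below shows a generic Bayesian active learning loop with a stability-style stopping rule. It is a minimal illustration, not the paper's criterion: the Gaussian process model, the maximum-variance acquisition, the synthetic dataset, and the threshold `lambda_cost` (a stand-in for the annotation-cost bound) are all assumptions introduced here for demonstration.

```python
# Minimal sketch: Bayesian active learning with a stability-style stopping rule.
# Stops when the change in a held-out error estimate between successive rounds
# falls below an annotation-cost threshold. All components are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Synthetic 1-D regression task (assumed for illustration).
X_pool = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
y_pool = np.sin(X_pool).ravel() + 0.1 * rng.standard_normal(200)
X_test = np.linspace(0.0, 10.0, 50).reshape(-1, 1)
y_test = np.sin(X_test).ravel()

labeled = list(rng.choice(len(X_pool), size=3, replace=False))
lambda_cost = 1e-4  # hypothetical annotation-cost threshold
prev_error = None

for _ in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
    gp.fit(X_pool[labeled], y_pool[labeled])

    # Held-out MSE as a proxy for generalization error.
    error = np.mean((gp.predict(X_test) - y_test) ** 2)

    # Stopping rule: the error change from the last added sample is
    # smaller than the annotation cost (error-stability stand-in).
    if prev_error is not None and abs(prev_error - error) < lambda_cost:
        print(f"Stopped after {len(labeled)} labels, test MSE {error:.5f}")
        break
    prev_error = error

    # Acquisition: annotate the pool point with the largest predictive variance.
    _, std = gp.predict(X_pool, return_std=True)
    std[labeled] = -np.inf  # exclude already-labeled points
    labeled.append(int(np.argmax(std)))
```

In practice the held-out MSE would be replaced by the paper's bound on the change in generalization error, which requires no labeled test set; the sketch only shows where such a criterion plugs into the loop.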