Active learning has emerged as a standard paradigm in areas where labeled training data are scarce, such as the medical domain. Language models have become the prevalent choice for many natural language tasks owing to the performance gains they offer. However, in several domains, such as medicine, labeled training data are scarce, and these models may also perform poorly when class imbalance is prevalent. Active learning can help boost performance in such settings under a limited labeling budget. To this end, we propose ALLWAS, a novel method for active learning in language models that uses sampling techniques based on submodular optimization and optimal transport. We construct a sampling strategy based on submodular optimization of the designed objective in the gradient domain. Furthermore, to enable learning from few samples, we propose a novel strategy for sampling from Wasserstein barycenters. Our empirical evaluations on standard benchmark datasets for text classification show that our method performs significantly better (a relative increase of more than 20% in some cases) than existing approaches for active learning on language models.
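To make the submodular sampling idea concrete, the following is a minimal sketch (not the authors' implementation) of greedy maximization of a facility-location objective over per-example gradient embeddings, a standard way to select a diverse, representative batch in the gradient domain. The gradient matrix and the budget `k` are placeholder inputs.

```python
# Hedged sketch: greedy facility-location selection over gradient embeddings.
# Assumes `grads` is an (n, d) array of per-example gradient embeddings; the
# objective F(S) = sum_i max_{j in S} sim(i, j) is monotone submodular, so the
# greedy algorithm enjoys the usual (1 - 1/e) approximation guarantee.
import numpy as np

def greedy_facility_location(grads, k):
    """Greedily pick k indices (approximately) maximizing facility location."""
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sim = g @ g.T                      # cosine similarity between examples
    cover = np.zeros(len(grads))       # current max similarity to the chosen set
    selected = []
    for _ in range(k):
        # Marginal gain of each candidate j: how much it improves coverage.
        gains = np.maximum(sim, cover[:, None]).sum(axis=0) - cover.sum()
        gains[selected] = -np.inf      # never re-pick an already chosen index
        j = int(np.argmax(gains))
        selected.append(j)
        cover = np.maximum(cover, sim[:, j])
    return selected
```

In an active-learning loop, `grads` would typically be loss-gradient embeddings of unlabeled pool examples under the current model, and the `k` selected indices would be sent for annotation.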