Labeling data can be an expensive task, as it is usually performed manually by domain experts. This is a burden for deep learning, which depends on large labeled datasets. Active learning (AL) is a paradigm that aims to reduce labeling effort by selecting only the data that the model deems most informative. Little research has been done on AL in a text classification setting, and next to none has involved the more recent, state-of-the-art NLP models. Here, we present an empirical study that compares different uncertainty-based algorithms using BERT$_{base}$ as the classifier. We evaluate the algorithms on two NLP classification datasets: Stanford Sentiment Treebank and KvK-Frontpages. Additionally, we explore heuristics that aim to address presupposed problems of uncertainty-based AL, namely that it does not scale and that it is prone to selecting outliers. Furthermore, we examine the influence of the query-pool size on AL performance. While the proposed heuristics did not improve the performance of AL, our results show that uncertainty-based AL with BERT$_{base}$ outperforms random sampling of data, although this difference in performance can decrease as the query-pool size grows.
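The abstract does not specify which uncertainty measure is used, but a common baseline in uncertainty-based AL is least-confidence sampling: query the pool examples whose highest predicted class probability is lowest. A minimal sketch of that selection step (function name and toy probabilities are illustrative, not taken from the paper):

```python
import numpy as np

def least_confidence_query(probs: np.ndarray, k: int) -> np.ndarray:
    """Select the k pool samples the classifier is least confident about.

    probs: (n_samples, n_classes) array of predicted class probabilities,
           e.g. softmax outputs of a BERT sequence classifier.
    Returns the indices of the k least-confident samples, most uncertain first.
    """
    confidence = probs.max(axis=1)      # top-class probability per sample
    return np.argsort(confidence)[:k]   # lowest confidence = most informative

# Toy unlabeled pool: 4 samples, 3 classes (rows sum to 1).
pool_probs = np.array([
    [0.90, 0.05, 0.05],   # confident prediction
    [0.40, 0.35, 0.25],   # uncertain
    [0.34, 0.33, 0.33],   # near-uniform, most uncertain
    [0.70, 0.20, 0.10],
])
query = least_confidence_query(pool_probs, k=2)
```

In a full AL loop, the `k` queried samples would be sent to an annotator, added to the labeled set, and the classifier retrained; `k` corresponds to the query-pool size whose influence the abstract discusses.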