Labeling data can be an expensive task, as it is usually performed manually by domain experts. This is cumbersome for deep learning, which depends on large labeled datasets. Active learning (AL) is a paradigm that aims to reduce labeling effort by labeling only the data the model deems most informative. Little research has been done on AL in a text classification setting, and next to none has involved the more recent, state-of-the-art Natural Language Processing (NLP) models. Here, we present an empirical study that compares different uncertainty-based algorithms with BERT$_{base}$ as the classifier. We evaluate the algorithms on two NLP classification datasets: Stanford Sentiment Treebank and KvK-Frontpages. Additionally, we explore heuristics that aim to address presupposed problems of uncertainty-based AL, namely that it is unscalable and prone to selecting outliers. Furthermore, we explore the influence of the query-pool size on the performance of AL. While the proposed heuristics did not improve the performance of AL, our results show that uncertainty-based AL with BERT$_{base}$ outperforms random sampling of data. This difference in performance can decrease as the query-pool size grows.
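The uncertainty-based selection step described above can be sketched as least-confidence sampling, one common uncertainty criterion; the paper's exact acquisition functions may differ. The function below is a hypothetical minimal example that, given a classifier's predicted class probabilities over an unlabeled pool, selects the query-pool indices whose top-class probability is lowest:

```python
import numpy as np

def least_confidence_query(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k pool samples the model is least confident about.

    probs: (n_samples, n_classes) array of predicted class probabilities,
           e.g. softmax outputs of a BERT classifier over the unlabeled pool.
    """
    confidence = probs.max(axis=1)       # model confidence = highest class probability
    return np.argsort(confidence)[:k]    # least-confident samples first

# Toy unlabeled pool of 4 samples, binary classification:
probs = np.array([
    [0.95, 0.05],   # very confident -> uninformative
    [0.55, 0.45],   # near the decision boundary -> informative
    [0.80, 0.20],
    [0.51, 0.49],   # most uncertain
])
print(least_confidence_query(probs, k=2))  # -> [3 1]
```

In an AL loop, the selected samples would be sent to an annotator, added to the labeled set, and the classifier retrained before the next query round; the query-pool size studied in the paper corresponds to `k` here.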