Recently, several studies have investigated active learning (AL) for natural language processing tasks to alleviate data dependency. However, for query selection, most of these studies rely mainly on uncertainty-based sampling, which generally does not exploit the structural information of the unlabeled data. This leads to a sampling bias in the batch active learning setting, where several samples are selected at once. In this work, we demonstrate that the amount of labeled training data required for sequence labeling can be reduced by active learning that incorporates both uncertainty and diversity. We examine the effects of our sequence-based approach, which selects weighted diverse samples in the gradient embedding space, across multiple tasks, datasets, and models, and show that it consistently outperforms classic uncertainty-based and diversity-based sampling.
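As a rough illustration of the acquisition idea described above (the abstract itself gives no algorithmic detail), the sketch below selects a batch in gradient-embedding space using k-means++-style seeding: the squared norm of an example's gradient embedding reflects model uncertainty, while distance-weighted sampling enforces diversity among the picks. All function and variable names here are illustrative assumptions, not the paper's actual implementation.

```python
import random

def select_batch(grad_embeddings, k, seed=0):
    """Pick k uncertain yet mutually diverse examples (k-means++ seeding).

    grad_embeddings: list of per-example gradient vectors (lists of floats).
    Sampling proportional to squared distance from already-chosen points
    jointly favors high-uncertainty (large-norm) and diverse examples.
    """
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    n = len(grad_embeddings)
    # First pick: proportional to squared gradient norm (pure uncertainty).
    norms2 = [sum(x * x for x in g) for g in grad_embeddings]
    chosen = [rng.choices(range(n), weights=norms2, k=1)[0]]
    # d2[i] = squared distance from example i to its nearest chosen example.
    d2 = [sq_dist(g, grad_embeddings[chosen[0]]) for g in grad_embeddings]
    while len(chosen) < k:
        # Already-chosen points have d2 == 0, so they cannot be re-drawn.
        idx = rng.choices(range(n), weights=d2, k=1)[0]
        chosen.append(idx)
        d2 = [min(d, sq_dist(g, grad_embeddings[idx]))
              for g, d in zip(grad_embeddings, d2)]
    return chosen
```

In practice the gradient embeddings would come from the last-layer gradients of the sequence model on unlabeled data; here they are just plain vectors so the selection logic stands alone.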