Active Learning (AL) is a powerful tool for learning with less labeled data, particularly in specialized domains, such as legal documents, where unlabeled data is abundant but annotation requires domain expertise and is therefore expensive. Recent works have shown the effectiveness of AL strategies for pre-trained language models. However, most AL strategies require an initial set of labeled samples, which is expensive to acquire. In addition, pre-trained language models have been shown to be unstable during fine-tuning on small datasets, and their embeddings are not semantically meaningful. In this work, we propose a pipeline for effectively using active learning with pre-trained language models in the legal domain. To this end, we leverage the available unlabeled data in three phases. First, we continue pre-training the model to adapt it to the downstream task. Second, we use knowledge distillation to guide the model's embeddings to a semantically meaningful space. Finally, we propose a simple yet effective strategy for finding the initial set of labeled samples with fewer annotation actions than existing methods. Our experiments on Contract-NLI, adapted to the classification task, and on the LEDGAR benchmark show that our approach outperforms standard AL strategies and is more efficient. Furthermore, our pipeline achieves results comparable to the fully supervised approach, with a small performance gap and dramatically reduced annotation cost. Code and the adapted data will be made available.
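The abstract does not spell out the seed-selection strategy, so the snippet below is only an illustrative sketch of a common cold-start heuristic of the kind described above: embed the unlabeled pool and pick, for each k-means cluster, the sample nearest the centroid as the initial set to annotate. The function name, encoder checkpoint, and value of k are assumptions for illustration, not the paper's implementation.

```python
# Illustrative cold-start seed selection (not the paper's exact method):
# embed the unlabeled pool, cluster it, and send the sample nearest each
# centroid to the annotators as the initial labeled set.
import numpy as np
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer

def select_initial_set(unlabeled_texts, k=32,
                       model_name="all-MiniLM-L6-v2"):  # assumed checkpoint
    encoder = SentenceTransformer(model_name)
    emb = encoder.encode(unlabeled_texts, normalize_embeddings=True)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(emb)
    seed = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(emb[members] - km.cluster_centers_[c], axis=1)
        seed.append(int(members[np.argmin(dists)]))
    return seed  # indices into unlabeled_texts to label first
```

Selecting one representative per cluster spreads the seed set across the embedding space, which is why diversity-based heuristics like this are a standard baseline for the AL cold-start problem.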