We analyze the ability of pre-trained language models to transfer knowledge among datasets annotated with different type systems and to generalize beyond the domain and dataset they were trained on. We create a meta-task over multiple datasets focused on the prediction of rhetorical roles. Predicting the rhetorical role a sentence plays in a case decision is an important and frequently studied task in AI & Law. Typically, it requires annotating a large number of sentences to train a model, which can be time-consuming and expensive. Further, the resulting models are restricted to the dataset they were trained on. We fine-tune language models and evaluate their performance across datasets to investigate the models' ability to generalize across domains. Our results suggest that the approach could help overcome the cold-start problem in active or interactive learning, and they show the models' ability to generalize across datasets and domains.
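To make the evaluation protocol concrete, the sketch below shows one plausible way to fine-tune a pre-trained model on a source rhetorical-role dataset and evaluate it zero-shot on a target dataset from a different domain, using the HuggingFace transformers and datasets libraries. The backbone ("bert-base-uncased"), the three-label shared meta-label set, the toy sentences, and the hyperparameters are all illustrative assumptions, not the paper's actual datasets, type systems, or setup.

```python
# A minimal sketch of cross-dataset transfer for rhetorical-role
# classification. The backbone, label set, toy examples, and
# hyperparameters are illustrative assumptions, not the paper's setup.
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["Fact", "Reasoning", "Outcome"]        # assumed shared meta-labels
label2id = {label: i for i, label in enumerate(LABELS)}
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def encode(batch):
    # Tokenize sentences and map string roles to integer class ids.
    enc = tokenizer(batch["sentence"], truncation=True,
                    padding="max_length", max_length=128)
    enc["labels"] = [label2id[r] for r in batch["role"]]
    return enc

# Toy stand-ins for a source dataset (used for fine-tuning) and a target
# dataset from a different domain (used only for evaluation).
source = Dataset.from_list([
    {"sentence": "The plaintiff filed suit in March 2019.", "role": "Fact"},
    {"sentence": "The statute plainly covers such conduct.", "role": "Reasoning"},
    {"sentence": "The judgment below is affirmed.", "role": "Outcome"},
]).map(encode, batched=True)
target = Dataset.from_list([
    {"sentence": "The accident occurred on a rainy night.", "role": "Fact"},
    {"sentence": "Negligence requires a breach of duty.", "role": "Reasoning"},
    {"sentence": "The appeal is dismissed with costs.", "role": "Outcome"},
]).map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=source,
)
trainer.train()                      # fine-tune on the source dataset only

# Zero-shot cross-dataset evaluation: the model never saw target examples.
pred = trainer.predict(target)
macro_f1 = f1_score(pred.label_ids, pred.predictions.argmax(-1),
                    average="macro")
print(f"Cross-dataset macro-F1: {macro_f1:.3f}")
```

The key point of the sketch is that the trainer only ever sees the source dataset; the target dataset is used purely for evaluation, mirroring the cross-dataset transfer setting the abstract describes.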