Models based on bidirectional encoder representations from transformers (BERT) produce state-of-the-art (SOTA) results on many natural language processing (NLP) tasks, such as named entity recognition (NER) and part-of-speech (POS) tagging. However, long documents such as US Supreme Court decisions are difficult to classify with BERT-based models on a first-pass, out-of-the-box basis. In this paper, we experiment with several BERT-based classification techniques on US Supreme Court decisions from the Supreme Court Database (SCDB) and compare them with previously reported SOTA results, including SOTA models designed specifically for long documents. We report results on two classification tasks: (1) a broad classification task with 15 categories and (2) a fine-grained classification task with 279 categories. Our best model achieves an accuracy of 80\% on the 15 broad categories and 60\% on the 279 fine-grained categories, an improvement of 8\% and 28\%, respectively, over previously reported SOTA results.