This paper describes our submission to Tasks 1 and 2 of the case law competition at the Competition on Legal Information Extraction/Entailment 2022 (COLIEE-2022) workshop. Task 1 is a legal case retrieval task, in which a system reads a new case and retrieves, from the provided case law corpus, the cases that support its decision. Task 2 is the legal case entailment task, which involves identifying the paragraph from an existing case that entails the decision of a relevant new case. In both tasks, we employed the neural models Sentence-BERT and Sent2Vec for semantic understanding and the traditional retrieval model BM25 for exact matching. Our team ("nigam") ranked 5th among all teams in both Task 1 and Task 2. Experimental results indicate that the traditional retrieval model BM25 still outperforms the neural network-based models.
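To make the hybrid scoring idea concrete, the following is a minimal sketch of how BM25 lexical scores can be fused with Sentence-BERT cosine similarities to rank candidate cases. It is illustrative only, not the exact pipeline used in our submission; the library choices (rank_bm25, sentence_transformers), the encoder checkpoint, and the fusion weight alpha are assumptions.

```python
# Minimal sketch (assumed libraries and parameters, not the submitted system):
# rank candidate cases by a weighted mix of BM25 and Sentence-BERT scores.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

query = "The appellant seeks judicial review of the refusal decision."
candidates = [
    "The court dismissed the application for judicial review.",
    "The contract was found void for lack of consideration.",
]

# Lexical (exact-match) scoring with BM25 over whitespace-tokenized candidates.
bm25 = BM25Okapi([doc.lower().split() for doc in candidates])
bm25_scores = bm25.get_scores(query.lower().split())

# Semantic scoring with a Sentence-BERT encoder (hypothetical model choice).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
query_emb = encoder.encode(query, convert_to_tensor=True)
cand_embs = encoder.encode(candidates, convert_to_tensor=True)
cos_scores = util.cos_sim(query_emb, cand_embs)[0].tolist()

# Simple weighted fusion of the two signals; the weight is illustrative only.
alpha = 0.7
fused = [alpha * b + (1 - alpha) * c for b, c in zip(bm25_scores, cos_scores)]
ranking = sorted(range(len(candidates)), key=lambda i: fused[i], reverse=True)
print(ranking)  # indices of candidate cases, best match first
```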