This paper describes the performance of team cs60075_team2 at SemEval 2021 Task 1: Lexical Complexity Prediction. The main contribution of this paper is the fine-tuning of transformer-based language models pre-trained on several text corpora: some general (e.g., Wikipedia, BooksCorpus), some drawn from the corpora from which the CompLex dataset was extracted, and others from specific domains such as finance and law. We perform ablation studies on the choice of transformer models and on how their individual complexity scores are aggregated into the final complexity score. Our method achieves a best Pearson correlation of $0.784$ on sub-task 1 (single words) and $0.836$ on sub-task 2 (multi-word expressions).