This paper describes the neural models developed for the DAGPap22 shared task hosted at the Third Workshop on Scholarly Document Processing. The shared task targets the automatic detection of generated scientific papers. Our work focuses on comparing different transformer-based models and on using additional datasets and techniques to deal with imbalanced classes. For our final submission, we used an ensemble of SciBERT, RoBERTa, and DeBERTa fine-tuned with random oversampling. Our model achieved an F1-score of 99.24%. The official evaluation results placed our system third.
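To make the oversampling step concrete, the following is a minimal sketch of how random oversampling can be applied to an imbalanced text-classification training set. The library (imbalanced-learn) and the example texts and labels are illustrative assumptions, not details taken from the paper.

from collections import Counter

from imblearn.over_sampling import RandomOverSampler

# Hypothetical training data: raw texts with binary labels (1 = generated paper).
texts = ["human-written abstract ...", "machine-generated abstract ...", "another human-written text ..."]
labels = [0, 1, 0]

# RandomOverSampler duplicates minority-class samples until classes are balanced.
# It expects a 2-D feature array, so each text is wrapped in a single-element list;
# the raw strings themselves are resampled, before any tokenization.
ros = RandomOverSampler(random_state=42)
texts_resampled, labels_resampled = ros.fit_resample([[t] for t in texts], labels)
texts_resampled = [row[0] for row in texts_resampled]

print(Counter(labels))            # original class distribution
print(Counter(labels_resampled))  # balanced distribution after oversampling

The balanced texts and labels would then be tokenized and passed to the transformer fine-tuning loop as usual.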