We experiment with COVID-Twitter-BERT and RoBERTa models to identify informative COVID-19 tweets. We further experiment with adversarial training to make our models robust. The ensemble of COVID-Twitter-BERT and RoBERTa obtains an F1-score of 0.9096 (on the positive class) on the test data of WNUT-2020 Task 2 and ranks 1st on the leaderboard. The ensemble of the models trained with adversarial training also produces a similar result.