This work describes the development of models to detect patronising and condescending language in extracts of news articles, as part of the SemEval 2022 competition (Task-4). It explores architectures that couple the pre-trained RoBERTa language model with LSTM and CNN layers. The best models ranked 15$^{th}$ in subtask-A with an F1-score of 0.5924 and 12$^{th}$ in subtask-B with a macro-F1 score of 0.3763.
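To illustrate the kind of architecture described above, here is a minimal PyTorch sketch of one plausible variant: a bidirectional LSTM classification head running over a transformer encoder's token representations. The class name, layer sizes, and the random tensor standing in for RoBERTa's output are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class EncoderLSTMClassifier(nn.Module):
    """Illustrative head: an LSTM over encoder token states, with the
    final hidden states projected to class logits. (Sketch only; the
    paper's exact layer sizes and wiring are not specified here.)"""

    def __init__(self, hidden_size=768, lstm_size=256, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(hidden_size, lstm_size,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_size, num_classes)

    def forward(self, encoder_states):
        # encoder_states: (batch, seq_len, hidden_size), e.g. the
        # last_hidden_state of a RoBERTa encoder.
        _, (h_n, _) = self.lstm(encoder_states)
        # Concatenate the final forward and backward hidden states.
        pooled = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.classifier(pooled)

# Stand-in for RoBERTa output: 4 sequences of 16 tokens, 768 dims each.
states = torch.randn(4, 16, 768)
logits = EncoderLSTMClassifier()(states)
print(logits.shape)  # torch.Size([4, 2])
```

In practice the encoder states would come from a Hugging Face `RobertaModel`; a CNN variant would replace the LSTM with one-dimensional convolutions over the same token states.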