Hate speech is a severe issue affecting many online platforms. Several studies have been conducted to develop robust hate speech detection systems, and large language models such as ChatGPT have recently shown great potential across a range of tasks, including hate speech detection. However, understanding the limitations of these models is crucial for building more robust detection systems. To bridge this gap, our study evaluates the weaknesses of the ChatGPT model in detecting hate speech at a granular level across 11 languages. In addition, we investigate how complex expressions of emotion, such as the use of emojis in hate speech, influence the model's performance. Through our analysis, we examine the errors made by the model, shedding light on its shortcomings in detecting certain types of hate speech and highlighting the need for further research and improvements in hate speech detection.