The spread of information through social media platforms can create environments that are hostile to vulnerable communities and can silence certain groups in society. To mitigate such harms, several models have been developed to detect hate and offensive speech. Because misclassification could incorrectly exclude individuals from social media platforms and thereby erode trust, there is a need for explainable and interpretable models. We therefore build an explainable and interpretable high-performance model based on the XGBoost algorithm, trained on Twitter data. On the unbalanced Twitter data, XGBoost outperformed the LSTM, AutoGluon, and ULMFiT models on hate speech detection, with an F1 score of 0.75 versus 0.38, 0.37, and 0.38, respectively. When we down-sampled the data to three balanced classes of approximately 5,000 tweets each, XGBoost again performed best on hate speech detection, with an F1 score of 0.79 versus 0.69, 0.77, and 0.66 for LSTM, AutoGluon, and ULMFiT, respectively. For offensive speech detection on the down-sampled data, XGBoost achieved an F1 score of 0.83, below LSTM's 0.88 but above AutoGluon's 0.82 and ULMFiT's 0.79. Finally, we apply Shapley Additive Explanations (SHAP) to our XGBoost models' outputs to make them explainable and interpretable, in contrast to LSTM, AutoGluon, and ULMFiT, which are black-box models.
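The evaluation setup described above can be sketched as follows. This is a minimal, illustrative example only: it substitutes scikit-learn's `TfidfVectorizer` and `GradientBoostingClassifier` as stand-ins for the paper's feature pipeline and XGBoost model, and the toy tweets, labels, and class encoding (0 = hate, 1 = offensive, 2 = neither) are assumptions, not the paper's data.

```python
# Sketch of a three-class tweet classifier with per-class F1 reporting.
# Stand-ins: TF-IDF features + gradient boosting in place of the paper's
# XGBoost model; toy corpus and labels are purely hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus: 0 = hate, 1 = offensive, 2 = neither
tweets = [
    "hateful slur aimed at a protected group",
    "another hateful attack on a community",
    "rude insult directed at one user",
    "another crude offensive remark",
    "nice weather for a walk today",
    "just sharing my lunch photo",
]
labels = [0, 0, 1, 1, 2, 2]

clf = make_pipeline(
    TfidfVectorizer(),
    GradientBoostingClassifier(n_estimators=10, random_state=0),
)
clf.fit(tweets, labels)
preds = clf.predict(tweets)

# average=None returns one F1 score per class, matching the abstract's
# separate hate-speech and offensive-speech F1 reporting
print(f1_score(labels, preds, average=None))
```

On the real task one would evaluate on held-out data and, as the abstract notes, pass the fitted XGBoost model to SHAP's tree explainer to attribute each prediction to input features.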