The massive spread of hate speech, hateful content targeted at specific subpopulations, is a problem of critical social importance. Automated methods of hate speech detection typically employ state-of-the-art deep learning (DL) text classifiers: large pretrained neural language models of over 100 million parameters, adapted to the task of hate speech detection using relevant labeled datasets. Unfortunately, only a few public labeled datasets of limited size are available for this purpose. We make several contributions with high potential for advancing this state of affairs. We present HyperNetworks for hate speech detection, a special class of DL networks whose weights are regulated by a small-scale auxiliary network. These architectures operate at the character level, as opposed to the word or subword level, and are several orders of magnitude smaller than the popular DL classifiers. We further show that training hate detection classifiers on large amounts of additional, automatically generated examples is beneficial in general, and that this practice especially boosts the performance of the proposed HyperNetworks. We report the results of extensive experiments assessing the performance of multiple neural architectures on hate detection using five public datasets. The assessed methods include the pretrained language models BERT, RoBERTa, ALBERT, MobileBERT, and CharBERT, a variant of BERT that incorporates character embeddings alongside subword embeddings. In addition to the traditional within-dataset evaluation setup, we perform cross-dataset evaluation experiments, testing the generalization of the various models under conditions of data shift. Our results show that the proposed HyperNetworks achieve performance that is competitive with, and in some cases better than, these pretrained language models, while being smaller by orders of magnitude.
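To make the core idea concrete, the following is a minimal sketch of a hypernetwork in the sense used above: a small auxiliary network generates the weights of a main layer from a learned embedding, rather than the main layer storing its weights directly. All sizes and names here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB = 16   # character-embedding size (assumed, for illustration)
HID = 8    # main-layer output size (assumed)
Z = 4      # size of the layer embedding fed to the hypernetwork

# Hypernetwork parameters: a linear map from a small code vector z
# to a full weight matrix for the main network's layer.
W_hyper = rng.normal(scale=0.1, size=(Z, EMB * HID))
z = rng.normal(size=Z)  # learned per-layer embedding (here: random stand-in)

# The main layer's weights are *generated* by the hypernetwork.
W_main = (z @ W_hyper).reshape(EMB, HID)

def main_layer(x):
    """One layer of the main (character-level) network using generated weights."""
    return np.tanh(x @ W_main)

x = rng.normal(size=EMB)   # e.g. one character embedding
h = main_layer(x)
print(h.shape)             # (8,)
```

In a full model, the same hypernetwork can generate weights for many layers from different layer embeddings, which is one way such architectures keep their directly stored parameter count small.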