Text classification with bidirectional LSTM (BiLSTM) architectures is computationally expensive and time-consuming to train. Transformers were introduced to address this, and they deliver strong performance compared to traditional deep learning architectures. In this paper we present a performance comparison between a simple transformer-based network and a Res-CNN-BiLSTM-based network on a cyberbullying text classification problem. The results show that the transformer we trained, with 0.65 million parameters, significantly outperforms the Res-CNN-BiLSTM with 48.82 million parameters, while training faster and generalizing better across metrics. The paper also compares a 1-dimensional character-level embedding network and a 100-dimensional GloVe embedding network against the transformer.
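For concreteness, the following is a minimal sketch of what a transformer classifier in the quoted parameter range might look like, written in PyTorch. The vocabulary size, model width, layer count, and sequence length below are illustrative assumptions, not the paper's reported configuration; they merely show that a ~0.65M-parameter encoder classifier is a very small model.

```python
import torch
import torch.nn as nn

class TinyTransformerClassifier(nn.Module):
    """Small transformer encoder for binary text classification.

    All hyperparameters are illustrative guesses, not the paper's exact
    setup: a 20k-word vocabulary with d_model=32 plus two small encoder
    layers lands near the quoted ~0.65M parameters.
    """
    def __init__(self, vocab_size=20_000, d_model=32, nhead=4,
                 num_layers=2, num_classes=2, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)  # learned positional encoding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=128,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        pos = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(pos)      # (batch, seq, d_model)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))             # mean-pool, then classify

model = TinyTransformerClassifier()
print(sum(p.numel() for p in model.parameters()))   # roughly 0.67M
```

Under these assumed settings the token embedding table dominates the budget (20,000 × 32 ≈ 0.64M weights), with the two encoder layers and the classification head contributing only a few tens of thousands more, which is why such a model can remain far smaller than a 48.82M-parameter Res-CNN-BiLSTM.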