Sentiment analysis is an important task in the field of Natural Language Processing (NLP), in which users' feedback data on a specific issue are evaluated and analyzed. Many deep learning models have been proposed to tackle this task, including the recently introduced Bidirectional Encoder Representations from Transformers (BERT) model. In this paper, we experiment with two BERT fine-tuning methods for the sentiment analysis task on datasets of Vietnamese reviews: 1) a method that uses only the [CLS] token as the input for an attached feed-forward neural network, and 2) another method in which all BERT output vectors are used as the input for classification. Experimental results on two datasets show that models using BERT slightly outperform models using GloVe and FastText. Moreover, on the datasets employed in this study, our proposed BERT fine-tuning method produces a model with better performance than the original BERT fine-tuning method.
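The core difference between the two fine-tuning methods is which part of BERT's output feeds the classifier. The following is a minimal numpy sketch of that difference only, with assumed values (BERT-base hidden size 768, mean pooling as one illustrative way to combine all output vectors); the paper's actual classification heads and training procedure may differ.

```python
import numpy as np

# Assumed illustrative shapes: batch of 2 reviews, sequence length 128,
# BERT-base hidden size 768, binary sentiment (positive / negative).
batch, seq_len, hidden = 2, 128, 768
num_classes = 2

rng = np.random.default_rng(0)
# Stand-in for BERT's final-layer output: one vector per input token.
bert_output = rng.standard_normal((batch, seq_len, hidden))

# Method 1: feed only the [CLS] token (position 0) to a feed-forward layer.
cls_vec = bert_output[:, 0, :]                    # (batch, hidden)
W_cls = rng.standard_normal((hidden, num_classes))
logits_cls = cls_vec @ W_cls                      # (batch, num_classes)

# Method 2: use all BERT output vectors as the classifier input,
# here combined by mean pooling over the sequence (an assumed choice).
pooled = bert_output.mean(axis=1)                 # (batch, hidden)
W_all = rng.standard_normal((hidden, num_classes))
logits_all = pooled @ W_all                       # (batch, num_classes)

print(logits_cls.shape, logits_all.shape)
```

In both cases the classifier produces one logit per sentiment class; only the representation handed to it changes, which is what the two experiments compare.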