Text classification is a fundamental Natural Language Processing task with a wide variety of applications, and deep learning approaches have produced state-of-the-art results on it. While these models have been heavily criticized for their black-box nature, their robustness to slight perturbations in input text has also been a matter of concern. In this work, we carry out a data-focused study evaluating the impact of systematic, practical perturbations on the performance of deep learning based text classification models, namely CNN-, LSTM-, and BERT-based algorithms. The perturbations are induced by adding and removing unwanted tokens, such as punctuation and stop-words, that are minimally associated with the final performance of the model. We show that these deep learning approaches, including BERT, are sensitive to such legitimate input perturbations on four standard benchmark datasets: SST2, TREC-6, BBC News, and tweet_eval. We observe that BERT is more susceptible to the removal of tokens than to their addition. Moreover, the LSTM-based model is slightly more sensitive to input perturbations than the CNN-based model. The work also serves as a practical guide to assessing the impact of discrepancies between train and test conditions on the final performance of models.
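For illustration, the sketch below shows one way such token-level perturbations could be induced on raw input text. This is a minimal, assumed example rather than the authors' implementation: the small stop-word list, the tokenization by whitespace, and the punctuation-insertion rate are all assumptions made for clarity.

import random
import string

# Small illustrative stop-word list; the actual list used in the study is not specified here.
STOP_WORDS = {"a", "an", "the", "is", "are", "was", "were", "in", "on", "of", "to", "and"}

def remove_tokens(text: str, drop_stopwords: bool = True, drop_punct: bool = True) -> str:
    """Perturb by removal: strip stop-words and/or punctuation from the input."""
    kept = []
    for tok in text.split():
        stripped = tok.strip(string.punctuation)
        if drop_stopwords and stripped.lower() in STOP_WORDS:
            continue  # drop stop-word tokens entirely
        kept.append(stripped if drop_punct else tok)
    return " ".join(t for t in kept if t)

def add_tokens(text: str, punct: str = ",", rate: float = 0.2, seed: int = 0) -> str:
    """Perturb by addition: insert a punctuation token after a fraction of the words."""
    rng = random.Random(seed)
    out = []
    for tok in text.split():
        out.append(tok)
        if rng.random() < rate:
            out.append(punct)
    return " ".join(out)

# The same sentence under clean (train-time) and perturbed (test-time) conditions.
clean = "The movie was surprisingly good, despite the slow start!"
print(remove_tokens(clean))  # removal perturbation
print(add_tokens(clean))     # addition perturbation

A study of this kind would then compare model accuracy on the clean test set against accuracy on test sets perturbed in each of these ways, holding the trained model fixed.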