Genre identification is a subclass of non-topical text classification. The main difference between this task and topical classification is that genres, unlike topics, usually do not correspond to simple keywords, and thus they need to be defined in terms of their functions in communication. Neural models based on pre-trained transformers, such as BERT or XLM-RoBERTa, demonstrate SOTA results in many NLP tasks, including non-topical classification. However, in many cases, their downstream application to very large corpora, such as those extracted from social media, can lead to unreliable results because of dataset shift, where some raw texts do not match the profile of the training set. To mitigate this problem, we experiment with individual models as well as with their ensembles. To evaluate the robustness of all models, we use a prediction confidence metric, which estimates the reliability of a prediction in the absence of a gold-standard label. We can evaluate robustness via the confidence gap between the correctly classified texts and the misclassified ones on a labeled test corpus; a higher gap makes it easier to be confident that the classifier made the right decision. Our results show that for all of the classifiers tested in this study, there is a confidence gap, but for the ensembles the gap is bigger, meaning that ensembles are more robust than their individual models.
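The confidence-gap evaluation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the prediction confidence is the maximum softmax probability, and computes the gap as the difference between the mean confidence on correctly classified texts and on misclassified ones.

```python
import numpy as np

def confidence_gap(probs: np.ndarray, labels: np.ndarray) -> float:
    """probs: (n_texts, n_classes) softmax outputs; labels: gold class ids.

    Returns mean confidence on correct predictions minus mean
    confidence on incorrect predictions (the "confidence gap").
    """
    preds = probs.argmax(axis=1)        # predicted class per text
    conf = probs.max(axis=1)            # prediction confidence per text
    correct = preds == labels
    return conf[correct].mean() - conf[~correct].mean()

# Toy example: a classifier that is confident when right, unsure when wrong.
probs = np.array([
    [0.90, 0.10],   # correct (gold = 0), high confidence
    [0.80, 0.20],   # correct (gold = 0), high confidence
    [0.55, 0.45],   # wrong (gold = 1), low confidence
])
labels = np.array([0, 0, 1])
gap = confidence_gap(probs, labels)     # 0.85 - 0.55 = 0.30
```

A larger gap, as reported for the ensembles, means a single confidence threshold can more cleanly separate reliable predictions from unreliable ones when no gold labels are available.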