In natural language processing, extreme multi-label text classification (XMTC) is an emerging but essential task: given a text, retrieve the most relevant labels from an extremely large label set. Large-scale pre-trained models have brought a new trend to this task, yet although they have achieved significant results, effective fine-tuning methods for them remain understudied. Likewise, although label semantics have been introduced in XMTC, the vast semantic gap between texts and labels has not received enough attention. This paper builds a new guide network (GUDN) that helps fine-tune the pre-trained model and then guides the subsequent classification. Furthermore, GUDN uses raw label semantics combined with a label reinforcement strategy to effectively explore the latent space between texts and labels, narrowing the semantic gap and thereby further improving prediction accuracy. Experimental results demonstrate that GUDN outperforms state-of-the-art methods on Eurlex-4k and achieves competitive results on other popular datasets. In an additional experiment, we investigated the influence of input length on the accuracy of Transformer-based models. Our source code is released at https://t.hk.uy/aFSH.
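To make the text-label matching idea concrete, the following is a minimal sketch of scoring labels against a text in a shared latent space, assuming a shared pre-trained encoder and dot-product similarity. It is an illustration of the general approach the abstract describes, not the paper's exact GUDN architecture; the class name, projection head, and hyperparameters are hypothetical.

```python
# Hypothetical sketch: encode texts and raw label names with one shared
# pre-trained encoder so both live in the same semantic space, then score
# each (text, label) pair by dot product. Not the authors' exact model.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TextLabelMatcher(nn.Module):
    def __init__(self, model_name="bert-base-uncased", embed_dim=256):
        super().__init__()
        # Shared encoder for both texts and label names.
        self.encoder = AutoModel.from_pretrained(model_name)
        self.proj = nn.Linear(self.encoder.config.hidden_size, embed_dim)

    def encode(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Take the [CLS] token representation, then project.
        return self.proj(out.last_hidden_state[:, 0])

    def forward(self, text_inputs, label_inputs):
        t = self.encode(**text_inputs)   # (batch, embed_dim)
        l = self.encode(**label_inputs)  # (num_labels, embed_dim)
        # Relevance score of every label for every text.
        return t @ l.T                   # (batch, num_labels)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TextLabelMatcher()

texts = ["regulation on the import of agricultural products"]
labels = ["agriculture", "trade policy", "public health"]  # raw label names
text_inputs = tokenizer(texts, return_tensors="pt", padding=True,
                        truncation=True, return_token_type_ids=False)
label_inputs = tokenizer(labels, return_tensors="pt", padding=True,
                         truncation=True, return_token_type_ids=False)

with torch.no_grad():
    scores = model(text_inputs, label_inputs)  # (1, 3) relevance scores
print(scores)
```

In a multi-label setting such as XMTC, these scores would typically be trained with a binary cross-entropy loss over the label set and thresholded (or top-k ranked) at prediction time.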