Extreme multi-label text classification (XMC) seeks to find relevant labels from an extremely large label collection for a given text input. Many real-world applications can be formulated as XMC problems, such as recommendation systems, document tagging, and semantic search. Recently, transformer-based XMC methods, such as X-Transformer and LightXML, have shown significant improvement over other XMC methods. Despite leveraging pre-trained transformer models for text representation, the procedure of fine-tuning transformer models on a large label space still requires lengthy computation time, even with powerful GPUs. In this paper, we propose a novel recursive approach, XR-Transformer, that accelerates this procedure by recursively fine-tuning transformer models on a series of multi-resolution objectives related to the original XMC objective. Empirical results show that XR-Transformer takes significantly less training time than other transformer-based XMC models while yielding new state-of-the-art results. In particular, on the public Amazon-3M dataset with 3 million labels, XR-Transformer is not only 20x faster than X-Transformer but also improves Precision@1 from 51% to 54%.
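To make the multi-resolution idea concrete, the following is a minimal sketch, not the paper's implementation: it clusters labels into progressively finer resolutions (plain k-means here, rather than the balanced hierarchical clustering a production system would use) and projects the ground-truth labels to coarser pseudo-labels at each stage, which is the sequence of objectives a model would be recursively fine-tuned on. The helpers `build_resolutions` and `project`, the `fine_tune` placeholder, and the resolution schedule are all illustrative assumptions.

```python
# Illustrative sketch of coarse-to-fine multi-resolution objectives.
# Not the official XR-Transformer code; names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def build_resolutions(label_emb, sizes):
    """Cluster labels at each resolution; returns one label->cluster map per size."""
    return [KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(label_emb)
            for k in sizes]

def project(Y_true, label_to_cluster):
    """Map each instance's true labels to pseudo-labels at a coarser resolution."""
    return [sorted({int(label_to_cluster[l]) for l in labels}) for labels in Y_true]

# Toy data: 8 labels with 2-d embeddings, 3 training instances.
label_emb = np.random.RandomState(0).rand(8, 2)
Y_true = [[0, 3], [5], [2, 7]]

model = None  # stands in for a transformer encoder + classification head
for label_to_cluster in build_resolutions(label_emb, sizes=(2, 4, 8)):
    Y_coarse = project(Y_true, label_to_cluster)
    # fine_tune is a hypothetical placeholder: warm-start from `model`,
    # train on the coarser multi-label objective, return the updated model.
    # model = fine_tune(model, X_text, Y_coarse)
    print(Y_coarse)
```

Each pass through the loop trains against fewer, coarser targets than the last, which is what lets early stages converge quickly before the model is refined on the full label space.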