A plethora of methods and algorithms address the classical multi-label document classification problem. However, when it comes to deployment and use in an industry setting, most, if not all, contemporary approaches fail to meet some of the vital requirements of an ideal solution: (i) the ability to operate on variable-length texts and rambling documents; (ii) robustness to the catastrophic forgetting problem; (iii) modularity with respect to online learning and model updates; (iv) the ability to spotlight relevant text while producing a prediction, i.e., visualizing the predictions; (v) the ability to operate on imbalanced or skewed datasets; and (vi) scalability. This paper describes the significance of these problems in detail and proposes a unique neural network architecture that addresses them. The proposed architecture views a document as a sequence of sentences and leverages sentence-level embeddings for the input representation. A hydranet-like architecture is designed to provide granular control over, and improve, modularity, coupled with a weighted loss driving the task-specific heads. In particular, two specific mechanisms are compared: a Bi-LSTM-based and a Transformer-based one. The architecture is benchmarked on several popular datasets, namely Web of Science - 5736, Web of Science - 11967, BBC Sports, and BBC News. The experimental results show that the proposed model outperforms existing methods by a substantial margin. The ablation study compares the impact of the attention mechanism and of applying weighted loss functions to train the task-specific heads of the hydranet.
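To make the described design concrete, the following is a minimal sketch, not the paper's implementation, of the hydranet-style layout: sentence-level embeddings are pooled into a document representation, a shared trunk feeds several task-specific heads, and a weighted sum of per-head losses drives training. All class names, dimensions, and head weights here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def pool_sentences(sent_embs):
    # Mean-pool sentence-level embeddings (n_sentences, emb_dim)
    # into a single document vector (emb_dim,).
    return sent_embs.mean(axis=0)


class HydraNetSketch:
    """Illustrative hydranet-like model: one shared trunk,
    several task-specific classification heads."""

    def __init__(self, emb_dim, hidden_dim, head_sizes, head_weights):
        # Shared trunk parameters.
        self.W_trunk = rng.normal(0.0, 0.1, (emb_dim, hidden_dim))
        # One linear head per label group; heads can be added or
        # retrained independently (the modularity argument).
        self.heads = [rng.normal(0.0, 0.1, (hidden_dim, n)) for n in head_sizes]
        self.head_weights = head_weights

    def forward(self, doc_vec):
        h = np.tanh(doc_vec @ self.W_trunk)  # shared representation
        # Independent sigmoids per label: multi-label, not softmax.
        return [self._sigmoid(h @ W) for W in self.heads]

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))


def weighted_bce_loss(preds, targets, weights):
    # Weighted sum of per-head binary cross-entropies; the weights
    # let rare or hard label groups contribute more to the gradient.
    total = 0.0
    for p, t, w in zip(preds, targets, weights):
        p = np.clip(p, 1e-7, 1.0 - 1e-7)
        total += w * -(t * np.log(p) + (1.0 - t) * np.log(1.0 - p)).mean()
    return total


# Usage: a 4-sentence document, two heads with 3 and 5 labels.
doc = pool_sentences(rng.normal(size=(4, 8)))
model = HydraNetSketch(emb_dim=8, hidden_dim=16,
                       head_sizes=[3, 5], head_weights=[0.7, 0.3])
preds = model.forward(doc)
loss = weighted_bce_loss(preds,
                         [np.array([1.0, 0.0, 1.0]), np.zeros(5)],
                         model.head_weights)
```

The per-head weighting above stands in for the paper's weighted loss over task-specific heads; a real implementation would replace the linear trunk with the Bi-LSTM or Transformer sentence encoder the abstract compares.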