Many tasks in natural language processing can be viewed as multi-label classification problems. However, most existing models are trained with the standard cross-entropy loss and apply a fixed prediction policy (e.g., a threshold of 0.5) to every label, which ignores the varying complexity of, and the dependencies among, different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn the training policies and prediction policies for different labels. The training policies are used to train the classifier with the cross-entropy loss, and the prediction policies are then applied at inference time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method obtains more accurate multi-label classification results.
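To make the two policy types concrete, here is a minimal sketch of (a) a per-label weighted cross-entropy loss standing in for a "training policy" and (b) per-label thresholds standing in for a "prediction policy", contrasted with the fixed 0.5 threshold the abstract criticizes. All probabilities, weights, and thresholds below are illustrative assumptions, not values learned by the paper's meta-learner.

```python
import math

def weighted_bce(probs, labels, weights):
    """Training policy (sketch): per-label weights on binary cross-entropy."""
    loss = 0.0
    for p, y, w in zip(probs, labels, weights):
        loss -= w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return loss

def predict_fixed(probs, threshold=0.5):
    """Standard policy: one fixed threshold shared by all labels."""
    return [int(p >= threshold) for p in probs]

def predict_per_label(probs, thresholds):
    """Prediction policy (sketch): each label gets its own threshold."""
    return [int(p >= t) for p, t in zip(probs, thresholds)]

# Classifier confidences for four labels of one example (illustrative).
probs = [0.62, 0.48, 0.30, 0.55]
labels = [1, 1, 1, 0]               # gold multi-label assignment
weights = [1.0, 2.0, 2.0, 1.0]      # e.g., up-weight rarer labels
thresholds = [0.70, 0.40, 0.25, 0.50]  # what a meta-learner might output

print(round(weighted_bce(probs, labels, weights), 4))
print(predict_fixed(probs))                   # -> [1, 0, 0, 1]
print(predict_per_label(probs, thresholds))   # -> [0, 1, 1, 1]
```

Note how the same probability vector yields different label sets under the fixed and per-label policies: labels with low-confidence but reliable signals (here the second and third) can use lower cut-offs, which is the kind of flexibility a learned prediction policy provides.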