Extreme classification tasks are multi-label tasks with an extremely large number of labels (tags). These tasks are hard because the label space is usually (i) very large, e.g. thousands or millions of labels, (ii) very sparse, i.e. very few labels apply to each input document, and (iii) highly correlated, meaning that the presence of one label changes the likelihood of all other labels. In this work, we propose a self-attention-based variational encoder model that jointly extracts label-label and label-feature dependencies and predicts the labels for a given input. More specifically, we propose a non-autoregressive latent variable model and compare it to a strong autoregressive baseline that predicts each label conditioned on all previously generated labels. Our model can therefore predict all labels in parallel while still capturing both label-label and label-feature dependencies through its latent variables, and it compares favourably to the autoregressive baseline. We apply our models to four standard extreme classification natural language datasets, and to one news video dataset for automated label detection from a lexicon of semantic concepts. Experimental results show that although autoregressive models, which predict labels in a fixed chain order, work well when the label set is small or when only the top-ranked labels must be predicted, our non-autoregressive model surpasses them by around 2% to 6% when more labels must be predicted or when the dataset has a larger label space.
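The contrast between the two decoding schemes can be sketched as follows. This is a toy NumPy illustration of the general idea, not the paper's architecture: the weight matrices, score functions, and latent variable below are hypothetical stand-ins. Autoregressive decoding scores each next label conditioned on the labels already emitted (sequential), while non-autoregressive decoding scores all labels in one pass, with a latent variable `z` carrying the label-label correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_LABELS = 8   # toy label space (real extreme classification: thousands+)
FEAT_DIM = 4

W = rng.normal(size=(NUM_LABELS, FEAT_DIM))    # label-feature weights (hypothetical)
C = rng.normal(size=(NUM_LABELS, NUM_LABELS))  # label-label couplings (hypothetical)

def ar_scores(x, chosen):
    """Score all labels conditioned on the labels emitted so far."""
    s = W @ x
    for j in chosen:
        s = s + C[j]  # shift scores based on each previously emitted label
    return s

def nar_scores(x, z):
    """Score all labels in one shot; z injects label-label correlation."""
    return W @ x + C @ z

def autoregressive_predict(x, max_labels=3):
    """Chain-order decoding: one label per step, conditioned on the past."""
    chosen = []
    for _ in range(max_labels):
        s = ar_scores(x, chosen)
        s[chosen] = -np.inf  # never emit the same label twice
        chosen.append(int(np.argmax(s)))
    return chosen

def non_autoregressive_predict(x, z, max_labels=3):
    """Parallel decoding: all label scores computed at once, take the top-k."""
    s = nar_scores(x, z)
    return [int(j) for j in np.argsort(s)[::-1][:max_labels]]

x = rng.normal(size=FEAT_DIM)      # input features
z = rng.normal(size=NUM_LABELS)    # one latent sample per input
print(autoregressive_predict(x))
print(non_autoregressive_predict(x, z))
```

The key difference: the autoregressive loop needs `max_labels` sequential steps, whereas the non-autoregressive pass is a single matrix computation, which is why it parallelises and why a latent variable is needed to recover the dependencies the chain would otherwise model.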