Multi-label image classification aims to recognize all object labels present in an image. Despite years of progress, small objects, similar objects, and objects with high conditional probability remain the main bottlenecks of previous convolutional neural network (CNN) based models, limited by the representational capacity of convolutional kernels. Recent vision transformer networks use the self-attention mechanism to extract pixel-level features that express richer local semantic information, but they are insufficient for mining global spatial dependence. In this paper, we point out three crucial problems that CNN-based methods encounter and explore the possibility of designing specific transformer modules to settle them. We put forward a Multi-label Transformer architecture (MlTr) built with window partitioning, in-window pixel attention, and cross-window attention, which particularly improves the performance of multi-label image classification. The proposed MlTr achieves state-of-the-art results on prevalent multi-label datasets, reaching 88.5% on MS-COCO, 95.8% on Pascal-VOC, and 65.5% on NUS-WIDE. The code will be available soon at https://github.com/starmemda/MlTr/
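To make the two attention stages named above concrete, the following is a minimal, hypothetical PyTorch sketch of window partitioning with in-window pixel attention followed by cross-window attention. The module name, window summary via mean pooling, head counts, and shapes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: window partitioning, in-window pixel attention,
# and cross-window attention. Details are assumptions, not MlTr's exact design.
import torch
import torch.nn as nn


class WindowedAttentionBlock(nn.Module):
    def __init__(self, dim: int, window: int, heads: int = 4):
        super().__init__()
        self.window = window
        # Attention over the pixels inside each window (local semantics).
        self.pixel_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Attention over per-window summary tokens (global spatial dependence).
        self.window_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) feature map; H and W assumed divisible by the window size.
        B, H, W, C = x.shape
        w = self.window
        nh, nw = H // w, W // w
        # Partition into non-overlapping windows: (B * nh * nw, w * w, C).
        wins = (x.view(B, nh, w, nw, w, C)
                 .permute(0, 1, 3, 2, 4, 5)
                 .reshape(B * nh * nw, w * w, C))
        # In-window pixel attention.
        wins = wins + self.pixel_attn(wins, wins, wins)[0]
        # One summary token per window (mean pooling here, purely illustrative).
        tokens = wins.mean(dim=1).view(B, nh * nw, C)
        # Cross-window attention among the window tokens.
        tokens = tokens + self.window_attn(tokens, tokens, tokens)[0]
        # Broadcast the refined window context back to the pixels of each window.
        wins = wins + tokens.view(B * nh * nw, 1, C)
        # Reverse the window partition back to (B, H, W, C).
        return (wins.view(B, nh, nw, w, w, C)
                    .permute(0, 1, 3, 2, 4, 5)
                    .reshape(B, H, W, C))


if __name__ == "__main__":
    block = WindowedAttentionBlock(dim=96, window=7)
    feats = torch.randn(2, 56, 56, 96)
    print(block(feats).shape)  # torch.Size([2, 56, 56, 96])
```

In this sketch the in-window attention supplies the fine pixel-level features, while the token-level attention lets every window attend to every other window, which is one plausible way to recover the global spatial dependence the abstract says plain pixel attention lacks.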