We study the problem of compositional zero-shot learning for object-attribute recognition. Prior works use visual features extracted with a backbone network pre-trained for object classification, which therefore fails to capture the subtly distinct features associated with attributes. To overcome this limitation, these studies rely on supervision from the linguistic space and use pre-trained word embeddings to better separate and compose attribute-object pairs for recognition. Analogous to the linguistic embedding space, which already provides unique, composition-agnostic embeddings for each object and attribute, we shift the focus back to the visual space and propose a novel architecture that disentangles attribute and object features in the visual space. We use the decomposed visual features to hallucinate embeddings that are representative of seen and novel compositions, which better regularizes the learning of our model. Extensive experiments show that our method outperforms existing work by a significant margin on three datasets: MIT-States, UT-Zappos, and a new benchmark created from VAW. The code, models, and dataset splits are publicly available at https://github.com/nirat1606/OADis.
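To make the core idea concrete, the following is a minimal PyTorch sketch of visual disentanglement and composition hallucination, not the authors' exact OADis architecture: the module names, head designs, and dimensions (`VisualDisentangler`, `attr_head`, `obj_head`, `composer`, 512/300) are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's exact model): split a pooled
# backbone feature into attribute- and object-specific embeddings, then
# "hallucinate" an embedding for a novel attribute-object composition by
# recombining components taken from two different images.
import torch
import torch.nn as nn


class VisualDisentangler(nn.Module):
    def __init__(self, feat_dim: int = 512, emb_dim: int = 300):
        super().__init__()
        # Separate projection heads for attribute- and object-specific features.
        self.attr_head = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU(),
                                       nn.Linear(emb_dim, emb_dim))
        self.obj_head = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU(),
                                      nn.Linear(emb_dim, emb_dim))
        # Composer maps an (attribute, object) embedding pair to a composition embedding.
        self.composer = nn.Sequential(nn.Linear(2 * emb_dim, emb_dim), nn.ReLU(),
                                      nn.Linear(emb_dim, emb_dim))

    def forward(self, feats: torch.Tensor):
        # feats: (B, feat_dim) pooled backbone features, e.g. from a ResNet.
        attr_emb = self.attr_head(feats)
        obj_emb = self.obj_head(feats)
        comp_emb = self.composer(torch.cat([attr_emb, obj_emb], dim=-1))
        return attr_emb, obj_emb, comp_emb

    def hallucinate(self, attr_emb: torch.Tensor, obj_emb: torch.Tensor):
        # Recombine an attribute embedding from one image with an object
        # embedding from another to synthesize an unseen composition embedding.
        return self.composer(torch.cat([attr_emb, obj_emb], dim=-1))


if __name__ == "__main__":
    model = VisualDisentangler()
    feats = torch.randn(4, 512)             # stand-in for backbone features
    attr_emb, obj_emb, comp_emb = model(feats)
    # e.g. attribute of image 0 ("sliced") + object of image 1 ("apple")
    novel = model.hallucinate(attr_emb[0:1], obj_emb[1:2])
    print(comp_emb.shape, novel.shape)       # torch.Size([4, 300]) torch.Size([1, 300])
```

In a full system, the hallucinated composition embeddings would be matched against label embeddings for both seen and novel attribute-object pairs, serving as the regularization signal the abstract describes.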