Fashion compatibility models enable online retailers to generate large numbers of high-quality outfit compositions with little effort. However, effective fashion recommendation demands precise, per-customer service grounded in a deeper cognition of fashion. In this paper, we conduct the first study of fashion cognitive learning, i.e., fashion recommendation conditioned on personal physical information. To this end, we propose a Fashion Cognitive Network (FCN) that learns the relationships between the visual-semantic embeddings of outfit compositions and the appearance features of individuals. FCN contains two submodules: an outfit encoder and a Multi-Label Graph Convolutional Network (ML-GCN). The outfit encoder uses a convolutional layer to encode an outfit into an outfit embedding, while the ML-GCN learns label classifiers via stacked graph convolutional layers. We conducted extensive experiments on the newly collected O4U dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods.
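As a rough illustration of the architecture sketched above (not the authors' implementation), the following PyTorch snippet pairs a convolutional outfit encoder with a stacked-GCN branch that turns label embeddings into per-label classifiers. All module names, dimensions, and the way user appearance features are fused are assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """Single graph convolution: H' = A_hat @ H @ W."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, h, adj):
        # adj: (L, L) normalized label-correlation matrix; h: (L, in_dim)
        return adj @ h @ self.weight

class FashionCognitiveNet(nn.Module):
    """Hypothetical sketch of the FCN described in the abstract:
    a convolutional outfit encoder plus a stacked-GCN branch that
    maps label embeddings to classifier weights."""
    def __init__(self, item_dim, user_dim, embed_dim, label_dim):
        super().__init__()
        # Outfit encoder: a 1-D convolution over the item sequence,
        # mean-pooled into a single outfit embedding.
        self.encoder = nn.Conv1d(item_dim, embed_dim, kernel_size=1)
        # Fuse the outfit embedding with personal appearance features
        # (an assumed design; the paper only states both are used).
        self.fuse = nn.Linear(embed_dim + user_dim, embed_dim)
        # Two stacked GCN layers produce the label classifiers.
        self.gcn1 = GCNLayer(label_dim, embed_dim)
        self.gcn2 = GCNLayer(embed_dim, embed_dim)

    def forward(self, items, user_feat, label_emb, adj):
        # items: (B, N, item_dim) visual-semantic features of N items.
        h = self.encoder(items.transpose(1, 2)).mean(dim=2)   # (B, E)
        z = self.fuse(torch.cat([h, user_feat], dim=1))       # (B, E)
        # Label classifiers from the stacked GCN: (L, E).
        w = self.gcn2(F.leaky_relu(self.gcn1(label_emb, adj)), adj)
        # Multi-label scores: inner product of outfit and classifiers.
        return z @ w.t()                                      # (B, L)
```

A forward pass, e.g. `model(items, user_feat, label_emb, adj)`, returns one score per label; the inner-product readout follows the standard ML-GCN recipe, in which GCN outputs act as classifier weights applied to the input representation (here, the user-conditioned outfit embedding).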