Personalized outfit recommendation has recently been in the spotlight with the rapid growth of the online fashion industry. However, outfit recommendation poses two significant challenges that must be addressed. The first challenge is that outfit recommendation often requires a complex and large model that utilizes visual information, incurring huge memory and time costs. One natural way to mitigate this problem is to compress such a cumbersome model with knowledge distillation (KD) techniques that leverage the knowledge of a pretrained teacher model. However, existing KD approaches in recommender systems (RS) are hard to apply to outfit recommendation because they require the ranking of all possible outfits, whose number grows exponentially with the number of constituent clothing items. Therefore, we propose a new KD framework for outfit recommendation, called False Negative Distillation (FND), which exploits false-negative information from the teacher model without requiring the ranking of all candidates. The second challenge is that the explosive number of outfit candidates amplifies the data sparsity problem, often leading to poor outfit representations. To tackle this issue, inspired by the recent success of contrastive learning (CL), we introduce a CL framework for outfit representation learning with two proposed data augmentation methods. Quantitative and qualitative experiments on outfit recommendation datasets demonstrate the effectiveness and soundness of our proposed methods.
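For concreteness, CL frameworks of this kind typically optimize an InfoNCE-style objective over augmented views; the abstract does not specify the exact loss used, so the NT-Xent formulation of SimCLR is assumed here purely as an illustration:

\mathcal{L}_{\mathrm{CL}} = -\log \frac{\exp\!\left(\mathrm{sim}(z_i, z_j)/\tau\right)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp\!\left(\mathrm{sim}(z_i, z_k)/\tau\right)},

where $z_i$ and $z_j$ are embeddings of two augmented views of the same outfit (a positive pair), $\mathrm{sim}(\cdot,\cdot)$ denotes cosine similarity, $\tau$ is a temperature hyperparameter, and the denominator sums over the remaining $2N-1$ embeddings in a batch of $N$ outfits, which serve as negatives.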