The emergence of online multi-modal sharing platforms (e.g., TikTok, YouTube) is driving personalized recommender systems to incorporate various modalities (e.g., visual, textual, and acoustic) into latent user representations. While existing works on multi-modal recommendation exploit multimedia content features to enhance item embeddings, their representation capability is limited by heavy reliance on labels and weak robustness to sparse user behavior data. Inspired by recent progress in self-supervised learning for alleviating label scarcity, we explore deriving self-supervision signals by effectively learning modality-aware user preferences and cross-modal dependencies. To this end, we propose a new Multi-Modal Self-Supervised Learning (MMSSL) method that tackles two key challenges. Specifically, to characterize the inter-dependency between the user-item collaborative view and the item multi-modal semantic view, we design a modality-aware interactive structure learning paradigm that uses adversarial perturbations for data augmentation. In addition, to capture how users' modality-aware interaction patterns interweave with one another, we introduce a cross-modal contrastive learning approach that jointly preserves inter-modal semantic commonality and user preference diversity. Experiments on real-world datasets verify the superiority of our method over various state-of-the-art multimedia recommendation baselines. The implementation is released at: https://github.com/HKUDS/MMSSL.
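To make the cross-modal contrastive objective mentioned in the abstract concrete, the following minimal PyTorch sketch shows one common way such an objective can be instantiated: an InfoNCE-style loss that pulls together different modality views (e.g., visual and textual) of the same user while pushing apart views of different users. The tensor names (z_visual, z_textual), the temperature value, and the symmetric loss form are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(z_visual: torch.Tensor,
                        z_textual: torch.Tensor,
                        temperature: float = 0.2) -> torch.Tensor:
    """Hypothetical cross-modal InfoNCE loss.

    z_visual, z_textual: [num_users, dim] modality-aware user representations.
    """
    z_v = F.normalize(z_visual, dim=-1)
    z_t = F.normalize(z_textual, dim=-1)
    # Similarity of every user's visual view against every user's textual view.
    logits = z_v @ z_t.t() / temperature          # [num_users, num_users]
    # Positive pairs lie on the diagonal: same user, different modality.
    labels = torch.arange(z_v.size(0), device=z_v.device)
    # Symmetric loss over both matching directions.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

if __name__ == "__main__":
    torch.manual_seed(0)
    z_vis, z_txt = torch.randn(8, 64), torch.randn(8, 64)
    print(cross_modal_infonce(z_vis, z_txt).item())
```

In this sketch the diagonal entries act as self-supervision signals, so no interaction labels are needed; the released MMSSL code may combine such a term with its adversarial structure-learning augmentation in a different form.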