Recently, multimodal recommendation (MMR) has gained increasing attention for alleviating the data sparsity problem of traditional recommender systems by incorporating modality-based representations. Although MMR exhibits notable improvements in recommendation accuracy, we empirically validate that increasing the quantity or variety of modalities leads to greater leakage of users' sensitive information due to entangled causal relationships, risking fair representation learning. Meanwhile, existing fair representation learning approaches mostly assume that sensitive information is leaked solely from users' interaction data and do not explicitly model the causal relationships introduced by multimodal data, which limits their applicability in multimodal scenarios. To bridge this gap, we propose FMMRec, a fairness-aware multimodal recommendation approach. Specifically, inspired by causal inference techniques, we disentangle biased and filtered modal embeddings, enabling the mining of modality-based unfair and fair user-user relations and thereby enhancing both the fairness and informativeness of user representations. By addressing the causal effects of sensitive attributes on user preferences, our approach aims to achieve counterfactual fairness in multimodal recommendations. Experiments on two public datasets demonstrate the superiority of FMMRec over state-of-the-art baselines. Our source code is available at https://github.com/WeixinChen98/FMMRec.
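To make the disentanglement idea concrete, the sketch below shows one common way to split a modal embedding into a "biased" part (trained to predict the sensitive attribute) and a "filtered" part (adversarially trained to hide it). This is a minimal illustration under assumed names and shapes, not the authors' actual FMMRec implementation; `ModalDisentangler`, `disentangle_losses`, and all dimensions are hypothetical.

```python
# Minimal PyTorch sketch of adversarial disentanglement of a modal embedding.
# All module and variable names are illustrative assumptions, not FMMRec's code.
import torch
import torch.nn as nn


class ModalDisentangler(nn.Module):
    """Splits a modal embedding into a 'biased' part that should predict
    the sensitive attribute and a 'filtered' part that should not."""

    def __init__(self, modal_dim: int, hidden_dim: int, num_sensitive_classes: int):
        super().__init__()
        self.biased_enc = nn.Sequential(
            nn.Linear(modal_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))
        self.filtered_enc = nn.Sequential(
            nn.Linear(modal_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))
        # Discriminator tries to recover the sensitive attribute from each part.
        self.discriminator = nn.Linear(hidden_dim, num_sensitive_classes)

    def forward(self, modal_emb: torch.Tensor):
        b = self.biased_enc(modal_emb)    # should retain sensitive information
        f = self.filtered_enc(modal_emb)  # should be purged of sensitive information
        return b, f


def disentangle_losses(model: ModalDisentangler,
                       modal_emb: torch.Tensor,
                       sensitive_label: torch.Tensor) -> torch.Tensor:
    ce = nn.CrossEntropyLoss()
    b, f = model(modal_emb)
    # Encourage the biased embedding to be predictive of the sensitive attribute...
    loss_biased = ce(model.discriminator(b), sensitive_label)
    # ...and the filtered embedding to be non-predictive (adversarial sign flip).
    # In practice this min-max game is trained with alternating updates or a
    # gradient reversal layer; the combined loss here is only for illustration.
    loss_filtered = -ce(model.discriminator(f), sensitive_label)
    return loss_biased + loss_filtered


# Toy usage with assumed dimensions: a batch of 8 modal embeddings of size 64
# and a binary sensitive attribute.
model = ModalDisentangler(modal_dim=64, hidden_dim=32, num_sensitive_classes=2)
emb = torch.randn(8, 64)
labels = torch.randint(0, 2, (8,))
loss = disentangle_losses(model, emb, labels)
loss.backward()
```

The filtered embeddings would then feed fair user-user relation mining, while the biased embeddings identify the unfair relations to discount, as described in the abstract.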