Learning big models and then transferring them to downstream tasks has become the de facto practice in computer vision (CV) and natural language processing (NLP). However, such a unified paradigm remains uncommon in recommender systems (RS). A critical obstacle is that standard recommendation models are built on unshareable identity data, where both users and their interacted items are represented by unique IDs. In this paper, we study a novel scenario in which a user's interaction feedback involves mixture-of-modality (MoM) items. We present TransRec, a straightforward modification of the popular ID-based RS framework. TransRec learns directly from MoM feedback in an end-to-end manner, enabling effective transfer learning in various scenarios without relying on overlapping users or items. We empirically study the transfer ability of TransRec across four different real-world recommendation settings. In addition, we study the effects of scaling the size of the source and target data. Our results suggest that learning recommenders from MoM feedback offers a promising path toward universal recommender systems. Our code and datasets will be made available.
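The abstract only names the core idea; as a rough illustration of why modality-based items enable transfer where IDs do not, the following minimal PyTorch sketch contrasts the two item representations. This is not the paper's actual TransRec architecture: the module names, feature dimensions, and two-tower dot-product scoring are all illustrative assumptions.

```python
# Hypothetical sketch (not TransRec itself): an ID embedding table is bound
# to one catalog, while a content encoder over raw modality features can be
# reused on a new catalog with disjoint users and items.
import torch
import torch.nn as nn

class IDItemTower(nn.Module):
    """Standard ID-based item representation: one embedding row per item ID.
    The table is tied to this catalog, so it cannot transfer across domains."""
    def __init__(self, num_items: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_items, dim)

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        return self.embed(item_ids)

class ModalityItemTower(nn.Module):
    """Modality-based item representation: items are encoded from raw content
    features (e.g. pre-extracted text or image vectors), so the learned
    encoder is shareable across catalogs with no overlapping IDs."""
    def __init__(self, feat_dim: int, dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, item_feats: torch.Tensor) -> torch.Tensor:
        return self.encoder(item_feats)

# With mixture-of-modality feedback, each interacted item is routed through
# the encoder matching its modality; scoring stays a dot product with a
# user vector, so the rest of the ID-based framework is unchanged.
text_tower = ModalityItemTower(feat_dim=768, dim=64)   # assumed text features
image_tower = ModalityItemTower(feat_dim=512, dim=64)  # assumed image features
user_vec = torch.randn(1, 64)
score_text = user_vec @ text_tower(torch.randn(1, 768)).T
score_image = user_vec @ image_tower(torch.randn(1, 512)).T
```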