In collaborative filtering (CF) algorithms, the optimal model is usually learned by globally minimizing the empirical risk averaged over all the observed data. However, a global model often reflects a performance tradeoff among users/items: due to the hard non-convex optimization problems in CF, not all users/items are fitted equally well by a single global model. Ensemble learning can address this issue by learning multiple diverse models, but it usually suffers from efficiency issues on large datasets or with complex algorithms. In this paper, we keep the intermediate models obtained during global model learning as snapshot models, and then adaptively combine the snapshot models for individual user-item pairs using a memory network-based method. Empirical studies on three real-world datasets show that the proposed method can consistently and significantly improve accuracy (by up to 15.9% in relative terms) when applied to a variety of existing collaborative filtering methods.
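To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of combining snapshot models adaptively per user-item pair. It assumes matrix-factorization snapshots saved at different training epochs, and replaces the paper's memory network with a simple softmax gate over hypothetical per-user and per-item scores (`W_user`, `W_item`); all names and shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_snapshots, dim = 100, 200, 5, 8

class MFSnapshot:
    """One snapshot of a matrix-factorization model (hypothetical stand-in
    for an intermediate model saved during global training)."""
    def __init__(self, rng):
        self.P = rng.normal(scale=0.1, size=(n_users, dim))  # user factors
        self.Q = rng.normal(scale=0.1, size=(n_items, dim))  # item factors

    def predict(self, u, i):
        return float(self.P[u] @ self.Q[i])

# Snapshots kept along the training trajectory of the global model.
snapshots = [MFSnapshot(rng) for _ in range(n_snapshots)]

# Hypothetical gating parameters; in the paper, per-pair mixing weights
# come from a memory network rather than these simple score tables.
W_user = rng.normal(scale=0.1, size=(n_users, n_snapshots))
W_item = rng.normal(scale=0.1, size=(n_items, n_snapshots))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict(u, i):
    """Adaptively combine snapshot predictions for a single (u, i) pair."""
    weights = softmax(W_user[u] + W_item[i])  # per-pair mixing weights
    preds = np.array([m.predict(u, i) for m in snapshots])
    return float(weights @ preds)

print(predict(3, 17))
```

The key design point this sketch illustrates is that the mixing weights depend on the specific (user, item) pair, so users/items that are poorly served by the final global model can lean on earlier snapshots that happen to fit them better.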