Large language models (LLMs) have achieved impressive zero-shot performance on various natural language processing (NLP) tasks, demonstrating their capability to perform inference without training examples. Despite this success, no research has yet explored the potential of LLMs to perform next-item recommendation in the zero-shot setting. We identify two major challenges that must be addressed to enable LLMs to act effectively as recommenders: first, the recommendation space can be extremely large for LLMs, and second, LLMs have no knowledge of the target user's previously interacted items and preferences. To address these challenges, we propose a prompting strategy called Zero-Shot Next-Item Recommendation (NIR) prompting that directs LLMs to make next-item recommendations. Specifically, the NIR-based strategy uses an external module to generate candidate items based on user-filtering or item-filtering. Our strategy then applies a 3-step prompting process that guides GPT-3 to carry out subtasks that capture the user's preferences, select representative previously watched movies, and recommend a ranked list of 10 movies. We evaluate the proposed approach with GPT-3 on the MovieLens 100K dataset and show that it achieves strong zero-shot performance, even outperforming some strong sequential recommendation models trained on the full training dataset. These promising results highlight the ample research opportunities for using LLMs as recommenders. The code can be found at https://github.com/AGI-Edgerunners/LLM-Next-Item-Rec.
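To make the described pipeline concrete, the sketch below illustrates one way the 3-step NIR prompting could be wired up: an external user-/item-filtering module supplies candidate items, and the language model is prompted in sequence to summarize the user's preferences, select representative watched movies, and return a ranked top-10 list. This is a minimal sketch under stated assumptions; the function names, the `complete` helper, and the exact prompt wording are hypothetical and not the authors' implementation.

```python
# Illustrative sketch of Zero-Shot NIR prompting (assumed structure, not the authors' code).
# `complete` is a hypothetical wrapper around a GPT-3-style text-completion call.

from typing import Callable, List


def nir_recommend(
    watched_movies: List[str],
    candidate_movies: List[str],      # produced by an external user-/item-filtering module
    complete: Callable[[str], str],   # hypothetical LLM completion function: prompt -> text
) -> str:
    history = ", ".join(watched_movies)
    candidates = ", ".join(candidate_movies)

    # Step 1: prompt the model to capture the user's preferences from the watch history.
    preferences = complete(
        f"The user has watched the following movies: {history}.\n"
        "Summarize what kinds of movies this user prefers."
    )

    # Step 2: prompt the model to select representative previously watched movies.
    representatives = complete(
        f"The user has watched the following movies: {history}.\n"
        f"The user's preferences: {preferences}\n"
        "Select the most representative movies from the watched list."
    )

    # Step 3: prompt the model to recommend a ranked list of 10 movies
    # drawn from the externally generated candidate set.
    ranked_list = complete(
        f"Candidate movies: {candidates}\n"
        f"The user's preferences: {preferences}\n"
        f"Representative watched movies: {representatives}\n"
        "Recommend 10 movies from the candidates, ranked from most to least relevant."
    )
    return ranked_list
```

Restricting step 3 to the externally generated candidate set is what keeps the otherwise enormous recommendation space tractable for the LLM, as noted in the abstract.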