The advent of Large Language Models (LLMs) has revolutionized product recommenders, yet their susceptibility to adversarial manipulation poses critical challenges, particularly in real-world commercial applications. Our approach is the first to exploit human psychological principles, seamlessly modifying product descriptions so that the manipulations are hard to detect. In this work, we investigate cognitive biases as black-box adversarial strategies, drawing parallels between their effects on LLMs and on human purchasing behavior. Through extensive evaluation across models of varying scale, we find that certain biases, such as social proof, consistently boost product recommendation rates and rankings, while others, such as scarcity and exclusivity, surprisingly reduce visibility. Our results demonstrate that cognitive biases are deeply embedded in state-of-the-art LLMs, leading to highly unpredictable behavior in product recommendations and posing significant challenges for effective mitigation.