Using personalized explanations to support recommendations has been shown to increase trust and perceived quality. However, to actually obtain better recommendations, users need a means of modifying the recommendation criteria by interacting with the explanation. We present a novel technique, based on aspect markers, that learns to generate personalized explanations of recommendations from review texts, and we show that human users significantly prefer these explanations over those produced by state-of-the-art techniques. Our work's most important innovation is that it allows users to react to a recommendation by critiquing its textual explanation: removing (or, symmetrically, adding) aspects they dislike or that are no longer relevant (or, symmetrically, that are of interest). The system updates its user model and the resulting recommendations according to the critique. This is based on a novel unsupervised method for single- and multi-step critiquing with textual explanations. Experiments on two real-world datasets show that our system is the first to achieve good performance in adapting to the preferences expressed in multi-step critiquing.