Textual explanations have been shown to improve user satisfaction with machine-generated recommendations. However, current mainstream solutions only loosely connect the learning of explanations with the learning of recommendations: the two are often modeled separately as a rating prediction task and a content generation task. In this work, we propose to strengthen this connection by enforcing sentiment alignment between a recommendation and its corresponding explanation. At training time, the two learning tasks are joined by a latent sentiment vector, which is encoded by the recommendation module and used to make word choices during explanation generation. At both training and inference time, the explanation module is required to generate text whose sentiment matches the sentiment predicted by the recommendation module. Extensive experiments demonstrate that our solution outperforms a rich set of baselines on both the recommendation and explanation tasks, especially in the quality of the generated explanations. More importantly, our user studies confirm that the generated explanations help users better recognize the differences between recommended items and understand why an item is recommended.
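To make the coupling concrete, below is a minimal sketch of how such a jointly trained model could be wired up in PyTorch. All names (`SentimentAlignedRecommender`, `sentiment_enc`, `rating_head`, `joint_loss`) and architectural choices (a GRU decoder initialized from the sentiment vector) are illustrative assumptions, not the paper's actual implementation; the point is only that one shared latent sentiment vector drives both the rating prediction and the word choices of the explanation decoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentimentAlignedRecommender(nn.Module):
    """Illustrative sketch (not the paper's implementation): a rating
    head and an explanation decoder share one latent sentiment vector,
    so the generated text is conditioned on the predicted sentiment."""

    def __init__(self, n_users, n_items, vocab_size, d=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, d)
        self.item_emb = nn.Embedding(n_items, d)
        # Recommendation module: encode a latent sentiment vector s
        # from the user-item pair and predict a rating from it.
        self.sentiment_enc = nn.Sequential(nn.Linear(2 * d, d), nn.Tanh())
        self.rating_head = nn.Linear(d, 1)
        # Explanation module: a GRU decoder whose initial hidden state
        # is s, so every word choice is conditioned on the sentiment.
        self.word_emb = nn.Embedding(vocab_size, d)
        self.decoder = nn.GRU(d, d, batch_first=True)
        self.word_head = nn.Linear(d, vocab_size)

    def forward(self, users, items, expl_in):
        s = self.sentiment_enc(
            torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1))
        rating = self.rating_head(s).squeeze(-1)        # (B,)
        out, _ = self.decoder(self.word_emb(expl_in),   # (B, T, d)
                              s.unsqueeze(0))           # h0: (1, B, d)
        return rating, self.word_head(out), s           # per-step word logits

# Joint objective (illustrative): rating loss plus teacher-forced
# generation loss, tied together through the shared sentiment vector.
# The paper's sentiment-alignment constraint on the generated text
# would be added as a further penalty term here.
def joint_loss(model, users, items, tokens, true_ratings, vocab_size):
    rating, logits, _ = model(users, items, tokens[:, :-1])
    return (F.mse_loss(rating, true_ratings)
            + F.cross_entropy(logits.reshape(-1, vocab_size),
                              tokens[:, 1:].reshape(-1)))
```

At inference time, the abstract additionally requires the generated text to match the predicted sentiment; one way to approximate this (an assumption here, not the paper's stated method) is to score candidate generations with a sentiment classifier and reject or penalize mismatches.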