Personalization of natural language generation plays a vital role in a broad spectrum of tasks, such as explainable recommendation, review summarization, and dialog systems. In these tasks, user and item IDs are important identifiers for personalization. Transformer, which has demonstrated strong language modeling capability, is however not personalized and fails to make use of the user and item IDs, since the ID tokens are not even in the same semantic space as the words. To address this problem, we present a PErsonalized Transformer for Explainable Recommendation (PETER), for which we design a simple and effective learning objective that utilizes the IDs to predict the words in the target explanation, so as to endow the IDs with linguistic meaning and to achieve a personalized Transformer. Besides generating explanations, PETER can also make recommendations, which makes it a unified model for the whole recommendation-explanation pipeline. Extensive experiments show that our small unpretrained model outperforms fine-tuned BERT on the generation task, in terms of both effectiveness and efficiency, which highlights the importance and utility of our design.
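To make the key idea concrete, the sketch below illustrates it in PyTorch. This is our own minimal illustration, not the authors' released code: the class name, dimensions, and the simplified attention mask are assumptions. It shows user/item IDs embedded into the same space as word tokens, a single Transformer run over the joint sequence, and a context-prediction head that forces the ID positions to predict the words of the target explanation, alongside the usual generation and rating outputs.

```python
# Minimal sketch of PETER's core idea (illustrative, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PETERSketch(nn.Module):
    def __init__(self, n_users, n_items, vocab_size, d_model=512, n_heads=2, n_layers=2):
        super().__init__()
        # IDs get their own embeddings, projected into the same space as words.
        self.user_emb = nn.Embedding(n_users, d_model)
        self.item_emb = nn.Embedding(n_items, d_model)
        self.word_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.word_head = nn.Linear(d_model, vocab_size)  # shared by both word tasks
        self.rating_head = nn.Linear(d_model, 1)         # recommendation output

    @staticmethod
    def attention_mask(seq_len):
        # Left-to-right (causal) mask, relaxed so that the user position (0)
        # can also attend to the item position (1).
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        mask[0, 1] = 0.0
        return mask

    def forward(self, user, item, words):
        # Joint sequence [user, item, w_1 ... w_T]; shape (seq_len, batch, d_model).
        u = self.user_emb(user).unsqueeze(0)
        i = self.item_emb(item).unsqueeze(0)
        w = self.word_emb(words).transpose(0, 1)
        x = torch.cat([u, i, w], dim=0)
        h = self.encoder(x, mask=self.attention_mask(x.size(0)))
        ctx_logits = self.word_head(h[1])            # context prediction: ID position -> explanation words
        lm_logits = self.word_head(h[2:])            # explanation generation: next-word prediction
        rating = self.rating_head(h[0]).squeeze(-1)  # rating prediction from the user position
        return ctx_logits, lm_logits, rating

# Hypothetical usage with random data. The context-prediction loss ties the ID
# representations to the vocabulary, which is what endows the IDs with
# linguistic meaning.
model = PETERSketch(n_users=1000, n_items=2000, vocab_size=5000)
user, item = torch.randint(0, 1000, (4,)), torch.randint(0, 2000, (4,))
words = torch.randint(0, 5000, (4, 10))  # target explanation tokens
ctx_logits, lm_logits, rating = model(user, item, words)
ctx_loss = F.cross_entropy(
    ctx_logits.repeat_interleave(words.size(1), dim=0), words.reshape(-1))
```

Because the context-prediction head, the language-modeling head, and the rating head all read from the same Transformer states, a single model covers the whole recommendation-explanation pipeline, which is the unification the abstract refers to.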