Although recent neural conversation models have shown great potential, they often generate bland and generic responses. While various approaches have been explored to diversify the output of the conversation model, the improvement often comes at the cost of decreased relevance. In this paper, we propose a method to jointly optimize diversity and relevance that essentially fuses the latent space of a sequence-to-sequence model and that of an autoencoder model by leveraging novel regularization terms. As a result, our approach induces a latent space in which the distance and direction from the predicted response vector roughly match the relevance and diversity, respectively. This property also lends itself well to an intuitive visualization of the latent space. Both automatic and human evaluation results demonstrate that the proposed approach brings significant improvement compared to strong baselines in both diversity and relevance.
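A minimal sketch of the kind of latent-space fusion the abstract describes, assuming a seq2seq context encoder and a response autoencoder that share a decoder. The function name, the interpolation-based reconstruction term, and the MSE distance regularizer are illustrative assumptions, not the paper's exact objective.

```python
# Illustrative sketch (PyTorch) of fusing a seq2seq latent space with an
# autoencoder latent space via regularization terms. Names and losses here
# are assumptions for exposition, not the authors' published formulation.
import torch
import torch.nn.functional as F

def fusion_losses(z_s2s, z_ae, decoder_nll):
    """Combine decoder loss on interpolated latents with a distance regularizer.

    z_s2s       : latent predicted from the context by the seq2seq encoder  [B, d]
    z_ae        : latent of the reference response from the autoencoder     [B, d]
    decoder_nll : callable mapping a latent batch [B, d] to a scalar NLL
    """
    # Decode from points interpolated between the two latents, encouraging
    # a smooth path between the two spaces being fused.
    u = torch.rand(z_s2s.size(0), 1, device=z_s2s.device)
    z_interp = u * z_s2s + (1.0 - u) * z_ae
    loss_interp = decoder_nll(z_interp)

    # Distance regularizer: pull paired context/response latents together,
    # so that distance from the predicted vector can track relevance.
    loss_fuse = F.mse_loss(z_s2s, z_ae)

    return loss_interp + loss_fuse
```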