Persona-based dialogue systems aim to generate consistent responses conditioned on both the dialogue history and a predefined persona. Unlike conventional dialogue generation, persona-based dialogue must account for both the dialogue context and the persona, which poses a challenge for coherent training: the model requires a careful weight balance between the context and the persona. To achieve this, we propose an effective framework with Persona-Adaptive Attention (PAA), which adaptively integrates the weights of the persona and context information via our designed attention. In addition, a dynamic masking mechanism is applied to the PAA not only to drop redundant information in the context and persona but also to serve as a regularizer that mitigates overfitting. Experimental results demonstrate the superiority of the proposed PAA framework over strong baselines in both automatic and human evaluation. Moreover, the proposed PAA approach performs well in the low-resource regime: it matches larger models trained in the full-data setting while using only 20% to 30% of the training data. Finally, to validate the effectiveness of our design, we evaluated several variants that handle the weighted information in different ways, demonstrating the necessity and sufficiency of our weighting and masking designs.
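The core idea, adaptively weighting persona attention against context attention and then masking negligible entries, can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the gating form (`sigmoid` over a learned projection of the query), the fusion rule, and the magnitude-based `drop_threshold` mask are all hypothetical stand-ins for the PAA design described above.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, kv):
    # Plain scaled dot-product attention (keys and values shared here for brevity).
    scores = q @ kv.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ kv

def persona_adaptive_fusion(query, ctx, persona, w_gate, drop_threshold=0.05):
    """Hypothetical sketch of persona-adaptive weighting:
    attend separately over context and persona, fuse the two outputs
    with an adaptive scalar gate, then mask near-zero contributions."""
    ctx_out = attention(query, ctx)        # attend over dialogue context tokens
    per_out = attention(query, persona)    # attend over persona tokens
    # Adaptive weight computed from the query (hypothetical gating form).
    gate = 1.0 / (1.0 + np.exp(-(query @ w_gate)))   # shape (T, 1), values in (0, 1)
    fused = gate * per_out + (1.0 - gate) * ctx_out
    # Dynamic masking: zero out dimensions whose fused magnitude is negligible,
    # standing in for the redundancy-dropping mask described in the abstract.
    mask = (np.abs(fused) > drop_threshold).astype(fused.dtype)
    return fused * mask, gate

rng = np.random.default_rng(0)
d = 8
query = rng.standard_normal((4, d))      # 4 decoder positions
ctx = rng.standard_normal((6, d))        # 6 context tokens
persona = rng.standard_normal((3, d))    # 3 persona tokens
w_gate = rng.standard_normal((d, 1))
fused, gate = persona_adaptive_fusion(query, ctx, persona, w_gate)
```

Per position, the gate shifts the response representation toward the persona (gate near 1) or the context (gate near 0), which is the weight-balancing behavior the framework is designed to learn.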