Multi-party, multi-turn dialogue comprehension poses unprecedented challenges in handling the complicated scenarios that arise from multiple speakers and the criss-crossed discourse relationships among speaker-aware utterances. Most existing methods treat dialogue contexts as plain text and pay insufficient attention to the crucial speaker-aware clues. In this work, we propose an enhanced speaker-aware model with masking attention and heterogeneous graph networks to comprehensively capture discourse clues from both speaker properties and speaker-aware relationships. With such comprehensive speaker-aware modeling, experimental results show that our speaker-aware model achieves state-of-the-art performance on the benchmark dataset Molweni. Case analysis shows that our model strengthens the connections between utterances and their own speakers and captures the speaker-aware discourse relations, both of which are critical for dialogue modeling.
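The masking-attention idea mentioned above can be illustrated with a minimal NumPy sketch: an attention mask restricts each token to attend only to tokens from the same speaker before the softmax. The function names, shapes, and the additive-masking scheme here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def speaker_mask(speaker_ids):
    # speaker_ids: per-token speaker id, shape (seq_len,)
    # mask[i, j] = 1.0 where tokens i and j belong to the same speaker
    ids = np.asarray(speaker_ids)
    return (ids[:, None] == ids[None, :]).astype(np.float32)

def masked_attention(scores, mask):
    # Additive masking: cross-speaker positions get a large negative
    # score so the softmax assigns them (near-)zero weight.
    scores = np.where(mask > 0, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)
```

In practice such a mask would be one attention head or one masking channel among several, so that ordinary (unmasked) context attention is preserved alongside the speaker-restricted view.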