This paper introduces the schemes of Team LingJing's experiments in NLPCC-2022-Shared-Task-4, Multi-modal Dialogue Understanding and Generation (MDUG). The MDUG task can be divided into two phases: multi-modal context understanding and response generation. To fully leverage visual information for both scene understanding and dialogue generation, we propose a scene-aware prompt for the MDUG task. Specifically, we adopt a multi-task strategy to jointly model scene- and session-level multi-modal understanding. Visual captions are used to capture scene information, while a fixed-type templated prompt built from the scene- and session-aware labels further improves dialogue generation performance. Extensive experimental results show that the proposed method achieves state-of-the-art (SOTA) performance compared with other competitive methods, ranking 1st in all three subtasks of the MDUG competition.
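As a rough illustration of how such a fixed-type templated, scene-aware prompt might be assembled, the following minimal Python sketch combines predicted scene/session labels and a visual caption with the dialogue history before feeding it to a generation model. The template wording, label values, separator token, and the function name are illustrative assumptions, not the authors' released implementation.

```python
def build_scene_aware_prompt(scene_label: str, session_label: str,
                             caption: str, dialogue_history: list) -> str:
    """Prepend a fixed templated prompt carrying the predicted scene/session
    labels and the visual caption to the flattened dialogue history."""
    # Hypothetical fixed-type template; the actual wording is an assumption.
    template = (
        f"The current scene is {scene_label}. "
        f"The session topic is {session_label}. "
        f"The image shows {caption}. "
    )
    # Flatten the dialogue turns with a separator token.
    history = " [SEP] ".join(dialogue_history)
    return template + history


if __name__ == "__main__":
    prompt = build_scene_aware_prompt(
        scene_label="kitchen",
        session_label="cooking",
        caption="a person chopping vegetables on a counter",
        dialogue_history=["What are you making?", "A vegetable stir-fry."],
    )
    print(prompt)  # conditioning context passed to the response generator
```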