We participate in the DSTC9 Interactive Dialogue Evaluation Track (Gunasekara et al., 2020) sub-task 1 (Knowledge Grounded Dialogue) and sub-task 2 (Interactive Dialogue). In sub-task 1, we employ a pre-trained language model to generate topic-related responses and propose a response ensemble method for response selection. In sub-task 2, we propose a novel Dialogue Planning Model (DPM) to capture the conversation flow during interaction with humans. We also design an integrated open-domain dialogue system comprising pre-processing, dialogue models, a scoring model, and post-processing, which can generate fluent, coherent, consistent, and human-like responses. We tie for 1st place on human ratings and also achieve the highest METEOR and BERTScore in sub-task 1, and rank 3rd on interactive human evaluation in sub-task 2.