Conversational machine reading (CMR) requires machines to communicate with humans through multi-turn interactions that alternate between two salient dialogue states: decision making and question generation. In the open CMR setting, the more realistic scenario, the retrieved background knowledge is noisy, which poses severe challenges to information transmission. Existing studies commonly train independent or pipelined systems for the two subtasks. However, these methods use hard-label decisions to activate question generation, which ultimately hinders model performance. In this work, we propose an effective gating strategy that smooths the two dialogue states in a single decoder and bridges decision making and question generation to provide a richer dialogue state reference. Experiments on the OR-ShARC dataset show the effectiveness of our method, which achieves new state-of-the-art results.
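To make the contrast concrete, here is a minimal sketch of the difference between a hard-label pipeline and a smoothed (soft) gate. This is an illustrative toy, not the paper's actual architecture: the decision labels, the use of the "Inquire" probability as the gate value, and all function names are assumptions introduced for exposition.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical decision label set, with "inquire" at index 3.
LABELS = ["yes", "no", "irrelevant", "inquire"]

def hard_gate_step(decision_logits, qg_logits):
    """Pipeline baseline: a hard argmax decision either halts the dialogue
    or activates a separate question-generation distribution."""
    decision = LABELS[int(np.argmax(decision_logits))]
    if decision == "inquire":
        return decision, softmax(qg_logits)  # generate a follow-up question
    return decision, None                    # no question is generated

def soft_gate_step(decision_logits, qg_logits, answer_logits):
    """Smoothed alternative (illustrative): the probability mass on
    "inquire" acts as a soft gate that mixes the question-generation and
    answer token distributions inside a single decoder output, so the
    question generator always receives a (possibly down-weighted) signal."""
    p = softmax(decision_logits)
    gate = p[3]  # P(inquire) serves as the gate value
    mixed = gate * softmax(qg_logits) + (1.0 - gate) * softmax(answer_logits)
    return gate, mixed
```

The key design point the sketch captures: the hard gate propagates only a binary signal (question generation is on or off), whereas the soft gate keeps both dialogue states active in one decoder, weighted by the decision confidence.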