Based on the recently proposed transferable dialogue state generator (TRADE), which predicts dialogue states from the utterance-concatenated dialogue context, we propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model as an auxiliary task for task-oriented dialogue state generation. By enabling the model to learn a better representation of long dialogue contexts, our approach addresses the problem that the baseline's performance drops significantly when the input dialogue context sequence is long. In our experiments, the proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on the MultiWOZ 2.0 dataset.
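The abstract does not spell out the utterance tagging technique; below is a minimal sketch of one common realization, assuming the tags mark the speaker of each utterance before the turns are concatenated into the dialogue context. The `[usr]`/`[sys]` token names and the `tag_dialogue_context` helper are illustrative assumptions, not the paper's exact vocabulary.

```python
# Illustrative sketch of speaker-tagged context concatenation.
# Assumption: tags distinguish user and system turns so the encoder
# can separate utterances within the long concatenated context.
USR_TAG, SYS_TAG = "[usr]", "[sys]"

def tag_dialogue_context(turns):
    """Concatenate dialogue turns, prefixing each utterance with a
    speaker tag before feeding the sequence to the state generator."""
    tagged = []
    for speaker, utterance in turns:
        tag = USR_TAG if speaker == "user" else SYS_TAG
        tagged.append(f"{tag} {utterance}")
    return " ".join(tagged)

# Example: a two-turn MultiWOZ-style exchange.
context = tag_dialogue_context([
    ("user", "i need a cheap hotel in the north ."),
    ("system", "city centre north b and b is available ."),
])
print(context)
# [usr] i need a cheap hotel in the north . [sys] city centre north b and b is available .
```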