User simulators (USs) are commonly used to train task-oriented dialogue systems (DSs) via reinforcement learning. The interactions often take place at the semantic level for efficiency, but a gap remains between semantic actions and natural language, causing a mismatch between the training and deployment environments. Incorporating a natural language generation (NLG) module into USs during training can partly address this problem. However, since the policy and NLG of USs are optimised separately, the simulated user utterances may not be natural enough in a given context. In this work, we propose a generative transformer-based user simulator (GenTUS). GenTUS has an encoder-decoder structure, which means it can optimise both the user policy and natural language generation jointly. GenTUS generates both semantic actions and natural language utterances, preserving interpretability and enhancing language variation. In addition, by representing inputs and outputs as word sequences and by using a large pre-trained language model, we achieve generalisability in feature representation. We evaluate GenTUS with automatic metrics and human evaluation. Our results show that GenTUS generates more natural language and is able to transfer to an unseen ontology in a zero-shot fashion. In addition, its behaviour can be further shaped with reinforcement learning, opening the door to training specialised user simulators.
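To make the word-sequence representation concrete, the sketch below shows one illustrative way to flatten dialogue context and user goal into an encoder input string, and to split a decoder output into semantic actions plus an utterance. The tags and schema here are hypothetical (not the paper's exact format); they only illustrate how a seq2seq model can emit actions and an utterance jointly from a single sequence.

```python
# Illustrative sketch of a word-sequence interface for a GenTUS-style
# simulator. The encoder sees one flattened string; the decoder emits
# semantic actions followed by the utterance, so both are generated
# jointly. Tags like <turn>, <acts>, <goal>, <utterance> are invented
# here for illustration and are not the paper's actual schema.

def linearise_input(system_acts, user_goal, turn_id):
    """Flatten system acts and the user goal into one encoder string.

    system_acts: list of (intent, domain, slot, value) tuples
    user_goal:   list of (domain, slot, value) tuples
    """
    acts = " ".join(
        f"[{intent} {domain} {slot} {value}]"
        for intent, domain, slot, value in system_acts
    )
    goal = " ".join(f"[{domain} {slot} {value}]" for domain, slot, value in user_goal)
    return f"<turn> {turn_id} <acts> {acts} <goal> {goal}"


def parse_output(sequence):
    """Split decoder output into semantic actions and the utterance."""
    acts_part, _, utterance = sequence.partition("<utterance>")
    acts = [
        chunk.strip(" []").split()
        for chunk in acts_part.replace("<acts>", "").split("]")
        if chunk.strip(" [")
    ]
    return acts, utterance.strip()


# Usage: encode one turn, then parse a (hypothetical) model output.
enc = linearise_input(
    [("request", "hotel", "area", "?")], [("hotel", "area", "north")], 0
)
acts, utt = parse_output(
    "<acts> [inform hotel area north] <utterance> I want a hotel in the north"
)
```

Because both the actions and the utterance come from one decoded sequence, the utterance stays grounded in the predicted semantic actions, which is what preserves interpretability while the language model supplies variation.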