The next step for intelligent dialog agents is to escape their role as silent bystanders and become proactive. Well-designed proactive behavior can improve human-machine cooperation, as the agent takes a more active role during the interaction and relieves the user of responsibility. However, proactivity is a double-edged sword: poorly executed pre-emptive actions can have a devastating effect not only on the task outcome but also on the relationship with the user. To design adequate proactive dialog strategies, we propose a novel approach that includes both social and task-relevant features of the dialog. The primary goal is to optimize proactive behavior so that it is task-oriented, implying high task success and efficiency, while also being socially effective by fostering user trust. Including both aspects in the reward function for training a proactive dialog agent with reinforcement learning demonstrated the benefit of our approach for more successful human-machine cooperation.
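The combined reward described above can be sketched as follows. This is a minimal illustration only: the specific weights, the per-turn efficiency penalty, and the `trust_score` signal are assumptions for the sketch, not the paper's actual formulation.

```python
# Sketch of a combined reward for a proactive dialog agent,
# mixing a task-oriented component (success and efficiency)
# with a social component (user trust).
# All weights and signal names here are illustrative assumptions.

def combined_reward(task_success: bool, num_turns: int,
                    trust_score: float,
                    w_task: float = 0.7, w_social: float = 0.3) -> float:
    """Return a scalar reward balancing task and social objectives."""
    # Task component: reward success, penalize dialog length (inefficiency).
    task_reward = (1.0 if task_success else -1.0) - 0.05 * num_turns
    # Social component: user trust, assumed normalized to [0, 1].
    social_reward = trust_score
    # Weighted sum combines both aspects into one training signal.
    return w_task * task_reward + w_social * social_reward

# Example: a successful task completed in 6 turns with high reported trust.
r = combined_reward(task_success=True, num_turns=6, trust_score=0.8)
```

In an actual RL setup, this scalar would be returned by the environment at the end of each dialog (or per turn), so the policy is optimized jointly for task success and for maintaining user trust rather than for task metrics alone.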