Customer support via chat requires agents to resolve customer queries with minimum wait time and maximum customer satisfaction. Given that both agents and customers can have varying levels of literacy, the overall quality of agent responses tends to be poor when they are not predefined. However, relying only on static responses can lead to customer detraction, as customers tend to feel they are no longer interacting with a human. It is therefore vital to have variations of the static responses to reduce their monotonicity, yet manually maintaining a list of such variations can be expensive. Given the conversation context and the agent response, we propose an unsupervised framework to generate contextual paraphrases using autoregressive models. We also propose an automated metric based on Semantic Similarity, Textual Entailment, Expression Diversity and Fluency to evaluate the quality of contextual paraphrases, and we demonstrate performance improvements from Reinforcement Learning (RL) fine-tuning that uses the automated metric as the reward function.
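To make the composite metric concrete, the sketch below shows one way such a reward could be assembled from the four components named above. This is not the authors' implementation: the model choices (all-MiniLM-L6-v2, roberta-large-mnli), the equal weights, the unigram-overlap diversity proxy, and the length-based fluency stub are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation) of a composite
# reward combining semantic similarity, textual entailment, expression
# diversity and fluency, as could be fed to an RL fine-tuning loop.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

# Illustrative model choices; the paper does not prescribe these.
_sim_model = SentenceTransformer("all-MiniLM-L6-v2")
_nli = pipeline("text-classification", model="roberta-large-mnli")

def semantic_similarity(reference: str, paraphrase: str) -> float:
    """Cosine similarity between sentence embeddings of the two responses."""
    emb = _sim_model.encode([reference, paraphrase], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

def entailment(reference: str, paraphrase: str) -> float:
    """Probability that the paraphrase is entailed by the reference response."""
    scores = _nli(f"{reference}</s></s>{paraphrase}", top_k=None)
    return next(s["score"] for s in scores if s["label"] == "ENTAILMENT")

def expression_diversity(reference: str, paraphrase: str) -> float:
    """Lexical-novelty proxy: 1 minus unigram Jaccard overlap."""
    a, b = set(reference.lower().split()), set(paraphrase.lower().split())
    return 1.0 - len(a & b) / max(len(a | b), 1)

def fluency(paraphrase: str) -> float:
    """Placeholder stub; a real system might use LM perplexity instead."""
    n = len(paraphrase.split())
    return 1.0 if 3 <= n <= 60 else 0.0

def reward(reference: str, paraphrase: str,
           weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted sum of the four component scores, used as the RL reward."""
    components = (
        semantic_similarity(reference, paraphrase),
        entailment(reference, paraphrase),
        expression_diversity(reference, paraphrase),
        fluency(paraphrase),
    )
    return sum(w * s for w, s in zip(weights, components))

if __name__ == "__main__":
    ref = "Your refund has been processed and should reach your account in 3-5 days."
    para = "We have issued the refund; expect it in your account within 3 to 5 business days."
    print(round(reward(ref, para), 3))
```

In a sketch like this, a generated paraphrase scores highly only when it preserves the meaning of the original agent response (similarity and entailment) while still rewording it (diversity) and remaining well formed (fluency); the relative weighting of the four terms is a tunable assumption.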