Information-seeking conversation systems are increasingly popular in real-world applications, especially in e-commerce. To retrieve appropriate responses for users, it is necessary to compute the matching degree between each candidate response and the user's query together with the historical dialogue utterances. Since the context is usually much longer than the (typically short) responses, it is necessary to expand the responses with richer information. Recent studies on pseudo-relevance feedback (PRF) have demonstrated its effectiveness in query expansion for search engines; we therefore consider expanding responses with PRF information. However, existing PRF approaches are either based on heuristic rules or require heavy manual labeling, making them unsuitable for our task. To alleviate this problem, we treat PRF term selection for response expansion as a learning task and propose a reinforced learning method that can be trained end-to-end without any human annotations. More specifically, we propose a reinforced selector that extracts useful PRF terms to enhance response candidates and a BERT-based response ranker that ranks the PRF-enhanced responses. The performance of the ranker serves as a reward to guide the selector toward useful PRF terms, which boosts overall task performance. Extensive experiments on both standard benchmarks and commercial datasets demonstrate the superiority of our reinforced PRF term selector over other potential soft or hard selection methods. Both case studies and quantitative analysis show that our model selects meaningful PRF terms to expand response candidates and achieves the best results among all baselines on a variety of evaluation metrics. We have also deployed our method in the online production environment of an e-commerce company, where it shows a significant improvement over the existing online ranking system.
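To make the selector-ranker interaction concrete, the following is a minimal sketch of the reinforced PRF term selection loop described above. It assumes a Bernoulli keep/drop policy over candidate PRF term embeddings trained with REINFORCE, and it replaces the BERT-based ranker with a stand-in cosine-similarity scorer; the module and function names are illustrative, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PRFTermSelector(nn.Module):
    """Scores each candidate PRF term and samples a keep/drop decision."""
    def __init__(self, term_dim: int):
        super().__init__()
        self.scorer = nn.Linear(term_dim, 1)

    def forward(self, term_embeddings: torch.Tensor):
        # term_embeddings: (num_terms, term_dim)
        keep_probs = torch.sigmoid(self.scorer(term_embeddings)).squeeze(-1)
        dist = torch.distributions.Bernoulli(keep_probs)
        mask = dist.sample()                  # 1 = keep the PRF term
        log_prob = dist.log_prob(mask).sum()  # needed for the REINFORCE update
        return mask, log_prob

def rank_expanded_response(query_emb, response_emb, kept_term_embs):
    # Placeholder for the BERT-based ranker: scores the PRF-expanded
    # response against the query via cosine similarity.
    expanded = response_emb + kept_term_embs.sum(dim=0)
    return torch.cosine_similarity(query_emb, expanded, dim=0)

# --- one toy training step ---
dim = 32
selector = PRFTermSelector(dim)
optimizer = torch.optim.Adam(selector.parameters(), lr=1e-3)

query_emb = torch.randn(dim)
response_emb = torch.randn(dim)
prf_term_embs = torch.randn(10, dim)   # embeddings of candidate PRF terms

mask, log_prob = selector(prf_term_embs)
kept_terms = prf_term_embs * mask.unsqueeze(-1)
reward = rank_expanded_response(query_emb, response_emb, kept_terms).detach()

loss = -reward * log_prob              # REINFORCE: ranker score as reward
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In the full method, the reward would instead come from the ranking quality of the BERT ranker on the PRF-enhanced responses, so that the selector learns to keep only terms that actually improve ranking.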