Answer selection is the task of choosing the positive answers from a pool of candidate answers for a given question. In this paper, we propose a novel strategy for answer selection, called hierarchical ranking. We introduce three levels of ranking: point-level ranking, pair-level ranking, and list-level ranking. Each level formulates its optimization objective by exploiting supervisory information from a different perspective, while pursuing the same goal of ranking the candidate answers. The three levels of ranking are therefore related and can promote one another. We take the well-performing compare-aggregate model as the backbone and explore three schemes for applying the hierarchical rankings jointly: a scheme under the Multi-Task Learning (MTL) strategy, the Ranking Integration (RI) scheme, and the Progressive Ranking Integration (PRI) scheme. Experimental results on two public datasets, WikiQA and TREC-QA, demonstrate that the proposed hierarchical ranking is effective. Our method achieves state-of-the-art (non-BERT) performance on both TREC-QA and WikiQA.
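The sketch below is a minimal, illustrative PyTorch example (not the paper's actual implementation) of how point-level, pair-level, and list-level objectives could be combined into one joint loss for a single question's candidate list, in the spirit of the MTL scheme; the function names, margin, and loss weights are assumptions.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only; the paper's exact loss formulations may differ.

def point_level_loss(scores, labels):
    # Point-level: score each (question, answer) pair independently as
    # binary classification (positive vs. negative answer).
    return F.binary_cross_entropy_with_logits(scores, labels.float())

def pair_level_loss(scores, labels, margin=0.5):
    # Pair-level: every positive answer should outscore every negative
    # answer for the same question by at least `margin` (hinge loss).
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if pos.numel() == 0 or neg.numel() == 0:
        return scores.new_zeros(())
    diff = margin - (pos.unsqueeze(1) - neg.unsqueeze(0))
    return F.relu(diff).mean()

def list_level_loss(scores, labels):
    # List-level: softmax over the whole candidate list, cross-entropy
    # against the normalized relevance distribution.
    target = labels.float() / labels.float().sum().clamp(min=1.0)
    return -(target * F.log_softmax(scores, dim=0)).sum()

def hierarchical_loss(scores, labels, weights=(1.0, 1.0, 1.0)):
    # Joint objective: weighted sum of the three ranking levels.
    w1, w2, w3 = weights
    return (w1 * point_level_loss(scores, labels)
            + w2 * pair_level_loss(scores, labels)
            + w3 * list_level_loss(scores, labels))

# Example: scores produced by a compare-aggregate backbone for one
# question's candidate answers, with binary relevance labels.
scores = torch.tensor([2.1, -0.3, 0.7, -1.2])
labels = torch.tensor([1, 0, 1, 0])
print(hierarchical_loss(scores, labels))
```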