We show that the task of question answering (QA) can significantly benefit from transfer learning of models trained on a different large, fine-grained QA dataset. We achieve the state of the art on two well-studied QA datasets, WikiQA and SemEval-2016 (Task 3A), through a basic transfer learning technique from SQuAD. On WikiQA, our model outperforms the previous best model by more than 8%. Through quantitative results and visual analysis, we demonstrate that finer supervision provides better guidance for learning lexical and syntactic information than coarser supervision. We also show that a similar transfer learning procedure achieves the state of the art on an entailment task.