Recently, much progress in natural language processing has been driven by deep contextualized representations pretrained on large corpora. Typically, the fine-tuning of these pretrained models for a specific downstream task is based on single-view learning, which is inadequate because a sentence can be interpreted differently from different perspectives. Therefore, in this work, we propose a text-to-text multi-view learning framework that incorporates an additional view -- the text generation view -- into a typical single-view passage ranking model. Empirically, the proposed approach improves ranking performance over its single-view counterpart. Ablation studies are also reported in the paper.
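The abstract does not specify how the two views are combined; a common approach in multi-view (multi-task) learning is a weighted sum of a per-view loss. The sketch below is a hypothetical illustration, assuming a pairwise softmax loss for the ranking view and a token-level negative log-likelihood for the generation view, with a weighting coefficient `alpha` that is our own assumption, not taken from the paper.

```python
import math

def ranking_loss(score_relevant: float, score_irrelevant: float) -> float:
    """Ranking view: softmax cross-entropy over a (relevant, irrelevant) pair.

    Numerically stable log-sum-exp; the model should assign the relevant
    passage a higher score, driving this loss toward zero.
    """
    m = max(score_relevant, score_irrelevant)
    log_sum = m + math.log(
        math.exp(score_relevant - m) + math.exp(score_irrelevant - m)
    )
    return log_sum - score_relevant

def generation_loss(token_log_probs: list[float]) -> float:
    """Generation view: mean negative log-likelihood of generating a target
    text (e.g. the query) from the passage, given per-token log-probs."""
    return -sum(token_log_probs) / len(token_log_probs)

def multi_view_loss(score_relevant: float,
                    score_irrelevant: float,
                    token_log_probs: list[float],
                    alpha: float = 0.5) -> float:
    """Combine the two views with a weighting coefficient `alpha`
    (a hypothetical hyperparameter, not specified in the abstract)."""
    return (alpha * ranking_loss(score_relevant, score_irrelevant)
            + (1 - alpha) * generation_loss(token_log_probs))

# Toy usage: a confident ranker and a plausible generation likelihood.
loss = multi_view_loss(2.0, 0.0, [-0.5, -1.0, -0.2])
```

In this framing, the generation view acts as an auxiliary objective that regularizes the shared text-to-text encoder, which is one plausible reading of why a second view helps the ranking view.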